Process Management and Scheduling
• A user can run multiple programs simultaneously on a system (e.g., word processor, web
browser, email client).
• Even systems without multitasking (like embedded devices) require processes for internal
activities (e.g., memory management).
Definition of Processes:
• All user and system activities that need execution are collectively called processes.
Terminology:
• Historically, the term "job" was more common, as operating systems focused on job
processing.
• Despite the preference for "process," terms like job scheduling remain widely accepted due
to historical context.
Definition of a Process:
• A process is a program in execution. It includes the program code (text section) and the current activity (program counter and register contents), along with:
o Process Stack: Stores temporary data (e.g., function parameters, return addresses, local variables).
o Data Section: Contains global variables.
o Heap: Holds memory dynamically allocated at run time.
Creating a Process:
o Entering the program name in the command line (e.g., prog.exe or java Program).
Multiple Processes:
• Different processes can originate from the same program but function independently (e.g.,
multiple users running a mail program).
• Processes may share the text section but have separate data, heap, and stack sections.
• Processes can serve as environments for other code (e.g., the Java Virtual Machine (JVM)).
o The JVM runs as a process and executes Java programs within it.
o Example: The command java Program runs the JVM process, which interprets and
executes the Program.class file.
• Unlike traditional emulation (which targets a different hardware instruction set), the JVM interprets compiled Java bytecode.
State Variations:
• The names and specific states can vary across operating systems, but the concepts are
universal.
• Some operating systems use more granular distinctions for process states.
Processor Limitation:
• At any given time, only one process can be running on a single processor core; the rest must wait.
Process Control Block (PCB):
• A PCB (or Task Control Block) is a data structure maintained by the operating system to represent each process.
Contents of a PCB:
• Process State: Current state of the process (e.g., new, ready, running, waiting, terminated).
• Memory-Management Information: Includes data like base and limit registers, page tables,
or segment tables, depending on the memory system.
• Accounting Information: Tracks CPU usage, real-time usage, time limits, account and process
numbers, etc.
• I/O Status Information: Lists allocated I/O devices and open files.
• Together, these fields ensure that the operating system can manage and resume processes correctly.
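A minimal sketch of how a PCB might be declared in C; the field names and sizes are illustrative assumptions, not any real kernel's layout (Linux's equivalent, task_struct, is far larger):

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int pid;                        /* process identifier */
    proc_state_t state;             /* current process state */
    unsigned long program_counter;  /* address of the next instruction */
    unsigned long registers[16];    /* saved CPU registers */
    unsigned long base, limit;      /* memory-management information */
    unsigned long cpu_time_used;    /* accounting information */
    int open_files[32];             /* I/O status information */
    struct pcb *next;               /* link to the next PCB in a queue */
} pcb_t;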
Queue Representation:
Job Queue:
• Contains all processes in the system.
Ready Queue:
• Holds processes in main memory that are ready and waiting for CPU execution.
• Structure: Generally stored as a linked list; a ready-queue header points to the first and last PCBs, and each PCB contains a pointer to the next PCB in the queue.
Device Queues:
• Each I/O device has its own queue of processes waiting to use it.
• Processes wait here if the device is busy with another process's I/O request.
Process Flow:
o New processes enter the job queue and move to the ready queue when they are ready for execution.
o They transition to device queues if they request I/O and must wait for resource availability.
Process Initialization:
• A new process is placed in the ready queue and waits for CPU allocation (dispatch). Once the process is running, one of several events can occur:
o I/O Request: Process is moved to an I/O queue to wait for resource availability.
o Child Creation: Process creates a new child process and waits for its termination.
o Interrupt: Process is forcibly removed from the CPU and re-enters the ready queue.
State Transitions:
• In the first two cases, the process moves from the waiting state back to the ready state and
re-enters the ready queue.
Process Termination:
• When a process terminates, it is removed from all queues, and its PCB and resources are deallocated.
Scheduling Objectives:
• Multiprogramming: Aims to have a process running at all times to maximize CPU utilization.
• Time Sharing: Switches the CPU between processes frequently so that users can interact with each program while it's running.
Process Scheduling:
• The process scheduler selects an available process from the set of processes for execution
on the CPU.
o If more processes exist, the rest must wait until the CPU is available for rescheduling.
Schedulers:
• Short-Term Scheduler (CPU Scheduler):
o Selects from the ready queue and allocates the CPU to a process.
o Must be fast because it executes very frequently. If it takes 10 ms to decide which process runs for a 100-ms burst, then 10/(100 + 10) ≈ 9% of the CPU is wasted on scheduling itself.
• Long-Term Scheduler (Job Scheduler):
o Selects processes from the job pool and loads them into memory for execution.
o Typically invoked when a process leaves the system, to maintain a stable degree of multiprogramming.
o Ensures the process creation rate roughly equals the process departure rate, stabilizing system load.
o Has more time to make decisions because it executes infrequently.
• Process Mix:
o The long-term scheduler should select a good mix of I/O-bound processes (which spend most of their time doing I/O) and CPU-bound processes (which spend most of their time computing).
o If all processes are I/O-bound, the ready queue will almost always be empty, leaving the CPU underutilized.
o If all processes are CPU-bound, the I/O waiting queue may remain empty, underutilizing I/O devices.
• Optimal performance is achieved when both CPU and I/O devices are utilized effectively.
• In time-sharing systems like UNIX or Windows, the long-term scheduler may be absent or
minimal.
• All new processes are placed directly into memory for the short-term scheduler to manage.
Medium-Term Scheduler:
Swapping:
• The medium-term scheduler removes (swaps out) a process from memory to reduce the degree of multiprogramming.
• Later, the process is swapped back in, resuming execution where it left off.
• Benefits of Swapping:
o Frees up memory when it is overcommitted.
o Adjusts the process mix to better balance I/O-bound and CPU-bound processes.
Use Case:
• Swapping is especially useful in systems where memory resources are limited, or when
improving system performance requires changes to the active process set.
Interrupts:
• When an interrupt occurs, the system must save the current context of the process running on the CPU so that it can restore that context later.
• Saving and restoring the context ensures the process can resume correctly after being interrupted.
Context Switching:
• Context-switching time is pure overhead because no useful work is performed during the switch.
• Switch time depends on hardware factors such as:
o Memory speed.
o The number of registers that must be copied.
A context switch is the mechanism for storing and restoring the state (context) of a CPU in a process control block so that process execution can be resumed from the same point at a later time. When the scheduler switches the CPU from one process to another, the state of the currently running process is stored into its PCB. After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, and so on. At that point, the second process can start executing. Context switches are computationally expensive, since register and memory state must be saved and restored.
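A simulated context switch in C, to make the save/restore bookkeeping concrete. This is only a model under simplifying assumptions; a real switch manipulates actual CPU registers in architecture-specific assembly:

#include <stdio.h>
#include <string.h>

typedef struct {
    int pid;
    unsigned long pc;        /* saved program counter */
    unsigned long regs[8];   /* saved general-purpose registers */
} pcb;

static unsigned long cpu_pc;       /* simulated CPU state */
static unsigned long cpu_regs[8];

static void context_switch(pcb *prev, pcb *next) {
    prev->pc = cpu_pc;                               /* save old state into its PCB */
    memcpy(prev->regs, cpu_regs, sizeof(cpu_regs));
    cpu_pc = next->pc;                               /* restore new state from next PCB */
    memcpy(cpu_regs, next->regs, sizeof(cpu_regs));
}

int main(void) {
    pcb a = { .pid = 1, .pc = 0x1000 };
    pcb b = { .pid = 2, .pc = 0x2000 };
    cpu_pc = a.pc;                   /* process 1 is running */
    context_switch(&a, &b);          /* scheduler dispatches process 2 */
    printf("CPU now at pc=0x%lx (process %d)\n", cpu_pc, b.pid);
    return 0;
}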
Process Operations
Process Identification:
• Most operating systems (e.g., UNIX, Linux, Windows) assign a unique process identifier (pid)
to each process.
• The pid is typically an integer and is used to access a process’s attributes within the kernel.
• The init process (pid = 1) is the root parent process for all user processes.
• Init creates:
o kthreadd (child of init) creates kernel-related processes like khelper and pdflush.
o login manages user logins and can spawn processes like bash.
• A user logs in and initiates the bash shell (e.g., pid 8416).
• From bash, the user starts programs like ps and emacs, each becoming a child process of
bash.
• A child process requires resources like CPU time, memory, files, and I/O devices to function.
• Resource allocation:
o The child can obtain resources directly from the operating system, or
o It may be constrained to a subset of its parent's resources.
• Resource constraints prevent system overload due to excessive child process creation.
• Example Workflow:
o A parent creates a child process whose job is to display the contents of a file on a terminal.
o The child opens the file and outputs its content to the terminal or another device.
• Example:
o A parent passes open file descriptors (e.g., for image.jpg and terminal output).
o The child merely transfers data between the file and the output device, streamlining
the operation.
Execution Possibilities:
o The parent process continues running while its child executes independently.
o The parent halts execution and waits for one or more of its children to terminate.
Address-Space Possibilities:
• Child as a Duplicate:
o The child process has the same program and data as the parent.
• Child with a New Program:
o The child process loads an entirely new program into its address space.
fork() in UNIX:
o The fork() system call creates a child process with an exact copy of the parent's address space.
• Execution Continuation:
o Both the parent and the child begin execution at the next instruction after the fork()
call.
• Return Codes:
o Child Process: fork() returns 0 to the newly created process.
o Parent Process: fork() returns the child's pid (a positive integer).
• After a fork() system call, the new child process is created as an exact copy of the parent.
• Typically, the child process uses the exec() system call to replace its memory space with a
new program.
Process Behavior:
o The parent can use the wait() system call to pause until a child process terminates.
o The child may continue running the same program as the parent (without invoking exec()) or replace its image with a new program.
Typical UNIX Example:
• Child Process:
1. Executes the program /bin/ls (or another binary) via the execlp() system call.
• Parent Process:
1. Invokes wait() to suspend itself until the child completes.
Alternative Scenario:
• In this case, both parent and child execute the same program instructions concurrently.
• Each process has its own independent copy of data, even though they share the same code
base.
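A minimal, complete C program for the fork()/execlp()/wait() flow described above (the classic UNIX pattern; error handling is kept minimal):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();            /* create a child process */

    if (pid < 0) {                 /* fork failed */
        fprintf(stderr, "Fork Failed\n");
        return 1;
    }
    else if (pid == 0) {           /* child: fork() returned 0 */
        execlp("/bin/ls", "ls", (char *)NULL);  /* replace image with ls */
    }
    else {                         /* parent: fork() returned child's pid */
        wait(NULL);                /* wait for the child to terminate */
        printf("Child Complete\n");
    }
    return 0;
}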
CreateProcess() Overview:
• The CreateProcess() function in the Windows API is used to create a new process.
• Unlike fork(), which creates a child process inheriting the parent's address space:
o CreateProcess() requires loading a specific program into the new process's address
space during its creation.
Parameters of CreateProcess():
• CreateProcess() expects ten parameters, including pointers to a STARTUPINFO structure (properties such as window settings and handles for standard I/O) and a PROCESS_INFORMATION structure (which receives a handle and identifiers for the newly created process and its thread).
Example Usage:
• The CreateProcess() function can be used to create and launch applications, such as mspaint.exe.
• Steps (a code sketch follows the synchronization notes below):
1. Initialize Structures:
▪ Allocate the STARTUPINFO and PROCESS_INFORMATION structures and zero them (e.g., with ZeroMemory()).
2. Specify Application:
▪ Provide the application name or set it to NULL and specify the program in the command-line parameter.
3. Default Options:
▪ Use default values for parameters like environment inheritance and creation flags.
4. Launch Process:
▪ Call CreateProcess(); if it succeeds, the child runs concurrently with the parent.
Parent-Child Synchronization:
• The parent process can wait for the child to complete using:
o WaitForSingleObject(): Takes the child's process handle (pi.hProcess) and waits until
the child process terminates.
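A minimal Win32 sketch of the steps above, launching mspaint.exe and waiting for it; the path and error handling are illustrative:

#include <stdio.h>
#include <windows.h>

int main(void)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    char cmd[] = "C:\\WINDOWS\\system32\\mspaint.exe";

    ZeroMemory(&si, sizeof(si));   /* step 1: initialize structures */
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    /* steps 2-4: NULL application name, program given in the command
       line, default options, then launch */
    if (!CreateProcess(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "Create Process Failed\n");
        return -1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);  /* parent waits for child */
    printf("Child Complete\n");
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}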
Comparison:
• fork(): The child begins life as a duplicate of its parent's address space; a separate exec() call is needed to load a new program.
• CreateProcess(): A specific program is loaded into the child's address space as part of process creation.
• Example Flow: The parent calls CreateProcess() for mspaint.exe, then WaitForSingleObject() until the child exits.
Normal Termination:
A process terminates by executing its final statement and invoking the exit() system call.
The status value (often an integer) is returned to the parent process via the wait() system call.
Termination by Another Process:
A process can terminate another process using system calls (e.g., TerminateProcess() in Windows).
Typically, only a parent process can terminate its child processes to prevent unauthorized
termination by other users.
A parent may terminate a child for reasons such as:
Resource Overuse:
The child process exceeds its allocated resource limits (e.g., memory or CPU time).
Task Completion:
The task assigned to the child is no longer required.
Parent Exit:
In some systems, if a parent process exits, its child processes must also terminate.
When a parent creates a child process, the child's process ID (PID) is passed to the parent.
1. Cascading Termination:
o Some systems do not allow a child process to continue running if its parent process
terminates.
o When the parent terminates, the operating system automatically terminates all its
child processes.
2. Termination via exit():
o Processes terminate by calling exit(), which can provide an exit status to the parent process:
exit(1);
o Indirect termination can also occur via a return statement in the main() function.
3. Waiting for a Child (wait()):
o The parent can suspend itself until a child terminates and collect the child's exit status (see the sketch after this list):
pid_t pid;
int status;
pid = wait(&status);
o wait() also lets the parent determine which child process has terminated, since it returns that child's PID.
4. Zombie Processes:
o When a child process terminates, its resources are deallocated, but its entry in the
process table remains until the parent calls wait().
o A terminated process waiting for its parent to call wait() is referred to as a zombie
process.
5. Orphan Processes:
o If a parent process terminates without invoking wait(), its child processes become
orphans.
o In Linux and UNIX, orphan processes are adopted by the init process (PID 1).
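A small C sketch tying items 2-4 together: the parent reaps the child with wait(), so no zombie is left behind (the exit status here is illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        exit(1);                   /* child terminates with status 1 */
    }

    int status;
    pid_t done = wait(&status);    /* reap the child; removes its process-table entry */
    if (WIFEXITED(status))
        printf("child %d exited with status %d\n",
               (int)done, WEXITSTATUS(status));
    return 0;
}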
Interprocess Communication (IPC)
Definition:
Processes in an operating system can run independently or cooperatively.
• Independent Processes: Cannot affect or be affected by the other processes executing in the system (they share no data).
• Cooperating Processes: Can affect or be affected by other processes (they share data).
Reasons for Cooperation:
1. Information Sharing:
o Multiple users may need concurrent access to shared data (e.g., shared files).
2. Computation Speedup:
o A task can be divided into subtasks that run in parallel (this requires multiple processing cores).
3. Modularity:
o The system can be constructed in a modular fashion, dividing its functions into separate processes or threads.
4. Convenience:
o Users often perform multiple tasks simultaneously (e.g., editing documents, listening
to music, compiling code).
Cooperating processes require an IPC mechanism to exchange data and information. Two fundamental IPC models are:
1. Shared-Memory Model
• Definition:
o A region of memory shared by the cooperating processes is established; processes exchange information by reading and writing data to the shared region.
• Advantages:
o Efficiency: Once established, shared memory behaves like routine memory access, with no further kernel intervention required.
• Challenges:
o Cache Coherency Issues: On systems with multiple cores, inconsistencies can arise due to data being stored in multiple caches.
2. Message-Passing Model
• Definition:
o Communication takes place by means of messages exchanged between the cooperating processes, typically via kernel-mediated send and receive operations.
• Advantages:
o Suitability for Multicore Systems: Avoids cache coherency problems, making it more
efficient on modern multicore processors.
• Challenges:
o Performance: Slower than shared memory due to the overhead of system calls and
kernel intervention.
Shared-Memory Systems in Detail
• Shared Region:
o A shared-memory region resides in the address space of the process that creates it.
o Other processes attach this shared region to their address space to communicate.
• Process Responsibility:
o Communicating processes decide the data structure and location in shared memory; the operating system is not involved in the exchange itself.
o They must also ensure synchronization to prevent simultaneous writes, which can lead to data corruption.
Producer–Consumer Problem
Definition:
A common scenario in shared memory IPC, where one process (the producer) generates data that
another process (the consumer) uses.
Examples:
1. Compiler Pipeline:
o A compiler produces assembly code consumed by an assembler; the assembler produces object modules consumed by the loader.
2. Client–Server Paradigm:
o A web server produces (provides) HTML files and images that a client web browser consumes (reads).
Key Considerations:
1. Data Consistency:
o The consumer must not try to consume an item that has not yet been produced; producer and consumer must run in a synchronized manner.
2. Buffer Management:
o The producer writes data to a shared buffer, and the consumer reads it.
o A mechanism is needed to ensure the buffer doesn’t overflow (producer too fast) or
underflow (consumer too fast).
Key Concepts
Buffer Implementation:
A circular array of fixed size (BUFFER_SIZE) is used to hold items produced by the producer and consumed by the consumer.
in: Points to the next free position, where the producer places an item.
out: Points to the first full position, from which the consumer takes an item.
Buffer States:
The buffer is empty when in == out, and full when ((in + 1) % BUFFER_SIZE) == out.
Shared data (the fields of item are elided, as in the original code):
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
The unbounded buffer places no practical limit on the size of the buffer. The consumer may have to
wait for new items, but the producer can always produce new items. The bounded buffer assumes a
fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must
wait if the buffer is full.
Synchronization Techniques
1. Semaphores:
o Counting semaphores (empty and full) can track free and filled buffer slots, with a mutex protecting the buffer itself (see the sketch after the race-condition discussion below).
Producer Process
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* buffer full: do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
Consumer Process
item next_consumed;
while (true) {
    while (in == out)
        ; /* buffer empty: do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
Update out: After retrieving the item, the out pointer is incremented circularly, as the last line of the consumer shows.
Busy Waiting:
• Both the producer and consumer use busy-waiting to handle full and empty conditions,
which is inefficient.
Race Conditions:
• If the producer and consumer access the buffer simultaneously, inconsistencies may occur.
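A sketch of the semaphore solution mentioned above, assuming POSIX threads and semaphores (compile with -pthread on Linux). Two counting semaphores replace the busy-wait loops, and a mutex prevents the race on the buffer:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;

static sem_t empty_slots;   /* counts free slots, starts at BUFFER_SIZE */
static sem_t full_slots;    /* counts filled slots, starts at 0 */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&empty_slots);          /* block while the buffer is full */
        pthread_mutex_lock(&lock);
        buffer[in] = i;
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);           /* signal that an item is ready */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);           /* block while the buffer is empty */
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);          /* signal that a slot is free */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}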
Message passing allows processes to communicate and synchronize their actions without sharing
memory. It is essential in distributed systems, where processes may reside on different computers
connected via a network.
A message-passing facility provides at least two operations:
1. Send: send(message)
2. Receive: receive(message)
Communication Link
• If two processes want to communicate, a communication link must exist between them.
• The logical implementation of this link determines how messages are exchanged.
• Two naming schemes are possible, each detailed below:
o Direct Communication: processes name each other explicitly.
o Indirect Communication: messages are sent to and received from mailboxes (ports).
• Synchronous (Blocking) or Asynchronous (Nonblocking):
o send() and receive() may each be blocking or nonblocking; the combinations are detailed in the synchronization section below.
• Automatic Buffering:
o The system provides a message queue of indefinite (or system-chosen) length, so senders need not manage buffer space.
• Explicit Buffering:
o Developers define how much space is allocated for messages and how messages are
queued.
o Provides more control but requires careful management to avoid buffer overflows.
Naming is crucial for processes to identify one another during communication. Two primary
approaches are used: direct communication and indirect communication.
Direct Communication
In direct communication, processes explicitly name each other when sending or receiving messages. The primitives for this are:
1. send(P, message): Send a message to process P.
2. receive(Q, message): Receive a message from process Q.
Properties:
1. Automatic Link Establishment:
A link is established automatically between every pair of processes that want to communicate; the processes need only know each other's identity.
2. Pairwise Link:
Each link is associated with exactly two processes.
Symmetry in Addressing:
• Both the sender and the receiver must explicitly name the other process for communication.
• Example: send(P, message) and receive(Q, message), where both P and Q are named explicitly.
Asymmetry in Addressing:
• Only the sender names the recipient; the receiver does not need to know the sender's identity beforehand.
1. send(P, message): Sends a message to process P.
2. receive(id, message): Receives a message from any process. The variable id captures the sender's identity.
• Symmetric Communication: Used in tightly coupled systems where both processes are
aware of each other.
Indirect Communication
Indirect communication introduces a level of indirection by using mailboxes (or ports) for message
exchange. This approach improves modularity compared to direct communication schemes.
How It Works:
• Messages are sent to and received from mailboxes rather than exchanged directly between processes.
• Each mailbox is identified by a unique identifier (e.g., an integer in POSIX message queues).
• Two processes can communicate only if they share a mailbox.
1. Link Establishment:
A communication link is established between processes only if they share a common
mailbox.
2. Multi-process Communication:
A mailbox can facilitate communication between more than two processes.
3. Multiple Links:
Multiple mailboxes can exist between a pair of processes, enabling multiple communication
channels.
Mailbox Sharing:
When processes P1, P2, and P3 share a common mailbox A, and P1 sends a message to A while P2 and P3 each execute a receive() operation, the behavior depends on which design choice the system implements:
• Allow a link to be associated with at most two processes:
o Only two processes (e.g., P1 and P2) can communicate via a mailbox at a time.
o In this case, P2 would receive the message, and P3 would be excluded from accessing the mailbox.
• Allow at most one process at a time to execute receive():
o Whichever process (either P2 or P3) gains access first will receive the message. The other process must wait.
• Allow the system to select the receiver arbitrarily:
o Only one of P2 or P3 will receive the message, and the system may inform P1 about which process received it.
Ownership of Mailboxes:
o A mailbox may be owned either by a process or by the operating system.
o When a process owns a mailbox, the owner can only receive messages through it, while other user processes can only send messages to it.
o Ownership resolves ambiguity because only the owner can perform receive().
o If the owner process terminates, the mailbox is destroyed, and any further attempts
to send messages result in an error.
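A hedged sketch of mailbox-style (indirect) communication using a POSIX message queue, as mentioned above; the mailbox name "/mailbox_A" and message sizes are illustrative (on Linux, link with -lrt):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* create/open the shared mailbox */
    mqd_t mb = mq_open("/mailbox_A", O_CREAT | O_RDWR, 0644, &attr);
    if (mb == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(mb, "hello", 6, 0);    /* a sender (P1) posts to mailbox A */

    char msg[64];
    unsigned prio;
    mq_receive(mb, msg, sizeof(msg), &prio);  /* a receiver (P2 or P3) takes it */
    printf("received: %s\n", msg);

    mq_close(mb);
    mq_unlink("/mailbox_A");
    return 0;
}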
The synchronization between processes using message passing depends on whether the send() and
receive() primitives are blocking or nonblocking. These variations influence how processes interact
and whether they must wait for messages to be sent or received.
1. Blocking Send:
o The sender waits until the message is delivered to the receiver or mailbox.
o Ensures the sender knows when the message has been processed.
o Used in tightly coupled systems where the sender must confirm message delivery.
2. Nonblocking Send:
o The sender immediately resumes its operations after sending the message.
o Allows for greater concurrency, as the sender doesn’t wait for confirmation.
3. Blocking Receive:
o The receiver blocks until a message is available.
4. Nonblocking Receive:
o The receiver retrieves either a valid message or a null, without waiting.
Common Combinations:
1. Blocking Send + Blocking Receive (a rendezvous):
o Both sender and receiver wait for each other, creating a synchronized point of communication.
o Example: In a producer-consumer system, the producer waits for the consumer to receive the message, and the consumer waits for the producer to send it.
2. Blocking Send + Nonblocking Receive:
o The sender waits until the message is delivered, but the receiver does not wait for the message to be available.
o Useful when message delivery is critical but immediate processing by the receiver is not.
3. Nonblocking Send + Blocking Receive:
o The sender sends the message and continues execution, while the receiver waits for the message to arrive.
o Useful in systems where the sender can perform other tasks after sending data.
4. Nonblocking Send + Nonblocking Receive:
o Neither the sender nor the receiver waits. Messages are exchanged asynchronously.
Buffering
Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:
• Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any messages
waiting in it. In this case, the sender must block until the recipient receives the message.
• Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If the
queue is not full when a new message is sent, the message is placed in the queue (either the
message is copied or a pointer to the message is kept), and the sender can continue execution
without waiting. The link’s capacity is finite, however. If the link is full, the sender must block until
space is available in the queue.
• Unbounded capacity. The queue’s length is potentially infinite; thus, any number of messages can
wait in it. The sender never blocks. The zero-capacity case is sometimes referred to as a message
system with no buffering. The other cases are referred to as systems with automatic buffering.
Benefits of Multithreaded Programming
1. Responsiveness
• Multithreading allows an interactive application to continue running even if part of it is blocked or performing a lengthy operation.
• Example: A user interface remains active and responsive while a time-consuming task, such
as data processing or file loading, runs in a separate thread.
• Advantage: This prevents the application from appearing frozen, improving the user
experience.
2. Resource Sharing
Threads inherently share the resources of their parent process, such as memory and open files.
• Unlike processes, which require explicit mechanisms like shared memory or message
passing, threads can access the same address space seamlessly.
3. Economy
• Process creation involves allocating new memory and resources, which is computationally expensive.
• Because threads share the resources of their process, it is more economical to create and context-switch threads than processes.
4. Scalability
• Parallel Execution: Threads can run on separate cores, enabling true parallelism.
Multithreading Models
Thread support can exist at both the user level and the kernel level. How user threads interact with
kernel threads depends on the model implemented. The three main models are:
1. Many-to-One Model
The Many-to-One model maps many user-level threads to a single kernel thread.
Characteristics:
Thread management occurs in user space using a thread library, making it efficient.
Limitation: If one thread performs a blocking system call, the entire process blocks because only one
kernel thread is available.
Only one thread can access the kernel at a time, preventing parallel execution on multicore systems.
Example:
Green threads (a thread library available for Solaris and used in early versions of Java) implemented this model.
2. One-to-One Model
The One-to-One model maps each user thread to a kernel thread.
• Characteristics:
o If one thread makes a blocking system call, other threads can still run.
o Many systems limit the number of threads to avoid overwhelming system resources.
• Examples:
o Linux and Windows implement the one-to-one model.
3. Many-to-Many Model
The Many-to-Many model maps multiple user threads to a smaller or equal number of kernel
threads.
• Characteristics:
o High concurrency: Developers can create numerous user threads without being
constrained by kernel threads.
o If one thread performs a blocking system call, the kernel can schedule another
thread to maintain efficiency.
o Allows a user thread to be explicitly bound to a kernel thread for scenarios where
binding is required.
o Previously used in Solaris (before version 9); Solaris now employs the One-to-One
model.
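A minimal Pthreads sketch, since all three models are programmed against a thread library such as Pthreads; on Linux and Windows each created thread maps one-to-one onto a kernel thread (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int n = *(int *)arg;
    printf("worker thread running with arg %d\n", n);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int arg = 42;
    pthread_create(&tid, NULL, worker, &arg);  /* create a user thread */
    pthread_join(tid, NULL);                   /* wait for it to finish */
    return 0;
}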
CPU Scheduling
CPU scheduling is the process of deciding which process will use the CPU at a given time. Scheduling decisions may take place under four circumstances:
1. Running → Waiting State
o Happens during an I/O request or when a process invokes a function like wait() for a child process to terminate.
2. Running → Ready State
o Happens when an interrupt occurs and the running process is moved back to the ready queue.
3. Waiting → Ready State
o Happens when an I/O operation completes, and the process becomes ready to execute.
4. Process Termination
o When a process finishes execution, the CPU must select a new process from the ready queue, if available.
Nonpreemptive Scheduling
• Scheduling takes place only under circumstances 1 and 4: once the CPU is allocated to a process, the process keeps it until it releases the CPU by waiting or terminating.
Preemptive Scheduling
• Scheduling also takes place under circumstances 2 and 3: the CPU can be taken away from a running process.
Scheduling Criteria
1. CPU Utilization
• Definition: The percentage of time the CPU is busy doing useful work.
• Ideal Range: From about 40% on a lightly loaded system to about 90% on a heavily loaded one.
2. Throughput
• Definition: The number of processes completed per unit of time.
• Implication: Higher throughput means the system handles more processes efficiently.
3. Turnaround Time
• Definition: The total time taken for a process to complete, from submission to finish.
• Components: Time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting Time
• Definition: The sum of the periods a process spends waiting in the ready queue.
• Key Insight:
o CPU scheduling affects only waiting time, not the time a process spends executing or doing I/O.
5. Response Time
• Definition: The time from submission of a request to the system until the first response is
generated.
o Response time measures how quickly the system begins processing a request.
Scheduling Algorithms
Non-Preemptive Algorithms
FCFS (First-Come, First-Served)
The process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue.
When a process enters the ready queue, its PCB is linked onto the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue.
On the negative side, the average waiting time under the FCFS policy is often quite long.
SJF (Shortest-Job-First)
When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
The SJF scheduling algorithm is optimal, giving the minimum average waiting time for a given set of
processes.
The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
The processor would need to know in advance how much CPU time each process will take.
SJF is easy to implement in batch systems, where the required CPU time is known in advance, but impossible to implement exactly in interactive systems, where it is not.
Consequently, SJF scheduling is used frequently in long-term (job) scheduling; it cannot be implemented exactly at the level of short-term CPU scheduling.
One approach to this problem is to try to predict the length of the next CPU burst based on the lengths of previous ones. The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts (see the formula below).
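The standard exponential-average rule referred to above, with t(n) the measured length of the nth CPU burst, τ(n) the stored history, and 0 ≤ α ≤ 1:

τ(n+1) = α · t(n) + (1 − α) · τ(n)

With α = 1/2, recent and past history are weighted equally; α = 0 ignores recent bursts entirely, while α = 1 predicts that the next burst will equal the last one.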
The SJF algorithm can be either preemptive or nonpreemptive. The next CPU burst of a newly arrived process may be shorter than what is left of the currently executing process: a preemptive SJF algorithm will preempt the currently executing process and run the newly arrived one, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first (SRTF) scheduling.
Example (nonpreemptive SJF):
Process AT BT
P0 0 5
P1 1 3
P2 2 8
P3 3 6
Gantt chart:
| P0 | P1 | P3 | P2 |
0 5 8 14 22
(In all tables below, TAT = CT − AT and WT = TAT − BT.)
Process AT BT CT TAT WT
P0 0 5 5 5 0
P1 1 3 8 7 4
P3 3 6 14 11 5
P2 2 8 22 20 12
Preemptive SJF/SRTF
Gantt chart (same processes as above):
| P0 | P1 | P0 | P3 | P2 |
0 1 4 8 14 22
Process AT BT CT TAT WT
P0 0 5 8 8 3
P1 1 3 4 3 0
P2 2 8 22 20 12
P3 3 6 14 11 5
Priority
A priority is associated with each process, and the CPU is allocated to the process with the highest priority.
An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted)
next CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7 or 0 to 4,095; in the examples below, a smaller number denotes a higher priority.
A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging involves
gradually increasing the priority of processes that wait in the system for a long time.
Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready
queue, its priority is compared with the priority of the currently running process. A preemptive
priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is
higher than the priority of the currently running process. A nonpreemptive priority scheduling
algorithm will simply put the new process at the head of the ready queue.
Priority can be decided based on memory requirements, time requirements or any other resource
requirement.
Example (nonpreemptive priority):
Process AT BT Priority
P0 0 5 1
P1 1 3 2
P2 2 8 1
P3 3 6 3
Gantt chart:
| P0 | P2 | P1 | P3 |
0 5 13 16 22
Process AT BT Priority CT TAT WT
P0 0 5 1 5 5 0
P1 1 3 2 16 15 12
P2 2 8 1 13 11 3
P3 3 6 3 22 19 13
Example (preemptive priority):
Process AT BT Priority
P0 0 5 3
P1 1 3 2
P2 2 8 1
P3 3 6 3
Gantt chart:
| P0 | P1 | P2 | P1 | P0 | P3 |
0 1 2 10 12 16 22
Process AT BT Priority CT TAT WT
P0 0 5 3 16 16 - 0 = 16 16 - 5 = 11
P1 1 3 2 12 12 - 1 = 11 11 - 3 = 8
P2 2 8 1 10 10 - 2 = 8 8 - 8 = 0
P3 3 6 3 22 22 - 3 = 19 19 - 6 = 13
Round Robin (RR)
It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes.
A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length.
The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time quantum.
The scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.
Once the time quantum is up, a context switch is executed, and the process is put at the tail of the ready queue. The CPU scheduler then selects the next process in the ready queue.
Thus, once a process has executed for its time quantum, it is preempted, and another process executes for its own quantum.
If the time quantum is extremely large, the RR policy is the same as the FCFS policy. In contrast, if the time quantum is extremely small, the RR approach can result in a large number of context switches.
Example (RR, time quantum = 2):
Process AT BT
P0 0 5
P1 1 3
P2 2 8
P3 3 6
Gantt chart:
| P0 | P1 | P2 | P3 | P0 | P1 | P2 | P3 | P0 | P2 | P3 | P2 |
0 2 4 6 8 10 11 13 15 16 18 20 22
Process AT BT CT TAT WT
P0 0 5 16 16 11
P1 1 3 11 10 7
P2 2 8 22 20 12
P3 3 6 20 17 11
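A small C helper that reproduces the TAT and WT columns of the RR table above from the formulas TAT = CT − AT and WT = TAT − BT; completion times are read off the Gantt chart:

#include <stdio.h>

int main(void)
{
    int at[] = {0, 1, 2, 3};        /* arrival times */
    int bt[] = {5, 3, 8, 6};        /* burst times */
    int ct[] = {16, 11, 22, 20};    /* completion times from the Gantt chart */
    double tat_sum = 0, wt_sum = 0;

    for (int i = 0; i < 4; i++) {
        int tat = ct[i] - at[i];    /* turnaround time */
        int wt  = tat - bt[i];      /* waiting time */
        tat_sum += tat;
        wt_sum  += wt;
        printf("P%d: TAT=%d WT=%d\n", i, tat, wt);
    }
    printf("avg TAT=%.2f  avg WT=%.2f\n", tat_sum / 4, wt_sum / 4);
    return 0;
}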
Multilevel Queue Scheduling
Multiple-level queues are not an independent scheduling algorithm; they make use of other existing algorithms to group and schedule processes with common characteristics.
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
Consider an example of a multilevel queue scheduling algorithm with five queues, listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
No process in the batch queue, for example, could run unless the queues for system processes,
interactive processes, and interactive editing processes were all empty. If an interactive editing
process entered the ready queue while a batch process was running, the batch process would be
preempted.