
Process Management

Process Abstraction

Lecture 2
Overview
n Introduction to Process Management
n Process Abstraction:
q Memory Context
n Code & Data
n Function call
n Dynamically allocated memory
q Hardware Context
q OS Context
n Process State
q Process Control Block and Process Table

n OS interaction with Process


Recap: Efficient Hardware Utilization
n OS should provide efficient use of the hardware resources:
q By managing the programs executing on the hardware

n Observation:
q If there is only one program executing at any
point in time, how can we utilize hardware
resources effectively?

n Solution:
q Allow multiple programs to share the hardware
n e.g. Multiprogramming, Time-sharing
Introduction to Process Management
n For the OS to switch from running program A to program B, it must:
1. Store the information regarding the execution of program A
2. Replace program A's information with the information required to run program B

n Hence, we need:
q An abstraction to describe a running program
q aka a process
Key Topics
Process Abstraction

• Information describing an executing program

Process Scheduling

• Deciding which process gets to execute

Inter-Process Communication & Synchronization
• Passing information between processes

Alternative to Process

• Light-weight process aka Thread


Process Abstraction
n A Process (also called a Task or Job) is a dynamic abstraction for an executing program
q the information required to describe a running program

Memory Context: • Code • Data • ...
Hardware Context: • Registers • PC • ...
OS Context: • Process properties • Resources used • ...
Recap: C Sample Program and Assembly Code
int i = 0;

i = i + 20;
C Code Fragment

lw $1, 4096 //Assume address of i = 4096


addi $1, $0, 0 //register $1 = 0
sw $1, 4096 //i = 0

lw $2, 4096 //read i


addi $3, $2, 20 //$3 = $2 + 20
sw $3, 4096 //i = i + 20

Corresponding MIPS-like Assembly Code


Recap: Program Execution (Memory)
lw   $1, 4096
addi $1, $1, 0
sw   $1, 4096
lw   $2, 4096
addi $3, $2, 20
sw   $3, 4096

Memory layout (the entire memory space):
- Text (for instructions)
- Data (for global variables), e.g. variable i at address 4096
- "Free" Memory
Recap: Generic Computer Organization

Block diagram: on the CPU chip, a Fetch Unit (fed by the Instruction Cache) passes instructions to a Dispatch Unit, which sends them to the Functional Units (INT, FP, ……, MEM) working with the Registers and the Data Cache; Memory sits off the CPU chip.


Recap: Component Description
n Memory:
q Storage for instruction and data

n Cache:
q Duplicate part of the memory for faster access
q Usually split into instruction cache and data cache

n Fetch unit:
q Loads instruction from memory
q Location indicated by a special register: Program
Counter (PC)
Recap: Component Description (cont)
n Functional units:
q Carry out the instruction execution
q Dedicated to different instruction type

n Registers:
q Internal storage for the fastest access speed
q General Purpose Register (GPR):
n Accessible by user program (i.e. visible to compiler)
q Special Register:
n Program Counter (PC)
n etc
Recap: Basic Instruction Execution
n Instruction X is fetched
q Memory location indicated by Program Counter
n Instruction X dispatched to the corresponding
Functional Unit
q Read operands if applicable
n Usually from memory or GPR
q Result computed
q Write value if applicable
n Usually to memory or GPR
n Instruction X is completed
q PC updated for the next instruction
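
The fetch–dispatch–execute cycle above can be made concrete with a toy simulator. The following C sketch is purely illustrative (the opcodes, register file and "memory" are made up for this example, not a real ISA): it fetches the instruction at the PC, dispatches on the opcode, reads/writes registers or memory, and then updates the PC.

#include <stdio.h>

/* Hypothetical 4-opcode machine, loosely MIPS-like; for illustration only. */
enum opcode { ADDI, LW, SW, HALT };
struct instr { enum opcode op; int rd, rs, imm; };

int main(void)
{
    int reg[4] = {0};      /* general purpose registers $0..$3 ($0 stays 0) */
    int mem[16] = {0};     /* tiny data memory; "variable i" lives at address 4 */

    /* program: $1 = 0; mem[4] = $1; $2 = mem[4]; $3 = $2 + 20; mem[4] = $3 */
    struct instr text[] = {
        {ADDI, 1, 0, 0}, {SW, 1, 0, 4}, {LW, 2, 0, 4},
        {ADDI, 3, 2, 20}, {SW, 3, 0, 4}, {HALT, 0, 0, 0},
    };

    int pc = 0;                                 /* Program Counter */
    for (;;) {
        struct instr i = text[pc];              /* fetch instruction at PC */
        if (i.op == HALT) break;
        switch (i.op) {                         /* dispatch to a "functional unit" */
        case ADDI: reg[i.rd] = reg[i.rs] + i.imm;      break;  /* GPR -> GPR */
        case LW:   reg[i.rd] = mem[reg[i.rs] + i.imm]; break;  /* memory -> GPR */
        case SW:   mem[reg[i.rs] + i.imm] = reg[i.rd]; break;  /* GPR -> memory */
        default:   break;
        }
        reg[0] = 0;
        pc = pc + 1;                            /* PC updated for the next instruction */
    }
    printf("mem[4] = %d\n", mem[4]);            /* prints 20, i.e. i = i + 20 */
    return 0;
}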
Recap: What you should know
n An executable (binary) consists of two major
components:
q Instructions and Data

n When a program is under execution, there is more information:
q Memory context:
n Text and Data, …
q Hardware context:
n General purpose registers, Program Counter, …

n Actually, there are other types of memory usage during program execution
q Coming up next
Memory Context
Function Call

What if f() calls u() calls n()?


Function Call: Challenges

int i = 0;
i = i + 20;

C Code Fragment

vs.

int g(int i, int j)
{
    int a;
    a = i + j;
    return a;
}

C Code with Function

n Consider:
q How do we allocate memory space for variables i, j and a?
n Can we just make use of the "data" memory space?
q What are the key issues?
Function Call: Control Flow and Data

n f() calls g():
q f() is the caller
q g() is the callee

n Important Steps:
1. Setup the parameters
2. Transfer control to callee
3. Setup local variables
4. Store result if applicable
5. Return to caller

void f(int a, int b)
{
    int c;
    c = g(a, b);    // steps 1 and 2 happen here; step 5 returns here
    ....
}

int g(int i, int j)
{
    int a;          // step 3
    ......
    return ...;     // step 4
}
Function Call: Control Flow and Data
n Control Flow Issues:
q Need to jump to the function body
q Need to resume when the function call is done
⇒ Minimally, need to store the PC of the caller

n Data Storage Issues:
q Need to pass parameters to the function
q Need to capture the return result
q May have local variable declarations

⇒ Need a new region of memory that is dynamically used by function invocations
Introducing Stack Memory
n Stack Memory Region:
q The new memory region to store information about function invocations

n Information of a function invocation is described by a stack frame

n A stack frame contains:
q Return address of the caller
q Arguments (Parameters) for the function
q Storage for local variables
q Other information… (more later)
Stack Pointer
n The top of the stack region (first unused location) is logically indicated by a Stack Pointer:

q Most CPUs have a specialized register for this purpose

q A stack frame is added on top when a function is invoked
n Stack "grows"

q A stack frame is removed from the top when a function call ends
n Stack "shrinks"
Illustration: Stack Memory

Memory layout (the entire memory space):
- Text (for instructions)
- Data (for global variables)
- "Free" Memory
- Stack (for function invocations); the Stack Pointer marks the top of the stack, and the stack grows towards the free memory

n The memory layout on some systems is flipped, i.e. stack on top, text on the bottom
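
The growth direction can be observed with a small experiment. The following C sketch is illustrative only (actual addresses and the growth direction are platform dependent; on many common systems the stack grows towards lower addresses): each nested call pushes a new frame, so the address of a local variable changes with call depth.

#include <stdio.h>

/* Print the address of a local variable at each call depth.
 * Every invocation gets its own stack frame, so the addresses differ;
 * their ordering shows the stack growth direction on this platform. */
static void probe(int depth)
{
    int local;                      /* lives in this invocation's stack frame */
    printf("depth %d: &local = %p\n", depth, (void *)&local);
    if (depth < 3)
        probe(depth + 1);           /* add another frame on top of the stack */
}

int main(void)
{
    probe(0);
    return 0;
}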
Illustration: Stack Memory Usage (1 / 5)

void f() { ... g(); ... }
void g() { h(); ... }
void h() { ... }

At this point: execution is in f(), just before the call to g().
Stack frames (top to bottom): f()

Illustration: Stack Memory Usage (2 / 5)

At this point: execution is in g(), just before the call to h().
Stack frames (top to bottom): g(), f()

Illustration: Stack Memory Usage (3 / 5)

At this point: execution is in h().
Stack frames (top to bottom): h(), g(), f()

Illustration: Stack Memory Usage (4 / 5)

At this point: h() has returned; execution continues in g().
Stack frames (top to bottom): g(), f()

Illustration: Stack Memory Usage (5 / 5)

At this point: g() has returned; execution continues in f().
Stack frames (top to bottom): f()
Illustration: Stack Frame v1.0

After function f() calls g(), the topmost stack frame belongs to g().

Stack frame for g(), from the Stack Pointer downwards: Local Variables, Parameters, Return PC, Other info. Below it sits the stack frame for f(); the free memory lies beyond the Stack Pointer.
Function Call Convention
n Different ways to set up a stack frame:
q Known as function call conventions
q Main differences:
n What information is stored in the stack frame or in registers?
n Which portion of the stack frame is prepared by the caller / callee?
n Which portion of the stack frame is cleared by the caller / callee?
n Does the caller or the callee adjust the stack pointer?

n No universal way
q Hardware and programming language dependent

n An example scheme is described next

Stack Frame Setup

Stack frame layout (top to bottom): Local Variables, Parameters, Saved SP, Return PC

n Preparing to make a function call:

q Caller: Pass parameters with registers and/or stack
q Caller: Save Return PC on stack
q Transfer control from Caller to Callee
q Callee: Save the old Stack Pointer (SP)
q Callee: Allocate space for local variables of callee on stack
q Callee: Adjust SP to point to the new stack top
Illustration: Calling function g()

void f(int a, int b)
{
    int c;

    a = 123;
    b = 456;
    c = g(a, b);
    ....
}

int g(int i, int j)
{
    int a;
    a = i + j;
    return a * 2;
}

Stack after f() calls g(), from the new SP downwards:
- local var "a" (of g())
- Parameters: 123, 456
- Saved SP
- Return PC (the old SP pointed here)
- Stack Frame for f()
Stack Frame Teardown

Stack frame layout (top to bottom): Return Result, Local Variables, Parameters, Saved SP, Return PC

n On returning from a function call:

q Callee: Place return result on stack (if applicable)
q Callee: Restore saved Stack Pointer
q Transfer control back to caller using saved PC
q Caller: Utilize return result (if applicable)
q Caller: Continues execution in caller
Illustration: Function g() finishes

void f(int a, int b)
{
    int c;

    a = 123;
    b = 456;
    c = g(a, b);      // execution resumes here
    ....
}

int g(int i, int j)
{
    int a;
    a = i + j;
    return a * 2;
}

Stack when g() returns:
- return result: 1158
- local var "a": 579
- Parameters: 123, 456
- Saved SP
- Return PC (the SP is restored to point here)
- Stack Frame for f()
Other Information in Stack Frame
n We have described the basic idea of:
q Stack frames
q Calling convention: setup and teardown

n Let us look at some common additional information kept in the stack frame:
q Frame Pointer
q Saved Registers
Frame Pointer
n To facilitate access to the various stack frame items:
q The Stack Pointer is hard to use as it can change
⇒ Some processors provide a dedicated register, the Frame Pointer

n The frame pointer points to a fixed location in a stack frame
q Other items are accessed as a displacement from the frame pointer

n The usage of the FP is platform dependent
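
One way to peek at the frame from C is a compiler builtin. The sketch below assumes GCC or Clang (which provide __builtin_frame_address()) and a platform that actually maintains a frame pointer; it prints the frame address and the address of a local at a few call depths, showing that each invocation has its own frame and that the local sits at some displacement from it.

#include <stdio.h>
#include <stdint.h>

/* Assumes GCC/Clang: __builtin_frame_address(0) returns the frame address
 * of the current function invocation (exact meaning is platform dependent). */
static void g(int depth)
{
    int local = depth;
    void *fp = __builtin_frame_address(0);
    printf("depth %d: frame = %p, &local = %p, displacement = %lld\n",
           depth, fp, (void *)&local,
           (long long)((intptr_t)&local - (intptr_t)fp));
    if (depth < 2)
        g(depth + 1);
}

int main(void)
{
    g(0);
    return 0;
}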


Saved Registers
n The number of general purpose registers (GPRs) on most processors is very limited:
q E.g. MIPS has 32 GPRs, x86 has 16 GPRs

n When GPRs are exhausted:
q Use memory to temporarily hold the GPR value
q That GPR can then be reused for another purpose
q The GPR value can be restored afterwards
q Known as register spilling

n Similarly, a function can spill the registers it intends to use before the function starts
q Restore those registers at the end of the function
Illustration: Stack Frame v2.0

After function f() calls g(), the topmost stack frame belongs to g().

Stack frame for g(), from the Stack Pointer downwards: Local Variables, Parameters, Saved Registers, Saved SP, Saved FP, Return PC; the Frame Pointer points to a fixed location within this frame. Below it sits the stack frame for f(); the free memory lies beyond the Stack Pointer.
Stack Frame Setup / Teardown [Updated]
n On executing function call:
q Caller: Pass arguments with registers and/or stack
q Caller: Save Return PC on stack
q Transfer control from caller to callee
q Callee: Save registers used by callee. Save old FP, SP
q Callee: Allocate space for local variables of callee on stack
q Callee: Adjust SP to point to new stack top

n On returning from function call:


q Callee: Restore saved registers, FP, SP
q Transfer control from callee to caller using saved PC
q Caller: Continues execution in caller

n Remember, just an example!


Function Call Summary
n In this part, we learned:
q Another portion of the memory space is used as Stack Memory

q Stack Memory stores information about executing functions using Stack Frames
n Typical information stored in a stack frame
n A typical scheme for setting up and tearing down a stack frame

q The usage of the Stack Pointer and Frame Pointer

Memory Context
Dynamically Allocated Memory

Hmm… I need more memory


Dynamically Allocated Memory
n Most programming languages allow
dynamically allocated memory:
q i.e. acquire memory space during execution time

n Examples:
q In C, the malloc() function call
q In C++, the new keyword
q In Java, the new keyword

n Question:
q Can we use the existing "Data" or "Stack" memory
regions?
Dynamically Allocated Memory
n Observations:
q These memory blocks have different behaviors:
1. Allocated only at runtime, i.e. the size is not known at compilation time ⇒ cannot place in the Data region
2. No definite deallocation timing, e.g. can be explicitly freed by the programmer in C/C++, or implicitly freed by the garbage collector in Java ⇒ cannot place in the Stack region

n Solution:
q Set up a separate heap memory region (a sketch of heap usage follows)
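
A minimal C sketch of heap usage: the allocation size is decided only at run time, and the block outlives the function that allocated it until it is explicitly freed; neither property fits the Data or Stack regions.

#include <stdio.h>
#include <stdlib.h>

/* Allocate an array whose size is only known at run time.
 * The block lives on the heap until free() is called explicitly. */
static int *make_array(int n)
{
    int *a = malloc(n * sizeof *a);    /* size decided at run time */
    if (a == NULL)
        return NULL;
    for (int i = 0; i < n; i++)
        a[i] = i;
    return a;                          /* survives this function's stack frame */
}

int main(void)
{
    int n = 10;                        /* imagine this value came from user input */
    int *a = make_array(n);
    if (a != NULL) {
        printf("a[%d] = %d\n", n - 1, a[n - 1]);
        free(a);                       /* explicit deallocation by the programmer */
    }
    return 0;
}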
Illustration for Heap Memory

Memory layout (the entire memory space):
- Text (for instructions)
- Data (for global variables)
- Heap (for dynamic allocation)
- Stack (for function invocations)
Managing Heap Memory
n Heap memory is a lot trickier to manage due to its nature:
q Variable size
q Variable allocation / deallocation timing
n You can easily construct a scenario where heap memory is allocated / deallocated in such a way that it creates "holes" in the memory (see the sketch below)
q A free memory block squeezed in between occupied memory blocks
n We will learn more in the memory management topic (much) later in the course
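
The "holes" scenario is easy to reproduce. The C sketch below is illustrative only (actual addresses and allocator behavior are platform dependent): it allocates three blocks and frees the middle one, leaving a free gap squeezed between two occupied blocks.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Three heap blocks of different sizes. */
    void *a = malloc(100);
    void *b = malloc(200);
    void *c = malloc(300);
    printf("a = %p\nb = %p\nc = %p\n", a, b, c);

    /* Freeing the middle block leaves a "hole": a free region
     * squeezed in between the still-occupied blocks a and c. */
    free(b);

    /* A later, larger request cannot fit into that hole. */
    void *d = malloc(400);
    printf("d = %p (likely placed elsewhere)\n", d);

    free(a);
    free(c);
    free(d);
    return 0;
}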
Checkpoint: Contexts updated
n Information describing a process:

q Memory context:
n Text, Data, Stack and Heap

q Hardware context:
n General purpose registers, Program Counter, Stack
pointer, Stack frame pointer, ….
OS Context
Process Id & Process State

Your ID? Give me a status report!


Process Identification
n To distinguish processes from each other
q Common approach is to use process ID (PID)
n Just a number
q Unique among processes

n There are a couple of OS dependent issues:


q Are PIDs reused?
q Does it limit the maximum no. of processes?
q Are there reserved PIDs?
Introducing Process State
n With the multi-process scenario:
q A process can be:
n Running, OR
n Not running, e.g. another process is running

n A process can be ready to run


q But not actually executing
q E.g. waiting for its turn to use the CPU

n Hence, each process should have a process


state:
q As an indication of the execution status
(Simple) Process Model State Diagram

Two states connected by "Context switch" transitions:
- Ready State: process is waiting to run
- Running State: process is executing

n The set of states and transitions is known as the process model
q It describes the behaviors of a process
Generic 5-State Process Model

States: New, Ready, Running, Blocked, Terminated
Transitions:
- create: (nil) → New
- admit: New → Ready
- switch (scheduled): Ready → Running
- switch (release CPU): Running → Ready
- event wait: Running → Blocked
- event occurs: Blocked → Ready
- exit: Running → Terminated

Notes: these are generic process states; details vary in actual OSes
Process States in the 5-State Model
n New:
q New process created
q May still be under initialization ⇒ not yet ready
n Ready:
q Process is waiting to run
n Running:
q Process is being executed on the CPU
n Blocked:
q Process is waiting (sleeping) for an event
q Cannot execute until the event is available
n Terminated:
q Process has finished execution, may require OS cleanup
Process State Transitions in the 5-State Model
n Create (nil → New):
q New process is created

n Admit (New → Ready):


q Process ready to be scheduled for running

n Switch (Ready → Running):


q Process selected to run

n Switch (Running → Ready):


q Process gives up CPU voluntarily or preempted
by scheduler
Process State Transitions
n Event wait (Running → Blocked):
q Process requests an event/resource/service which is not available / in progress
q Example events:
n System call, waiting for I/O, (more later)

n Event occurs (Blocked → Ready):
q The event occurs ⇒ the process can continue
Global View of Process States
n Given n processes:
q With 1 CPU:
n ≤ 1 process in the running state
n Conceptually 1 transition at a time
q With m CPUs:
n ≤ m processes in the running state
n Possibly parallel transitions

n Different processes may be in different states
q Each process may be in a different part of its state diagram
Queuing Model of the 5-State Transitions

- admit → [ready queue] → switch by scheduler → Running Process → exit
- Running Process → release cpu → back to the ready queue
- Running Process → event wait → [blocked queue] → event occurs → back to the ready queue

Notes:
• More than 1 process can be in the ready and blocked queues
• May have separate event queues
• The queuing model gives a global view of the processes, i.e. how the OS views them
Checkpoint: Contexts updated
n When a program is under execution, there
are more information:
q Memory context:
n Text and Data, Stack and Heap

q Hardware context:
n General purpose registers, Program Counter, Stack
pointer, Stack frame pointer, …

q OS context:
n Process ID, Process State, …
Process Table &
Process Control Block

Putting it together
Process Control Block & Table
n The entire execution context for a process
q Traditionally called Process Control Block (PCB) or
Process Table Entry

n Kernel maintains PCB for all processes


q Conceptually stored as one table representing all
processes

Interesting Issues:
n Scalability
q How many concurrent processes can you have?
n Efficiency
q Should provide efficient access with minimum space
wastage
Illustration of a Process Table

The Process Table is a collection of Process Control Blocks (PCB1, PCB2, PCB3, ...), one per process. Each PCB records, among other things:
- Registers: PC, FP, SP, GPRs, ...
- Memory region info
- PID
- Process state

Each process also has its own memory space with Text, Data, Heap and Stack regions.
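
As a rough picture only, a PCB can be sketched as a C structure. The field names below are made up for illustration and are not from any real kernel (a real PCB, e.g. Linux's task_struct, holds far more).

#include <stdint.h>

/* Hypothetical, heavily simplified Process Control Block. */
enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

struct memory_region { uintptr_t start, end; };   /* one region: text, data, heap or stack */

struct pcb {
    int             pid;           /* OS context: process identification */
    enum proc_state state;         /* OS context: process state */

    uintptr_t       pc, sp, fp;    /* hardware context: PC, stack and frame pointers */
    uintptr_t       gpr[32];       /* hardware context: general purpose registers */

    struct memory_region text, data, heap, stack;  /* memory context */

    struct pcb     *next;          /* e.g. link in the ready or a blocked queue */
};

/* Conceptually, the process table is a collection of PCBs. */
struct pcb process_table[128];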
Process interaction with OS
System Calls

Can you please do this for me?


System Calls
n Application Program Interface (API) to the OS
q Provides a way of calling facilities/services in the kernel
q NOT the same as a normal function call
n Have to change from user mode to kernel mode

n Different OSes have different APIs:
q Unix variants:
n Most follow the POSIX standards
n Small number of calls: ~100
q Windows family:
n Uses the Win API across different Windows versions
n New versions of Windows usually add more calls
n Huge number of calls: ~1000
Unix System Calls in C/C++ programs
n In a C/C++ program, a system call can be invoked almost directly
q The majority of system calls have a library version with the same name and the same parameters
n The library version acts as a function wrapper

q Other than that, a few library functions present a more user-friendly version to the programmer
n E.g. fewer parameters, more flexible parameter values, etc.
n The library version acts as a function adapter
Example

#include <unistd.h>
#include <stdio.h>

int main()
{
    int pid;

    /* get Process ID */
    pid = getpid();    /* library call that has the same name as a system call */

    printf("process id = %d\n", pid);    /* library call that makes a system call */

    return 0;
}

n System calls invoked in this example:
q getpid()
q write() – made by the printf() library call

General System Call Mechanism
1. User program invokes the library call
n Using the normal function call mechanism as
discussed
2. Library call (usually in assembly code)
places the system call number in a
designated location
n E.g. Register
3. Library call executes a special instruction to
switch from user mode to kernel mode
n That instruction is commonly known as TRAP
General System Call Mechanism (cont)
4. Now in kernel mode, the appropriate system call handler is determined:
n Using the system call number as an index
n This step is usually handled by a dispatcher
5. The system call handler is executed:
n Carries out the actual request
6. The system call handler ends:
n Control returns to the library call
n Switch from kernel mode back to user mode
7. The library call returns to the user program:
n Via the normal function return mechanism
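
The same-name library wrappers hide steps 2 and 3. As a sketch under stated assumptions (a Linux system with glibc, which exposes a generic syscall() wrapper and system call numbers such as SYS_getpid in <sys/syscall.h>), a system call can also be made through that generic wrapper:

#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* syscall() places SYS_getpid in the designated location and executes
     * the TRAP; the kernel dispatcher then runs the getpid handler. */
    long pid = syscall(SYS_getpid);
    printf("process id = %ld\n", pid);
    return 0;
}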
Illustration: System Call Mechanism

User Mode:

int main()
{
    ...
    getpid();              // 1. invoke the library call; 7. normal function return
    ...
}

int getpid()
{
    // 2. Setup Sys. Call No
    // 3. TRAP
    ...
    return ...;            // 6. control returns here in user mode
}

Kernel Mode:

Dispatcher()
{
    Call Handlers[SysNum];   // 4. use the system call number as index
}

SystemCallHandlerXXX()
{
    // 5. Perform the task
}
Process interaction with OS
Exception and Interrupt

Oops!
Exception
n Executing a machine level instruction can cause an exception
n For example:
q Arithmetic errors
n Overflow, Underflow, Division by Zero
q Memory accessing errors
n Illegal memory address, misaligned memory access
q Etc.
n An exception is synchronous
q It occurs due to program execution
n Effect of an exception:
q Have to execute an exception handler
q Similar to a forced function call
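
At the hardware level the exception handler runs in the kernel; many Unix-like systems then reflect some exceptions back to the offending process as signals. The C sketch below assumes such a system (POSIX signals, SIGFPE delivered on integer division by zero) and is illustrative only; the "forced function call" nature of the handler is the point.

#include <signal.h>
#include <unistd.h>

/* Runs like a forced function call when the arithmetic exception occurs.
 * write() and _exit() are async-signal-safe. */
static void fpe_handler(int sig)
{
    (void)sig;
    write(STDOUT_FILENO, "caught SIGFPE (arithmetic exception)\n", 37);
    _exit(1);   /* resuming the faulting instruction would fault again */
}

int main(void)
{
    signal(SIGFPE, fpe_handler);    /* register the handler */

    volatile int zero = 0;
    volatile int x = 1 / zero;      /* division by zero raises the exception */
    (void)x;
    return 0;
}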
Interrupt
n External events can interrupt the execution of a program
n Usually hardware related, e.g.:
q Timer, mouse movement, keyboard pressed, etc.
n An interrupt is asynchronous
q Events occur independently of program execution
n Effect of an interrupt:
q Program execution is suspended
q Have to execute an interrupt handler
Exception/Interrupt Handler: Illustration

void f()
{
    ...
    Statement S1;     // 1. exception/interrupt occurs here
    ...
}

void handler()
{
    // 1. Save Register/CPU state
    // 2. Perform the handler routine
    // 3. Restore Register/CPU state
    // 4. Return from interrupt
}

1. Exception/Interrupt occurs:
n Control transfers to a handler routine automatically

2. Return from the handler routine:
n Program execution resumes
n May behave as if nothing happened
Summary
n Using the process as an abstraction of a running program:
q Necessary information (environment) of execution
q Memory, Hardware and OS contexts

n Process from the OS perspective:
q PCB and process table

n OS ↔ Process interactions:
q System calls
q Exceptions / Interrupts
References
n Modern Operating System (3rd Edition)
q Section 2.1

n Operating System Concepts (8th Edition)


q Section 3.1
