
Real-Time Operating Systems

Summary
❑ Introduction
❑ Basic concepts
❑ RT Scheduling
❑ Aperiodic task scheduling
❑ Periodic task scheduling
❑ Embedded RTOS

❑ Source:
G. Buttazzo, "Hard Real-Time Computing Systems – Predictable Scheduling
Algorithms and Applications", Kluwer Academic Publishers.

1
Introduction
❑ Real-time system:
"A real-time system is a computer system in which the correctness of the
system behavior depends not only on the logical results of the computation,
but also on the physical instant at which these results are produced."

"A real-time system is a system that is required to react to stimuli from the
environment (including the passage of physical time) within time intervals
dictated by the environment."

Introduction (2)
❑ Examples of real-time systems:
❑ plant control
❑ control of production processes / industrial automation
❑ railway switching systems
❑ automotive applications
❑ flight control systems
❑ environmental acquisition and monitoring
❑ telecommunication systems
❑ robotics
❑ military systems
❑ space missions
❑ household appliances
❑ virtual / augmented reality

2
Introduction (3)
❑ Real-time
❑ Time: the main difference to other classes of computation
❑ Real: the reaction to external events must occur during their evolution
❑ System time (internal time) has to be measured with the same time scale
as the controlled environment (external time)
❑ Real time does not mean fast, but predictable
❑ Concept of deadline:
❑ the time after which a computation is not just late, but also wrong

Hard vs. soft real time

❑ Hard RT task:
❑ missing its deadline may cause catastrophic consequences on the
environment under control
❑ Soft RT task:
❑ meeting its deadline is desirable (e.g. for performance reasons), but
missing it does not cause serious damage
❑ An RTOS that is able to handle hard RT tasks is called a
hard real-time OS

3
Hard vs. soft real time (2)
❑ Typical hard real-time activities:
❑ sensory data acquisition
❑ detection of critical conditions
❑ actuator servoing
❑ low-level control of critical system components

❑ Areas of application:
❑ Automotive:
• power-train control, air-bag control, steer by wire, brake by wire
❑ Aircraft:
• engine control, aerodynamic control

Hard vs. soft real time (3)

❑ Typical soft real-time activities:
❑ command interpreter of the user interface
❑ keyboard handling
❑ displaying messages on screen
❑ transmitting streaming data

❑ Areas of application:
❑ Communication systems:
• voice over IP, cellular telephony
• user interaction
• comfort electronics (body electronics in cars)

4
RTOS

Real-time Operating Systems

❑ RT systems require specific support from the OS
❑ Conventional OS kernels are inadequate w.r.t. RT requirements:
❑ Multitasking/scheduling
• provided through system calls
• does not take time into account (introduces unbounded delays)
❑ Interrupt management
• achieved by setting interrupt priority higher than process priority
• increases system reactivity, but may cause unbounded delays on
process execution, even due to unimportant interrupts
❑ Basic IPC and synchronization primitives
• may cause priority inversion (a high-priority task blocked by a
low-priority task)
❑ No concept of an RT clock
10

5
Real-time Operating Systems (2)
❑ Desirable features of an RTOS:
❑ Timeliness
• the OS has to provide mechanisms for:
• time management
• handling tasks with explicit time constraints
❑ Predictability
❑ Fault tolerance
❑ Design for peak load
❑ Maintainability

11

RTOS evaluation metrics

❑ Interrupt latency (Til):
❑ the time from the start of the physical interrupt to the execution of the
first instruction of the interrupt service routine
❑ Scheduling latency (interrupt dispatch latency) (Tsl):
❑ the time from the execution of the last instruction of the interrupt handler
to the first instruction of the task made ready by that interrupt
❑ Context-switch time (Tcs):
❑ the time from the execution of the last instruction of one user-level
process to the first instruction of the next user-level process
❑ Maximum system call time:
❑ should be predictable and independent of the number of objects in the
system

12

6
Real-time Operating Systems (3)
❑ Timeliness
❑ achieved through proper scheduling algorithms
• the core of an RTOS!
❑ Predictability
❑ affected by several issues:
• I/O & interrupts
• synchronization & IPC
• architecture
• memory management
• applications
• scheduling!

13

Achieving predictability: interrupts

❑ One of the biggest problems for predictability
❑ Typical device driver:
<enable device interrupt>
<wait for interrupt>
<transfer data>
❑ In most OS:
• interrupts are served with respect to a fixed priority scheme
• interrupts have higher priorities than processes
→ problem in real-time systems:
• processes may be of higher importance than the I/O operation!

14

7
Achieving predictability: interrupts
❑ First solution: disable all interrupts except timer interrupts
❑ all peripheral devices have to be handled by application tasks
❑ data transfer by polling
❑ great flexibility, data transfers can be estimated precisely
❑ no change of kernel needed when adding devices

❑ Problems:
❑ degradation of processor performance (busy wait)
❑ no encapsulation of low-level details

15

Achieving predictability: interrupts

❑ Second solution: disable all interrupts except timer interrupts, and handle
devices by special, timer-activated kernel routines
❑ unbounded delays due to interrupt drivers eliminated
❑ periodic device routines can be estimated in advance
❑ hardware details encapsulated in dedicated routines

❑ Problems:
❑ degradation of processor performance (still busy waiting within the I/O
routines)
❑ more inter-process communication than in the first solution
❑ kernel has to be modified when adding devices

16

8
Achieving predictability: interrupts
❑ Third solution: enable external interrupts and reduce the drivers to the
least possible size
❑ the driver only activates a proper task to take care of the device
❑ the task executes under direct control of the OS, just like any other task
❑ control tasks can then have higher priority than device tasks

17

Achieving predictability: interrupts

❑ Advantages:
❑ busy wait eliminated
❑ unbounded delays due to unexpected device handling dramatically
reduced (not eliminated!)
❑ the remaining unbounded overhead may be estimated relatively
precisely
❑ State of the art!

18
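As an illustration of this third approach, the sketch below shows an interrupt handler that only wakes a dedicated device task, which then performs the data transfer under normal scheduler control. This is a minimal sketch using FreeRTOS primitives; device_read_data() and DEVICE_TASK_PRIO are hypothetical placeholders, not part of any particular driver.

/* Minimal sketch (FreeRTOS): the ISR defers all work to a device task.   */
/* device_read_data() and DEVICE_TASK_PRIO are hypothetical placeholders. */
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

extern void device_read_data(void);          /* hypothetical data transfer */

static SemaphoreHandle_t device_sem;

/* Interrupt service routine: as short as possible, no data transfer here. */
void device_isr(void)
{
    BaseType_t woken = pdFALSE;
    xSemaphoreGiveFromISR(device_sem, &woken);   /* activate the device task */
    portYIELD_FROM_ISR(woken);                   /* reschedule if needed */
}

/* Device task: does the actual transfer, scheduled like any other task. */
static void device_task(void *arg)
{
    (void)arg;
    for (;;) {
        xSemaphoreTake(device_sem, portMAX_DELAY);   /* wait for the ISR */
        device_read_data();
    }
}

void device_init(void)
{
    device_sem = xSemaphoreCreateBinary();
    xTaskCreate(device_task, "dev", configMINIMAL_STACK_SIZE,
                NULL, DEVICE_TASK_PRIO, NULL);
}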

9
Achieving predictability: system calls & IPC
❑ All kernel calls have to be characterized by a bounded execution time
❑ each kernel primitive should be preemptable
❑ The usual semaphore mechanism is not suited for real-time applications:
❑ priority inversion problem (a high-priority task is blocked by a
low-priority task)
❑ Solution: use special mechanisms:
• Basic Priority Inheritance protocol
• Priority Ceiling protocol

19

Priority inversion (1)

❑ Typical characterization of priority inversion:
❑ a medium-priority task preempts a lower-priority task that is using a
shared resource on which a higher-priority task is pending
❑ if the higher-priority task is otherwise ready to run, but a medium-priority
task is currently running instead, a priority inversion is said to occur
❑ Example:
(timeline omitted: L holds the resource, H blocks on it, and M preempts L,
keeping H waiting)

20

10
Priority inversion (2)
❑ Basic priority inheritance protocol:
1. A job J uses its assigned priority, unless it is in its critical section (CS)
and blocks higher-priority jobs.
In that case, J inherits PH, the highest priority of the jobs blocked by J.
When J exits the CS, it resumes the priority it had at the point of entry
into the CS.
2. Priority inheritance is transitive.
❑ Advantage:
❑ transparent to the scheduler
❑ Disadvantage:
❑ deadlock still possible
21
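POSIX exposes the basic priority inheritance protocol through mutex attributes. The fragment below is a minimal sketch, assuming a platform that supports _POSIX_THREAD_PRIO_INHERIT; error handling is omitted.

/* Sketch: a mutex using the POSIX priority inheritance protocol. */
#include <pthread.h>

pthread_mutex_t resource_lock;

void init_resource_lock(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* A low-priority owner of resource_lock inherits the priority of any
       higher-priority thread that blocks on it. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&resource_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}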

Priority inversion (3)

❑ Priority ceiling protocol:
1. A priority ceiling is assigned to each resource, equal to the priority of the
highest-priority task that may use the resource.
2. The scheduler transfers that priority to any task that accesses the
resource.
3. A job J is allowed to start a new CS only if its priority is higher than all
the priority ceilings of the semaphores locked by jobs other than J.
4. On completion of the CS, J's priority returns to its normal value.
❑ Advantage:
❑ tasks can share resources simply by changing their priorities, thus
eliminating the need for semaphores

22
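POSIX also offers a ceiling-based variant (priority protection, PTHREAD_PRIO_PROTECT), in which the mutex itself carries a ceiling priority. A minimal sketch, assuming _POSIX_THREAD_PRIO_PROTECT support; the ceiling value passed in is a placeholder for the highest priority among the tasks that use the resource.

/* Sketch: a mutex using the POSIX priority protection (ceiling) protocol. */
#include <pthread.h>

pthread_mutex_t ceiled_lock;

void init_ceiled_lock(int ceiling_prio)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    /* Any thread holding ceiled_lock immediately runs at ceiling_prio. */
    pthread_mutexattr_setprioceiling(&attr, ceiling_prio);
    pthread_mutex_init(&ceiled_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}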

11
Priority inheritance and priority ceiling
❑ Inheritance
❑ Ceiling
(comparison timelines omitted)

23

Achieving predictability: architecture
❑ An obvious source of unpredictability:
❑ memory accesses are not deterministic (caches)
❑ Example:
• hit ratio = 90%, Tcache = 10 ns, Tmem = 50 ns
• ⇒ 10% of the accesses have an access time 18% larger than in the
cacheless case
❑ Solutions:
❑ systems without cache (or with the cache disabled)
❑ precise estimation of caching behaviour
• systems with application-specific buffers

24

12
Achieving predictability: memory management

❑ Avoid non-deterministic delays:
❑ no conventional demand paging (page-fault handling!)
• may use selective page locking to increase determinism
❑ Typically used:
❑ memory segmentation
❑ static partitioning

❑ Problems:
❑ reduced flexibility
❑ careful balancing required

25

Achieving predictability: applications
❑ Current programming languages are not expressive enough to prescribe
precise timing
❑ Need for specific RT languages
❑ Desirable features:
❑ no dynamic data structures
❑ no recursion
❑ only time-bounded loops

26

13
What RTOS?
❑ Proprietary:
❑ VxWorks by Wind River
❑ LynxOS by Lynx

❑ Free / academic / open-source:
❑ QNX
❑ RTLinux
❑ Spring
❑ RTX
❑ ...

27

Real-time Process Management & Scheduling

28

14
Processes
❑ Called tasks in the RT community
❑ Basic concepts:
❑ task scheduling
❑ scheduling problems & anomalies

29

Scheduling: preliminaries
❑ Key fact: any RT scheduling policy must be preemptive:
❑ tasks performing exception handling may need to preempt running
tasks to ensure a timely reaction
❑ tasks may have different levels of criticality; this can be mapped to a
preemption scheme
❑ more efficient schedules can be produced with preemption

30

15
Scheduling: definition
❑ Given a set of tasks J = {J1, ..., Jn}, a schedule is an assignment of tasks
to the processor so that each task is executed until completion.
❑ Formally:
❑ a schedule is a function σ : R+ → N such that
∀t ∈ R+, ∃ t1, t2 ∈ R+ with t ∈ [t1, t2) and ∀t' ∈ [t1, t2), σ(t') = σ(t)
❑ in practice, σ is an integer step function:
σ(t) = k means task Jk is executing at time t,
σ(t) = 0 means the CPU is idle
❑ each interval [ti, ti+1) in which σ is constant is called a time slice

31

Scheduling: example
❑ A schedule is called feasible if all tasks can be completed according to a
set of specified constraints
❑ A set of tasks is called schedulable if there exists at least one algorithm
that can produce a feasible schedule
❑ Example:
(schedule figure omitted)

32

16
Scheduling constraints
❑ The following types of constraints are considered:
❑ Timing constraints
• meet your deadline
❑ Precedence constraints
• respect prerequisites
❑ Resource constraints
• access only available resources

33

Timing constraints
❑ Real-time systems are characterized mostly by timing constraints
❑ Typical timing constraint: the deadline
❑ Deadline misses separate two classes of RT systems:
❑ Hard: missing a deadline can cause catastrophic consequences
❑ Soft: missing a deadline decreases the performance of the system

34

17
Task characterization
❑ Arrival time ai:
❑ the time Ji becomes ready for execution; also called release time ri
❑ Computation time Ci:
❑ the time necessary for execution without interruption
❑ Deadline di:
❑ the time before which the task has to complete its execution
❑ Start time si:
❑ the time at which Ji starts its execution
❑ Finishing time fi:
❑ the time at which Ji finishes its execution

(timing diagram omitted: Ji executes for Ci between its arrival ai, start si,
finishing time fi and deadline di)
35

Task characterization (2)

❑ Lateness Li:
❑ Li = fi − di
❑ the delay of task completion with respect to di

❑ Tardiness Ei:
❑ Ei = max(0, Li)
❑ the time a task stays active after its deadline

❑ Laxity or slack time Xi:
❑ Xi = di − ai − Ci
❑ the maximum time a task can be delayed on its first activation and still
complete before its deadline

36
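These per-instance quantities translate directly into code. The sketch below is a minimal illustration in plain C of the definitions above; the structure and field names are illustrative, not taken from any particular kernel.

/* Minimal sketch: timing parameters and derived metrics of one task
   instance, following the definitions above (names are illustrative). */
typedef struct {
    double a;   /* arrival (release) time */
    double C;   /* computation time       */
    double d;   /* absolute deadline      */
    double s;   /* start time             */
    double f;   /* finishing time         */
} task_instance;

static double lateness(const task_instance *t)  { return t->f - t->d; }
static double tardiness(const task_instance *t) {
    double L = lateness(t);
    return L > 0.0 ? L : 0.0;                   /* Ei = max(0, Li) */
}
static double laxity(const task_instance *t)    { return t->d - t->a - t->C; }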

18
Periodic and aperiodic tasks
❑ A periodic task τi consists of an infinite sequence of identical activities,
called instances or jobs
❑ regularly activated at a constant rate
❑ The activation time of the first instance of τi is called the phase ϕi
❑ Ti = period of the task
❑ Each task τi can be characterized by Ci, Ti, Di
• Ci, Ti, Di are constant for each instance
• in most cases: Di = Ti
❑ An aperiodic task Ji consists of an infinite sequence of identical activities
(instances)
❑ their activations are not regular

37

Periodic and aperiodic tasks: example

(example timelines omitted)

38

19
Precedence constraints
❑ Tasks often have to respect some precedence relations
❑ Described by a precedence graph G:
• nodes N(G): tasks
• edges E(G): precedence relations
❑ G induces a partial order on the task set

39

Precedence constraints: example

❑ System for recognizing objects on a conveyor belt through two cameras
❑ Tasks:
❑ for each camera:
• image acquisition: acq1 and acq2
• low-level image processing: edge1 and edge2
❑ task shape: extracts two-dimensional features from the object contours
❑ task disp: computes pixel disparities from the two images
❑ task H: calculates the object height from the results of disp
❑ task rec: performs the final recognition based on H and shape

40

20
Real-time scheduling

41

Scheduling: formulation
❑ Given:
❑ a set of n tasks J = {J1, ..., Jn}
❑ a set of m processors P = {P1, ..., Pm}
❑ a set of s resources R = {R1, ..., Rs}
❑ precedences, specified using a precedence graph
❑ timing constraints, possibly associated with each task

❑ Scheduling means assigning processors from P and resources from R to
tasks from J in order to complete all tasks under the imposed constraints.
❑ In general, this problem is NP-complete!

42

21
Scheduling: classification
❑ Preemptive / non-preemptive
❑ Static
❑ scheduling decisions are based on fixed parameters
❑ Dynamic
❑ scheduling decisions are based on parameters that change during
system evolution
❑ Off-line
❑ the scheduling algorithm is performed on the entire task set before the
system starts
❑ On-line
❑ scheduling decisions are taken at run-time every time a task enters or
leaves the system

43

Scheduling: guarantee-based algorithms
❑ Hard RT systems:
❑ feasibility of the schedule has to be guaranteed in advance
❑ Solutions:
❑ Static RT systems:
• the schedule of all task activations can be pre-calculated off-line
• the entire schedule can be stored in a table
❑ Dynamic RT systems:
• activation of new tasks is subject to an acceptance test:
• J = current task set, previously guaranteed
• Jnew = newly arriving task
• Jnew is accepted iff the task set J' = J ∪ {Jnew} is schedulable
• the guarantee mechanism is based on worst-case assumptions →
pessimistic (tasks could be unnecessarily rejected)
44

22
Scheduling metrics
❑ Consider n tasks Ji, each with arrival time ai, start time si, finishing time fi
and deadline di.
❑ Average response time:
tr = (1/n) Σi=1…n (fi − ai)
❑ Total completion time:
tc = maxi (fi) − mini (ai)
❑ Weighted sum of completion times:
tw = Σi=1…n wi fi
❑ Maximum lateness:
Lmax = maxi (fi − di)
❑ Number of late tasks:
Nlate = Σi=1…n missi
where missi = 0 if fi ≤ di, 1 otherwise
45
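As a quick illustration, these metrics can be computed from a set of (ai, fi, di) triples once a schedule has run. A minimal sketch in C (array-based, illustrative names only):

/* Minimal sketch: computing the scheduling metrics defined above for a
   completed schedule, given arrival and finishing times and deadlines. */
#include <stdio.h>

typedef struct { double a, f, d; } job;

static void metrics(const job *j, int n)
{
    double sum_resp = 0.0, max_f = j[0].f, min_a = j[0].a, lmax = j[0].f - j[0].d;
    int late = 0;
    for (int i = 0; i < n; i++) {
        sum_resp += j[i].f - j[i].a;
        if (j[i].f > max_f) max_f = j[i].f;
        if (j[i].a < min_a) min_a = j[i].a;
        double L = j[i].f - j[i].d;          /* lateness Li */
        if (L > lmax) lmax = L;
        if (j[i].f > j[i].d) late++;         /* missi = 1 */
    }
    printf("average response time tr = %g\n", sum_resp / n);
    printf("total completion time tc = %g\n", max_f - min_a);
    printf("maximum lateness    Lmax = %g\n", lmax);
    printf("late tasks         Nlate = %d\n", late);
}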

Scheduling metrics
❑ Average response time and total completion time are not appropriate for
hard real-time tasks
❑ Maximum lateness: useful for "exploration"
❑ The maximum number of late tasks is often more significant
❑ The two are often conflicting: minimizing the maximum lateness does not
minimize the number of tasks that miss their deadlines
(example figure omitted)

46

23
Aperiodic task scheduling

47

Aperiodic task scheduling

❑ Classification [Graham]:
❑ triple (α | β | χ), where
α = the environment on which the task set has to be scheduled
(typically the number of processors)
β = task and resource characteristics (preemptive, synchronous
activations, etc.)
χ = the cost function to be optimized
❑ Examples:
• 1 | prec | Lmax
• uniprocessor machine
• task set with precedence constraints
• minimize the maximum lateness
• 2 | sync | Σi Latei
• two-processor machine
• task set with synchronous activations
• minimize the number of late tasks
48

24
Jackson's algorithm [1955]
❑ 1 | sync | Lmax
❑ No other constraints are considered:
❑ tasks are independent
❑ no precedence relations

❑ Task set J = {Ji(Ci, Di) | i = 1…n}
❑ computation time Ci
❑ deadline Di
❑ Principle: Earliest Due Date (EDD)

49

Jackson's algorithm (2)

❑ It can be proved that:
❑ given a set of n independent tasks, any algorithm that executes the
tasks in order of non-decreasing deadlines is optimal with respect to
minimizing the maximum lateness
❑ Complexity: sorting n values, O(n log n)
❑ EDD cannot guarantee a feasible schedule.
It only guarantees that if a feasible schedule exists, it will find it.

50
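EDD is easy to state in code: sort the tasks by deadline, execute them in that order, and report the maximum lateness. A minimal sketch in C, assuming synchronous arrivals at t = 0 and illustrative names:

/* Minimal EDD sketch: order tasks by non-decreasing deadline and compute
   the resulting maximum lateness (all tasks released at t = 0). */
#include <stdlib.h>

typedef struct { double C, D; } edd_task;

static int by_deadline(const void *x, const void *y)
{
    const edd_task *a = x, *b = y;
    return (a->D > b->D) - (a->D < b->D);
}

/* Returns Lmax; the schedule is feasible iff the result is <= 0. */
static double edd_max_lateness(edd_task *t, int n)
{
    qsort(t, n, sizeof *t, by_deadline);
    double time = 0.0, lmax = -1e300;
    for (int i = 0; i < n; i++) {
        time += t[i].C;                    /* finishing time fi */
        double L = time - t[i].D;          /* lateness Li */
        if (L > lmax) lmax = L;
    }
    return lmax;
}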

25
Jackson's algorithm (3)
❑ Example of a feasible schedule:
(figure omitted)
❑ Example of an infeasible schedule:
(figure omitted)

51

Horn's algorithm
❑ 1 | preem | Lmax
❑ Principle: Earliest Deadline First (EDF)
❑ It can be proved that:
❑ given a set of n independent tasks with arbitrary arrival times, any
algorithm that at any instant executes the task with the earliest absolute
deadline among all the ready tasks is optimal with respect to minimizing
the maximum lateness
❑ Complexity:
❑ per task: O(n) to insert a newly arriving task into a deadline-ordered
ready list
❑ n tasks ⇒ total complexity O(n²)

❑ Non-preemptive EDF is not optimal!


52
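At run time, EDF only needs to know which ready job currently has the earliest absolute deadline; on every release or completion the selected job is dispatched. A minimal sketch of the selection step in C (illustrative structures, a linear scan rather than an ordered list, not a full scheduler):

/* Minimal EDF selection sketch: among the ready jobs, pick the one with
   the earliest absolute deadline. */
typedef struct {
    double abs_deadline;   /* absolute deadline of the current instance */
    double remaining;      /* remaining execution time                  */
    int    ready;          /* 1 if released and not yet finished        */
} edf_job;

/* Returns the index of the job to run next, or -1 if the CPU should idle. */
static int edf_pick(const edf_job *j, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!j[i].ready || j[i].remaining <= 0.0)
            continue;
        if (best < 0 || j[i].abs_deadline < j[best].abs_deadline)
            best = i;
    }
    return best;
}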

26
Horn's algorithm
❑ Example:
(EDF schedule figure omitted)

53

Scheduling with precedence constraints

❑ In general: an NP-hard problem
❑ For special cases, polynomial-time algorithms are possible
❑ Two schemes:
❑ Latest Deadline First (LDF)
❑ Modified EDF

54

27
Aperiodic task scheduling: summary

55

Periodic task scheduling

56

28
Introduction
❑ Periodic activities represent the major computational demand in many
applications:
❑ sensory data acquisition
❑ control loops
❑ system monitoring

❑ Usually several periodic tasks run concurrently
57

Assumptions
1. Instances of a task are regularly activated at a constant rate; the interval
between two consecutive activations is the period of the task.
2. All instances of a task have the same computation time Ci.
3. All instances of a task have the same relative deadline Di, and Di = Ti
(deadline = period).
4. All periodic tasks are independent
(i.e. no precedence relations, no resource constraints).
5. No task can suspend itself (e.g. for I/O).
6. All tasks are released as soon as they arrive.
7. All overheads due to the RTOS are assumed to be zero.

Assumptions 1 and 2 are only limited restrictions; assumptions 3 and 4 can be
tight (3 is removed later).
58

29
Characterization of periodic tasks
❑ A periodic task τi can be characterized (see assumptions 1-4) by:
❑ (phase ϕi)
❑ period Ti
❑ worst-case computation time Ci
❑ Additional parameters:
❑ response time Ri = fi − ri
❑ release jitter of a task:
• maximum deviation of the start time between two consecutive instances:
RRJi = maxk | (si,k − ri,k) − (si,k−1 − ri,k−1) |
❑ finish jitter of a task:
• maximum deviation of the finishing time between two consecutive instances:
RFJi = maxk | (fi,k − ri,k) − (fi,k−1 − ri,k−1) |
❑ critical instant (of a task):
• the release time of a task instance that results in the largest response time
59

Processor utilization factor

❑ Given a set Γ of n periodic tasks, the utilization factor U is the fraction of
processor time spent in the execution of the task set
❑ U determines the load of the CPU
❑ Ci/Ti is the fraction of processor time spent executing task τi:
U = Σi=1…n Ci / Ti
❑ U can be increased by
❑ increasing the computation times of the tasks
❑ decreasing the periods of the tasks
up to a maximum value below which Γ is schedulable
❑ This limit depends on
❑ the task set (the particular relations among the tasks' periods)
❑ the algorithm used to schedule the tasks
60

30
Processor utilization factor (2)
❑ Upper bound Uub(Γ, A):
❑ the value of U (for a given task set Γ and scheduling algorithm A) at
which the processor is fully utilized:
❑ Γ is schedulable by A, but any increase in the computation time of one
of the tasks makes the set infeasible
❑ Least upper bound Ulub(A):
❑ the minimum of Uub over all task sets, for a given algorithm:
Ulub(A) = min over all Γ of Uub(Γ, A)
❑ Ulub allows an easy schedulability test
61

Processor utilization factor (3)

❑ Schedulability test for a task set Γ with utilization U:
❑ U ≤ Ulub(A) → Γ is schedulable by A
❑ Ulub(A) < U ≤ 1 → Γ may be schedulable, if the periods of the tasks
are suitably related
❑ U > 1 → Γ is not schedulable
(figure omitted: task sets Γ1 … Γm with their bounds Uub1 … Uubm placed on
the U axis; below Ulub the answer is YES, between Ulub and 1 it is "?",
above 1 it is NO)
62

31
Scheduling of periodic tasks
❑ Static scheduling
❑ Dynamic (process-based) scheduling

63

Static scheduling
❑ With a fixed set of purely periodic tasks, it is possible to lay out a schedule
such that the repeated execution of this schedule will cause all processes
to run at their correct rate
❑ Essentially a table of procedure calls, where each procedure represents
part of the code of a "process"
❑ Cyclic executive approach

64

32
Cyclic executive approach
❑ The schedule is essentially a table of procedure calls
❑ Tasks are mapped onto a set of minor cycles
❑ The set of minor cycles constitutes a major cycle (the complete schedule)
❑ Typically:
❑ major cycle M = the maximum period in the task set
❑ minor cycle m = the minimum period in the task set
❑ it must hold that M = k·m

65

Cyclic executive approach: example

❑ Task set:
Process   Period T   Computation time C
A         25         10
B         25          8
C         50          5
D         50          4
E        100          2

❑ Schedule: one major cycle of 100 time units, split into four minor cycles
of 25 time units each:
minor cycle 1: A B C    (10 + 8 + 5 = 23 units used)
minor cycle 2: A B D E  (10 + 8 + 4 + 2 = 24 units used)
minor cycle 3: A B C    (23 units used)
minor cycle 4: A B D    (10 + 8 + 4 = 22 units used)
66

33
Cyclic executive approach: example
❑ Actual code that implements the above cyclic executive schedule:

loop
  wait_for_interrupt;
  Procedure_For_A; Procedure_For_B; Procedure_For_C;
  wait_for_interrupt;
  Procedure_For_A; Procedure_For_B; Procedure_For_D; Procedure_For_E;
  wait_for_interrupt;
  Procedure_For_A; Procedure_For_B; Procedure_For_C;
  wait_for_interrupt;
  Procedure_For_A; Procedure_For_B; Procedure_For_D;
end loop;

67
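For reference, the same structure in C would be a table of function pointers indexed by minor cycle, driven by a periodic timer. This is a minimal sketch; wait_for_tick() and the task procedures are placeholders, not part of any particular kernel.

/* Minimal cyclic executive sketch in C: a table of procedure calls per
   minor cycle. wait_for_tick() and the task procedures are placeholders. */
typedef void (*proc_t)(void);

extern void task_A(void), task_B(void), task_C(void), task_D(void), task_E(void);
extern void wait_for_tick(void);   /* blocks until the next minor-cycle tick */

#define MINOR_CYCLES 4
#define MAX_PROCS    5

static proc_t schedule[MINOR_CYCLES][MAX_PROCS] = {
    { task_A, task_B, task_C, 0 },
    { task_A, task_B, task_D, task_E, 0 },
    { task_A, task_B, task_C, 0 },
    { task_A, task_B, task_D, 0 },
};

void cyclic_executive(void)
{
    for (;;) {
        for (int minor = 0; minor < MINOR_CYCLES; minor++) {
            wait_for_tick();
            for (int k = 0; k < MAX_PROCS && schedule[minor][k]; k++)
                schedule[minor][k]();   /* run this minor cycle's procedures */
        }
    }
}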

Cyclic executive approach

❑ Advantages:
❑ no actual process exists at run-time;
each minor cycle is just a sequence of procedure calls
❑ the procedures share a common address space and can pass data
between themselves:
• no need for data protection, no concurrency
❑ Disadvantages:
❑ all task periods must be a multiple of the minor cycle time
❑ difficulty of incorporating sporadic processes or processes with long
periods (major cycle time)
❑ difficult to actually construct the cyclic executive (equivalent to bin
packing)
68

34
Process-based scheduling
❑ Fixed-priority scheduling
❑ Rate-monotonic (RM) scheduling
❑ Deadline-monotonic (DM) scheduling

❑ Dynamic-priority scheduling
❑ EDF

69

Rate Monotonic (RM) Scheduling

❑ Static-priority scheduling
❑ Rate monotonic → priorities are assigned to tasks according to their
request rates
❑ Each process is assigned a (unique) priority based on its period:
❑ the shorter the period, the higher the priority
❑ given tasks τi and τj: Ti < Tj → Pi > Pj

❑ Intrinsically preemptive:
❑ the currently executing task is preempted by a newly released task
with a shorter period

70

35
RM scheduling
❑ RM is proved to be optimal:
❑ if any process set can be scheduled (using preemptive priority-based
scheduling) with a fixed-priority assignment scheme, then RM can also
schedule that process set

71

RM schedulability test
❑ Consider only the utilization of the process set [Liu and Layland, 1973]:

U = Σi=1…n Ci / Ti ≤ n (2^(1/n) − 1)

where Σi=1…n Ci / Ti is the total utilization of the process set and
n (2^(1/n) − 1) is the RM least upper bound Ulub(n).

(plot omitted: Ulub(n) as a function of n; it equals 100% at n = 1 and
decreases toward 69.3% as n grows)
72

36
RM schedulability test (2)
❑ For large values of n, the bound asymptotically reaches ln 2 ≈ 69.3%
❑ Any process set with a combined utilization of less than 69.3% will always
be schedulable under RM
❑ NOTE:
❑ this schedulability test is sufficient, but not necessary
• if a process set passes the test, it will meet all deadlines; if it fails the
test, it may or may not fail at run-time
❑ the utilization-based test only gives a yes/no answer
• no indication of the actual response times of the processes!
❑ A more significant test: response-time analysis
73
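The utilization test is easy to automate. The sketch below (plain C, illustrative names) computes U for a task set and checks it against the RM bound n(2^(1/n) − 1); it also reports the EDF condition U ≤ 1 that is used later. It is a sketch of the sufficient utilization test only, not a response-time analysis.

/* Sketch: Liu & Layland utilization tests for RM and EDF (Di = Ti).
   The RM test is sufficient but not necessary; the EDF test is exact. */
#include <math.h>
#include <stdio.h>

typedef struct { double C, T; } ptask;

static double utilization(const ptask *t, int n)
{
    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += t[i].C / t[i].T;
    return U;
}

static void utilization_tests(const ptask *t, int n)
{
    double U = utilization(t, n);
    double rm_bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* n(2^(1/n) - 1) */

    printf("U = %.3f, RM bound = %.3f\n", U, rm_bound);
    if (U <= rm_bound)
        printf("RM : schedulable (passes the sufficient test)\n");
    else if (U <= 1.0)
        printf("RM : test inconclusive (may still be schedulable)\n");
    else
        printf("RM : not schedulable (U > 1)\n");
    printf("EDF: %s\n", U <= 1.0 ? "schedulable" : "not schedulable");
}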

RM schedulability test: Example (1)

Process   Period T   Computation time C   Priority P   Utilization U
Task_1    50         12                   1            0.24
Task_2    40         10                   2            0.25
Task_3    30         10                   3            0.33

U = 12/50 + 10/40 + 10/30 = 0.24 + 0.25 + 0.33 = 0.82

U > U(3) = 3 (2^(1/3) − 1) = 0.78, so the utilization test gives no guarantee.

(timeline omitted: under RM, Task_3 and Task_2 execute first and Task_1
misses its deadline at t = 50)

74

37
RM schedulability test: Example (2)

Process   Period T   Computation time C   Priority P   Utilization U
Task_1    80         32                   1            0.400
Task_2    40          5                   2            0.125
Task_3    16          4                   3            0.250

U = 32/80 + 5/40 + 4/16 = 0.4 + 0.125 + 0.25 = 0.775

U < U(3) = 3 (2^(1/3) − 1) = 0.78, so the task set is guaranteed schedulable.

(timeline omitted: all deadlines are met)

75

RM schedulability test: Example (3)

Process   Period T   Computation time C   Priority P   Utilization U
Task_1    80         40                   1            0.500
Task_2    40         10                   2            0.250
Task_3    20          5                   3            0.250

U = 1 > U(3) = 3 (2^(1/3) − 1) = 0.78, so the utilization test is inconclusive.

(timeline omitted: the schedule nevertheless meets all deadlines, since the
periods are harmonically related)
76

38
EDF algorithm
❑ Dynamic scheduling algorithm with dynamic priority assignment
❑ Same idea as for aperiodic tasks:
❑ tasks are selected according to their absolute deadlines
❑ tasks with earlier deadlines are given higher priorities

77

Example
Process   Period T   WCET C
T1        5          2
T2        7          4

❑ EDF schedule:
(timeline omitted: with U = 2/5 + 4/7 ≈ 0.97 ≤ 1, EDF meets every deadline
of T1 and T2)
❑ RM schedule:
(timeline omitted: T1 always preempts T2, and T2 misses a deadline)

78

39
EDF schedulability test
❑ Schedulability of a periodic task set scheduled by EDF can be verified
through the processor utilization factor U
❑ EDF schedulability theorem [Liu and Layland, 1973]:
a set of periodic tasks is schedulable with EDF iff

Σi=1…n Ci / Ti ≤ 1

79

EDF schedulability test: example

Process   Period T   WCET C
T1        5          2
T2        7          4

❑ Processor utilization of the task set:
U = 2/5 + 4/7 = 34/35 ≈ 0.97
❑ U > 69.3%:
• schedulability is not guaranteed under RM
❑ U < 1:
• schedulability is guaranteed under EDF

80
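Running the utilization sketch from the RM section on this task set reports the same conclusion (this assumes the ptask type and utilization_tests() defined in that earlier sketch):

/* Usage sketch for the example above: T1 = (C=2, T=5), T2 = (C=4, T=7). */
int main(void)
{
    ptask set[] = { { 2.0, 5.0 }, { 4.0, 7.0 } };   /* {C, T} pairs */
    utilization_tests(set, 2);   /* U = 0.971 > RM bound 0.828, but <= 1: EDF ok */
    return 0;
}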

40
Deadline monotonic (DM) scheduling
❑ Assumption up to now: relative deadline = period
❑ DM scheduling weakens this assumption
❑ Static (fixed-priority) algorithm
❑ For DM, each periodic task τi is characterized by four parameters:
❑ (phase ϕi)
❑ relative deadline Di (equal for all instances)
❑ worst-case computation time Ci (equal for all instances)
❑ period Ti

81

DM scheduling

❑ DM = a generalization of RM:
❑ RM is optimal for D = T
❑ DM extends this optimality to D ≤ T
❑ The priority of a process is inversely proportional to its relative deadline:
❑ given tasks τi and τj: Di < Dj → Pi > Pj

(timing diagram omitted: each instance of τi executes for Ci within its relative
deadline Di, which is no larger than its period Ti)
82

41
DM scheduling: example
❑ A task set that is not schedulable with RM but is schedulable with DM:

Process   Period T   Deadline D   Computation time C   Priority P   Response time R
Task_1    20          5           3                    4             3
Task_2    15          7           3                    3             6
Task_3    10         10           4                    2            10
Task_4    20         20           3                    1            20

83

DM schedulability analysis
❑ Schedulability could be tested by replacing the periods with the deadlines
in the definition of U and checking it against the usual bound:
U = Σi=1…n Ci / Di
❑ Too pessimistic!
❑ The actual guarantee test is based on a modified response-time analysis
❑ Intuitively: for each τi, the sum of its processing time and the interference
(preemption) imposed by higher-priority tasks must be ≤ Di:

Ci + Ii ≤ Di   for all i: 1 ≤ i ≤ n
Ii = Σj=1…i−1 ⌈Ri / Tj⌉ Cj
84
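In practice the interference term is resolved iteratively: start with Ri = Ci and recompute until Ri stops changing or exceeds Di. A minimal sketch of this fixed-point iteration in C, assuming the tasks are sorted by decreasing priority:

/* Sketch: response-time analysis for fixed-priority tasks (DM or RM).
   Tasks are assumed sorted by decreasing priority; returns 1 if all
   worst-case response times fit within the deadlines. */
#include <math.h>

typedef struct { double C, T, D; } fp_task;

static int response_time_analysis(const fp_task *t, int n)
{
    for (int i = 0; i < n; i++) {
        double R = t[i].C, prev = -1.0;
        while (R != prev && R <= t[i].D) {
            prev = R;
            double I = 0.0;                        /* interference from hp(i) */
            for (int j = 0; j < i; j++)
                I += ceil(prev / t[j].T) * t[j].C; /* ceil(Ri/Tj) * Cj */
            R = t[i].C + I;
        }
        if (R > t[i].D)
            return 0;   /* task i can miss its deadline in the worst case */
    }
    return 1;           /* all tasks meet their deadlines */
}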

42
EDF for D < T
❑ EDF applies as is to the case D < T
❑ A different schedulability test is needed, based on the processor demand
criterion
❑ The processor demand of a task τi in an interval [t, t+L] is the amount of
processing time requested by τi in [t, t+L] that has to be completed at or
before t+L
❑ With deadlines, this is the amount of processing time requested in [t, t+L]
by instances whose deadlines are ≤ t+L

85

Processor demand for EDF

❑ Applicable also to the case D = T
❑ In general, the schedulability of the task set is guaranteed iff the
cumulative processor demand in any interval [0, L] does not exceed the
interval length L:

CP(0, L) = Σi=1…n ⌊L / Ti⌋ Ci ≤ L   for all L ≥ 0   (case D = T)

❑ In the case D < T, the demand counts the instances whose deadlines fall
within [0, L], i.e. ⌊(L − Di)/Ti⌋ + 1 completions per task (see the summary
slide), and the number of checkpoints L that actually needs to be tested
is limited (only absolute deadlines within the interval matter).
86

43
Processor demand for EDF: example
❑ Consider:
Task   T   D   C
τ1     8   7   3
τ2     8   4   2

(timeline omitted: the absolute deadlines of τ1 fall at 7, 15, 23, ... and those
of τ2 at 4, 12, 20, ...; these deadlines are the checkpoints L to test)

❑ Schedulability test (L = 22):
(⌊(22 − 7)/8⌋ + 1) · 3 + (⌊(22 − 4)/8⌋ + 1) · 2 = 6 + 6 = 12 ≤ 22   OK
❑ Schedulability test (L = 24):
(⌊(24 − 7)/8⌋ + 1) · 3 + (⌊(24 − 4)/8⌋ + 1) · 2 = 9 + 6 = 15 ≤ 24   OK
87
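The demand check loops over the task set for a given checkpoint L; a full test repeats it for every absolute deadline up to a suitable bound. A minimal sketch of the per-checkpoint computation in C (integer parameters, illustrative names); for the example above it returns 1 at both L = 22 and L = 24, matching the hand calculation.

/* Sketch: processor demand in [0, L] for tasks with D <= T, and the
   corresponding check demand(L) <= L at one checkpoint. */
typedef struct { long C, T, D; } dl_task;

static long processor_demand(const dl_task *t, int n, long L)
{
    long demand = 0;
    for (int i = 0; i < n; i++)
        if (L >= t[i].D)
            demand += ((L - t[i].D) / t[i].T + 1) * t[i].C;  /* floor((L-Di)/Ti)+1 instances */
    return demand;
}

static int demand_ok_at(const dl_task *t, int n, long L)
{
    return processor_demand(t, n, L) <= L;
}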

Periodic task scheduling: summary

❑ Restricted to independent, preemptable periodic tasks
❑ Rate Monotonic (RM) is optimal among fixed-priority assignments
❑ Earliest Deadline First (EDF) is optimal among dynamic-priority
assignments
❑ Deadlines = periods:
❑ guarantee test in O(n) using the processor utilization, applicable to
both RM and EDF
❑ Deadlines < periods:
❑ polynomial-time algorithms for the guarantee test
❑ fixed priority: response-time analysis
❑ dynamic priority: processor demand analysis

88

44
Periodic task scheduling: summary

Static priority, Di = Ti (RMA) — processor utilization approach:
Σi=1…n Ci / Ti ≤ n (2^(1/n) − 1)

Static priority, Di ≤ Ti (DMA) — response-time approach:
∀i: Ri = Ci + Σj∈hp(i) ⌈Ri / Tj⌉ Cj ≤ Di

Dynamic priority, Di = Ti (EDF) — processor utilization approach:
Σi=1…n Ci / Ti ≤ 1

Dynamic priority, Di ≤ Ti (EDF) — processor demand approach:
∀L > 0: L ≥ Σi=1…n (⌊(L − Di) / Ti⌋ + 1) Ci

89

Priority servers

90

45
Introduction
❑ Requirements in most real-time applications:
❑ both periodic and aperiodic tasks
• typically, periodic tasks are time-driven and hard real-time
• typically, aperiodic tasks are event-driven, soft or hard RT
❑ Main objectives for the RT kernel:
• guarantee the hard RT tasks
• provide a good average response time for the soft RT tasks
❑ Fixed-priority servers:
❑ periodic tasks scheduled by a fixed-priority algorithm
❑ Dynamic-priority servers:
❑ periodic tasks scheduled by a dynamic-priority algorithm

91

Background scheduling
❑ Simplest solution:
❑ handle soft aperiodic tasks in the background behind periodic tasks,
that is, in the processor time left after scheduling all periodic tasks
❑ aperiodic tasks just get assigned a priority lower than any periodic one
❑ Organization of background scheduling:
periodic tasks are placed in a high-priority queue served by RM; aperiodic
tasks are placed in a low-priority queue served FCFS, and reach the CPU
only when the high-priority queue is empty

92

46
