RTES10 - Scheduling


EEE440

Real Time Embedded


Systems
Real Time Operating System
Muhammad Kamran Fiaz (kamran@cuiwah.edu.pk)
We learned so far

 RTES Introduction
 Categories of the RTES
 Properties & Components
 Hardware
 RTOS
Today’s topics

 Tasks Management
Tasks Management

 Task management and scheduling are some of the core functions of any
RTOS kernel.
 Part of the kernel is a scheduler that is responsible for allocating and
scheduling tasks on processors to ensure that deadlines are met.
 Let’s learn about some techniques used in real-time operating systems
Some Definitions

 Task
 A task is a unit of work scheduled for execution on the CPU.
 It is a building block of real-time application software supported by an RTOS. In
fact, a real-time application that uses an RTOS can be structured as a set of
independent tasks.
 Different Types of Tasks
 Periodic Tasks
 Aperiodic Tasks
 Sporadic Tasks
Tasks

 Periodic tasks.
 Periodic tasks are repeated once per period, for example, every 200 milliseconds.
 They are time-driven. Periodic tasks typically arise from sensory data acquisition,
control law computation, action planning, and system monitoring.
 Such activities need to be cyclically executed at specific rates, which
can be derived from the application requirements.
 Periodic tasks have hard deadlines, because each instance of a periodic task
has to complete execution before the next instance is released. Otherwise, task
instances will pile up.
Tasks

 Aperiodic Tasks
 Aperiodic tasks are one-shot tasks.
 They are event-driven. For example, a driver may change the vehicle’s cruise
speed while the cruise control system is in operation. To maintain the speed set
by the driver, the system periodically takes its speed signal from a rotating
driveshaft, speedometer cable, wheel speed sensor, or internal speed pulses
produced electronically by the vehicle and then pulls the throttle cable with a
solenoid as needed.
 When the user manually changes the speed, the system has to respond to
the change while keeping up its regular operation.
 Aperiodic tasks either have no deadlines or have soft deadlines.
Tasks

 Sporadic tasks.
 Sporadic tasks are also event-driven.
 The arrival times of sporadic task instances are not known a priori, but there is
a requirement on the minimum inter-arrival time. Unlike aperiodic tasks, which do
not have hard deadlines, sporadic tasks have hard deadlines.
 For example, when the driver of a vehicle sees a dangerous situation in front of
him and hits the brake to stop the vehicle, the speed control system has to
respond to the event (a hard step on the brake) within a small time window.
Task Specifications

 Release time.
 The release time of task is the time when a task becomes available for execution.
The task can be scheduled for execution at any time at or after the release time.
It may not be executed immediately, because, for example, a higher or equal-priority
task is using the processor. The release time of a task Ti is denoted by ri.
 Deadline. The deadline of a task is the instant of time by which its execution must
be completed. The deadline of Ti is denoted by di.
 Relative deadline. The relative deadline of a task is the deadline measured in
reference to its release time. For example, if a task is released at time t and its
deadline is t + 200 milliseconds, then its relative deadline is 200 milliseconds. The
relative deadline of Ti is denoted by Di.
Task Specifications

 Execution time. The execution time of a task is the amount of time that is
required to complete the execution of the task when it executes alone and
has all required resources in place. A task’s execution time mainly depends
on the complexity of the task and the speed of the processor. The
execution time of Ti is denoted by ei.
 Response time. The response time of a task is the length of time elapsed
from when the task is released to when its execution is completed. For a task
with a hard deadline, the maximum allowed response time is the task’s relative
deadline.
Task Specifications

 Period. The period of a periodic task is the length of the interval between
the release times of two consecutive instances of the task. We assume that
all intervals have the same length throughout the book. The period of Ti is
denoted by pi.
 Phase. The phase of a periodic task is the release time of its first instance.
The phase of Ti is denoted by ϕi.
 Utilization. The utilization of a periodic task is the ratio of its execution time
over its period, denoted by ui. ui = ei∕pi.
Task Specification

 For a periodic task, the execution time, release time, deadline, relative
deadline, and response time all refer to its instances.
 We assume that all instances of a periodic task have the same execution
time throughout the book. We specify a periodic task as follows:

Ti = (ϕi, pi, ei,Di)


 For example, a task with parameters (2, 10, 3, 9) would mean that the first
instance of the task is released at time 2, the following instances will arrive
at 12, 22, 32, and so on. The execution time of each instance is 3 units of
time.
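The task-specification tuple Ti = (ϕi, pi, ei, Di) can be sketched in code. This is an illustrative model only; the class and method names are my own, not from any RTOS API:

```python
from dataclasses import dataclass

# Minimal sketch of the periodic-task model; names are illustrative.
@dataclass
class PeriodicTask:
    phase: int         # phi_i: release time of the first instance
    period: int        # p_i
    exec_time: int     # e_i
    rel_deadline: int  # D_i

    def release(self, k):
        """Release time of the k-th instance (k = 0, 1, 2, ...)."""
        return self.phase + k * self.period

    def deadline(self, k):
        """Absolute deadline of the k-th instance."""
        return self.release(k) + self.rel_deadline

t = PeriodicTask(2, 10, 3, 9)             # the (2, 10, 3, 9) example
print([t.release(k) for k in range(4)])   # [2, 12, 22, 32]
print([t.deadline(k) for k in range(4)])  # [11, 21, 31, 41]
```

As in the slide’s example, the instances arrive at 2, 12, 22, 32, and so on, each with its absolute deadline 9 time units after release.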
Task Specifications

 If a task has phase 0, then it is specified as


Ti = (pi, ei, Di)
 If the task’s relative deadline is the same as its period, then we can specify
it as follows:
Ti = (pi, ei)
 Hyperperiod: given a set of periodic tasks Ti, i = 1, 2, …, n, we can calculate
their hyperperiod, denoted by H. H is the least common multiple (LCM) of pi
for i = 1, 2, …, n.
Example

 Consider a system of three periodic tasks


T1= (5, 1), T2 = (12, 2), T3 = (40, 1)
 Find the Hyperperiod

 120
Task Specifications

 Criticality. Tasks in a system are not equally important. The relative priorities of
tasks are a function of the nature of the tasks themselves and the current state
of the controlled process.
 The priority of a task indicates its criticality with respect to other
tasks.
 Preemptivity. Execution of tasks can be interleaved. The scheduler may suspend
the execution of a task and give the processor to a more urgent task. The
suspended task may resume its execution when the more urgent task has
completed. Such an interruption of task execution is called a preemption.
 A task is preemptable if it can resume its execution from the point of interruption. In
other words, it does not need to start over. An example is a computational task on the
CPU.
 On the other hand, a task is nonpreemptable if it must be executed from start to
completion without interruption. If it is interrupted in the middle of execution, it
has to be executed again from the very beginning.
Task States

 Running. When a task is actually executing, it is said to be in the running
state. It is currently utilizing the processor. In a single-processor system, only
one task can be in the running state at any given time.

 Ready. Tasks in the ready state are those that are able to execute but are
not currently executing because a different task of equal or higher priority is
in the running state. Ready tasks have all resources needed to run except
the processor. In this state, a task is able to run whenever it becomes the
task with the highest priority among all tasks in this state and the processor is
released. Any number of tasks can be in this state.
Task States

 Blocked. A task is said to be in the blocked state if it is currently waiting for
either a temporal or an external event. For example, if a task calls
taskDelay(), it will block itself until the delay period has expired – timer
expiration is a temporal event. A task that is responsible for processing user
inputs has nothing to do until the user types in something, a case of an
external event. Tasks can also block while they are waiting for RTOS kernel
object events. Tasks in the blocked state are not available for scheduling.
Any number of tasks can be in this state as well.
Task States
Task Precedence

 In addition to timing constraints on tasks in a real-time application, there
might be precedence constraints among tasks. A precedence constraint
specifies the execution order of two or more tasks. It reflects data and/or
control dependencies among tasks.
 For example, in a real-time control system, the control law computation
task has to wait for the results from the sensor data polling task. Therefore,
the sensor data polling task should run before the control law computation
task in each control cycle.
 If a task A is an immediate predecessor of a task B, we use A < B to
represent their precedence relation. To be more intuitive, we can use a
task graph to show the precedence constraints among a set of tasks.
Nodes of the graph are tasks, and a directed edge from the node A to the
node B indicates that the task A is an immediate predecessor of the task B.
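A task graph like this can be checked for a valid execution order with a topological sort. A sketch using Python’s graphlib; the task names and dependencies are illustrative, based on the sensor/control example above:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph (A < B means A must run before B):
# sensor polling precedes control-law computation, which precedes actuation.
graph = {
    "control": {"sensor"},   # control depends on sensor
    "actuate": {"control"},  # actuate depends on control
}
order = list(TopologicalSorter(graph).static_order())
print(order)  # a valid execution order, e.g. ['sensor', 'control', 'actuate']
```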
Precedence Graph
Task Scheduling

 Each task executes within its own context. Only one task within the application
can be executing at any point in time on a processor. A schedule is an
assignment of tasks to available processors. The scheduler of an RTOS is a
module that implements the assignment (scheduling) algorithms.
 A schedule is said to be valid if all precedence and resource usage constraints
are met and no task is underscheduled (the task is assigned insufficient time for
execution) or overscheduled (the task is assigned more time than needed for
execution).
 A schedule is said to be feasible if every task scheduled completes before its
deadline.
 A system (a set of tasks) is said to be schedulable if a feasible schedule exists for
the system. The bulk of real-time scheduling work deals with finding feasible
schedules.
 An algorithm is said to be optimal if it always produces a feasible schedule as
long as a given set of tasks has feasible schedules.
Clock Driven Scheduling

 In clock-driven scheduling, scheduling decisions are made at specific
time instants, which are typically chosen a priori before the system begins
execution.
 It usually works for deterministic systems where tasks have hard deadlines
and task parameters do not change during system operation.
 Clock driven schedules can be computed off-line and stored for use at
runtime, which significantly saves runtime scheduling overhead.

Clock Driven Scheduling

T1 = (4, 1), T2 = (6, 1), T3 = (12, 2)


Hyperperiod: LCM(4, 6, 12) = 12
Feasibility Test
 The schedules are in the interval from 0 to 12 units of time, the first hyperperiod of the system.
During this time period, there are three instances from T1, two instances from T2, and one
instance from T3. Let us analyze the first schedule and see why it is feasible.
 Consider T1 first. At time 0, T1 has its first instance released with a deadline at time 4. It is
scheduled for execution at time 0 and completed at time 1. So, it meets its deadline.
 At time 4, the second instance of T1 is released with a deadline at time 8. It is scheduled for
execution at time 4 and completed at time 5. It meets its deadline.
 At time 8, the third instance of T1 is released with a deadline at time 12. It is scheduled for
execution at time 8 and completed at time 9. It meets its deadline.
 Now let us consider T2. T2 has its first instance released at time 0. Its deadline is at time 6. It is
scheduled for execution at time 1 and completed at time 2. So, it meets its deadline.
 At time 6, the second instance of T2 is released with a deadline at time 12. It is
scheduled for execution at time 6 and completed at time 7. It meets its deadline.
 The only instance in T3 is released at time 0 with a deadline at time 12. It is scheduled for
execution at time 2 and completed at time 4. So, it meets its deadline.
 Since all task instances released in the first hyperperiod of the system meet
their deadlines, the schedule is feasible.
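The walkthrough above can be automated. A sketch that encodes the schedule table for T1 = (4, 1), T2 = (6, 1), T3 = (12, 2) and checks every instance in one hyperperiod, assuming relative deadline = period and one contiguous slot per instance:

```python
from math import lcm
from functools import reduce

# Tasks are (period, exec_time) with relative deadline equal to the period.
# The schedule table lists (start_time, task) slots, matching the schedule
# analyzed in the text: T1 at 0, 4, 8; T2 at 1, 6; T3 at 2.
tasks = {"T1": (4, 1), "T2": (6, 1), "T3": (12, 2)}
schedule = [(0, "T1"), (1, "T2"), (2, "T3"), (4, "T1"), (6, "T2"), (8, "T1")]
H = reduce(lcm, (p for p, _ in tasks.values()))  # hyperperiod = 12

def feasible(tasks, schedule, H):
    done = {}
    for start, name in schedule:
        p, e = tasks[name]
        k = start // p               # instance index served by this slot
        done[(name, k)] = start + e  # completion time of that instance
    for name, (p, e) in tasks.items():
        for k in range(H // p):      # every instance in one hyperperiod
            # the deadline of instance k is its next release, (k + 1) * p
            if done.get((name, k), float("inf")) > (k + 1) * p:
                return False         # missed deadline or never scheduled
    return True

print(feasible(tasks, schedule, H))  # True
```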
Feasible Schedules
Structured Clock Driven Scheduling

 Although the schedules in the last slide are feasible, scheduling decision
points are randomly scattered.
 There is no pattern in the time points at which new tasks are selected for
execution.
 The idea behind the structured clock-driven scheduling approach is that
scheduling decisions are made at periodic, rather than arbitrary, times.
 This way, a timer that expires periodically with a fixed duration
can be used to trigger scheduling decisions.
Frames – Structured Scheme

 In a structured schedule, the scheduling decision times partition the time line
into intervals called frames.
Each frame is of length f, which is called the frame size. There is no preemption
within a frame, because scheduling decisions are only made at the beginning
of each frame.
 To ease the schedule construction, the phase of each periodic task is a
nonnegative multiple of the frame size. In other words, the first instance of each
periodic task is released at the beginning of some frame.
 When we select a frame size, it should be big enough to allow every task to
complete its execution inside a frame.
 Assume that there are n periodic tasks in a system. This constraint can be
expressed mathematically as
f ≥ max{ei , i = 1, 2, … , n}
Frames – Structured Scheme
 To minimize the number of entries in the schedule table and thus save storage
space, the chosen frame size should divide the major cycle H.
 Otherwise, storing the schedule for one major cycle won’t be sufficient,
because the schedule won’t simply repeat from one major cycle to the next,
and thus a larger scheduling table is needed. This constraint can be formulated
as
H mod f = 0
 In practice, f is chosen to divide the period of at least one task; since H is a
multiple of every task’s period, such an f also divides H:

pi mod f = 0, ∃ i ∈ {1, 2, … , n}
Frames

 A task instance arrives at time kf + Δt, where Δt < f. Its deadline d is between (k + 1)f and (k + 2)f.
 This task instance will be scheduled for execution at the earliest at (k + 1)f, the beginning of the next
frame after it arrives. Before its deadline, it can only be executed for d − (k + 1)f units of time,
which is less than the frame size.
 If the execution time is close or equal to the frame size, then the task instance cannot finish
execution before its deadline. Therefore, there must be at least one full frame between the arrival
of a task instance and its deadline:

di − (kf + Δt) ≥ f + (f − Δt)

Since di − (kf + Δt) = Di, this simplifies to

2f − Δt ≤ Di
 Since the first instance of the task is released at the beginning of a frame, the minimum of Δt is
the greatest common divisor (GCD) of pi and f. The minimum of Δt corresponds to the worst
case, that is, the task instance with the greatest chance to miss its deadline:

2f − GCD(pi, f ) ≤ Di, i = 1, 2, … , n.
Example
T1 = (4, 1), T2 = (5, 1), T3 = (10, 2).

 Constraint 1: f ≥ max{ei, i = 1, 2, … , n}. According to the first constraint, f ≥ 2.
 Constraint 2: H mod f = 0. The major cycle (hyperperiod) is H = 20, so f should divide 20.
Possible frame sizes are 2, 4, 5, and 10. We don’t need to consider 1, because it violates the first
constraint.
 Constraint 3: 2f − GCD(pi, f ) ≤ Di. Now we need to test 2, 4, 5, and 10 against this constraint
for each task.
Consider f = 2 first. For T1 = (4, 1), 2f − GCD(p1, f ) = 2 ∗ 2 − GCD(4, 2) = 4 − 2 = 2,
while D1 = 4. Therefore, the constraint is satisfied by T1.
For T2 = (5, 1), 2f − GCD(p2, f ) = 2 ∗ 2 − GCD(5, 2) = 4 − 1 = 3,
while D2 = 5. Therefore, the constraint is satisfied by T2.
For T3 = (10, 2), 2f − GCD(p3, f ) = 2 ∗ 2 − GCD(10, 2) = 4 − 2 = 2,
while D3 = 10. Therefore, the constraint is satisfied by T3.
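The three constraints can be checked mechanically. A sketch that enumerates candidate frame sizes for this task set, assuming Di = pi:

```python
from math import gcd, lcm
from functools import reduce

# Frame-size constraints for T1 = (4, 1), T2 = (5, 1), T3 = (10, 2).
# Tasks are (period, exec_time) with relative deadline D_i = p_i.
tasks = [(4, 1), (5, 1), (10, 2)]
H = reduce(lcm, (p for p, _ in tasks))  # major cycle = 20

def valid_frame_sizes(tasks, H):
    e_max = max(e for _, e in tasks)
    sizes = []
    for f in range(1, H + 1):
        c1 = f >= e_max                   # constraint 1: f >= max e_i
        c2 = H % f == 0                   # constraint 2: f divides H
        c3 = all(2 * f - gcd(p, f) <= p   # constraint 3: 2f - GCD(p_i, f) <= D_i
                 for p, _ in tasks)
        if c1 and c2 and c3:
            sizes.append(f)
    return sizes

print(valid_frame_sizes(tasks, H))  # [2]
```

For this task set, only f = 2 satisfies all three constraints; the candidates 4, 5, and 10 each fail the third constraint.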
Assignment

 Reading Assignment
 Write a short description and working with an example of the following
 Scheduling Aperiodic Tasks (§4.2.2)
 Scheduling Sporadic Tasks (§4.2.3)

 Due : June 05
Round Robin Approach

 The round-robin approach is a time-sharing scheduling algorithm. In
round-robin scheduling, a time slice is assigned to each task in a circular order.
 Tasks are executed without priority. A FIFO (first-in, first-out)
queue stores all tasks in the ready state.
 Each time, the task at the head of the queue is removed from the queue and
dispatched for execution. If the task is not finished within its assigned time slice,
it is placed at the tail of the queue to wait for its next turn.
 Normally, the length of the time slice is short, so that the execution of every task
appears to start almost immediately after it is ready.
 Since each time a task only gets a small portion executed, all tasks are assumed
to be preemptable, and context switches occur frequently in this
scheduling approach.
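The queue mechanics described above can be sketched as follows; the task names and the unit quantum are illustrative:

```python
from collections import deque

# Minimal round-robin sketch: tasks are (name, remaining_time); each task
# gets one time-slice quantum per turn. No priorities, as described above.
def round_robin(tasks, quantum=1):
    queue = deque(tasks)  # FIFO ready queue
    t, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((name, remaining - run))  # back to the tail
        else:
            finish[name] = t                        # completed at time t
    return finish

print(round_robin([("A", 3), ("B", 1), ("C", 2)]))  # {'B': 2, 'C': 5, 'A': 6}
```

Note how every task's completion is delayed by the interleaving: A needs only 3 units of work but finishes at time 6, which is exactly the drawback discussed next.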
Round Robin Approach

 The round-robin scheduling algorithm is simple to implement, and it is fair to all
tasks in using the processor. The major drawback is that it delays the
completion of all tasks and may cause tasks to miss their deadlines. It is not
a good option for scheduling tasks with hard deadlines.

 For Real Time Applications???


Priority Driven Scheduling

 In contrast to the clock-driven scheduling algorithms that schedule tasks
at specific time points off-line, in a priority-driven scheduling algorithm,
scheduling decisions are made when a new task (instance) is released or a
task (instance) is completed.
 It is online scheduling, and decisions are made at runtime.
 Priority is assigned to each task. Priority assignment can be done statically
or dynamically while the system is running.
 A scheduling algorithm that assigns priorities to tasks statically is called a
static-priority or fixed-priority algorithm, and an algorithm that assigns
priorities dynamically is said to be a dynamic-priority algorithm.
Priority Driven Scheduling

 Priority-driven scheduling is easy to implement. It does not require
information on the release times and execution times of tasks a priori.
 The parameters of each task become known to the online scheduler only
after it is released.
 Online scheduling is the only option in a system whose future workload is
unpredictable.
Priority Driven Scheduling

 Assumptions
1. There are only periodic tasks in the system under consideration.
2. The relative deadline of each task is the same as its period.
3. All tasks are independent; there are no precedence constraints.
4. All tasks are preemptable, and the cost of context switches is negligible.
5. Only processing requirements are significant. Memory, I/O, and other
resource requirements are negligible
Fixed Priority Algorithms (Rate
Monotonic Algorithm)
 The algorithm assigns priorities based on the period of tasks.
 Given two tasks Ti = (pi, ei) and Tj = (pj, ej), if pi < pj, then Ti has higher priority
than Tj
 When a new task instance is released, if the processor is idle, it executes the
task; if the processor is executing another task, then the scheduler
compares their priorities.
 If the new task’s priority is higher, then it preempts the task in execution
and executes on the processor.
 The preempted task is placed in the queue of ready tasks.
RM Scheduling Example
T1 = (4, 1), T2 = (5, 1), T3 = (10, 3)

 Because p1 < p2 < p3, T1 has the highest priority and T3 has the lowest priority.
 Whenever an instance from T1 is released, it preempts whatever is running on
the processor and gets executed immediately. Instances from T2 run in the
“background” of T1. Task T3, on the other hand, cannot execute when either T1
or T2 is unfinished.
 We don’t need to examine T1, because whenever an instance is released, it
gets executed on the processor immediately.
 For T2, there are four instances released in the first major cycle, and they all get
executed before their deadlines.
 For T3, the first instance is completed at 7, which is before its deadline 10. The
second instance is released at 10 and completed at 15, which is also before its
deadline 20.
 Therefore, the three periodic tasks are schedulable based on RM
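The RM schedule above can be verified with a small unit-step simulation. This is a sketch assuming Di = pi and tasks listed in increasing-period (i.e. decreasing-priority) order:

```python
from math import lcm
from functools import reduce

# Preemptive rate-monotonic scheduling simulated in unit time steps.
# Tasks are (period, exec_time), sorted by period, relative deadline = period.
def rm_schedulable(tasks, horizon):
    remaining = [0] * len(tasks)
    for t in range(horizon):
        for i, (p, e) in enumerate(tasks):
            if t % p == 0:
                if remaining[i] > 0:
                    return False     # previous instance missed its deadline
                remaining[i] = e     # release a new instance
        for i in range(len(tasks)):  # run the highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)

tasks = [(4, 1), (5, 1), (10, 3)]
H = reduce(lcm, (p for p, _ in tasks))  # hyperperiod = 20
print(rm_schedulable(tasks, H))         # True
```

Stepping through the simulation by hand reproduces the walkthrough: T3's first instance completes at time 7 and its second at time 15, matching the text.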
RM Scheduling
Time Demand Analysis
T1 = (4, 1), T2 = (5, 2), T3 = (10, 3.1).
Time Demand Analysis

 If the total utilization of a set of periodic tasks is not greater than


URM(n) = n(21∕n - 1),
 where n is the number of tasks, then the RM algorithm can schedule all the
tasks to meet their deadlines.

T1 = (4, 1), T2 = (5, 1), T3 = (10, 3)

 n = 3, URM(3) = 3(2^(1/3) − 1) ≈ 0.78


 The actual total utilization is 1/4 + 1/5 + 3/10 = 0.75
 Since 0.75 ≤ 0.78, the task set is schedulable under RM
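The bound test can be sketched as follows. Note that the test is sufficient but not necessary: passing it guarantees schedulability, while failing it is inconclusive on its own:

```python
# RM utilization-bound test. Tasks are (period, exec_time).
def rm_bound(n):
    return n * (2 ** (1 / n) - 1)   # U_RM(n) = n(2^(1/n) - 1)

def rm_utilization_test(tasks):
    u = sum(e / p for p, e in tasks)  # total utilization, sum of e_i / p_i
    bound = rm_bound(len(tasks))
    return u, bound, u <= bound

u, bound, ok = rm_utilization_test([(4, 1), (5, 1), (10, 3)])
print(round(u, 2), round(bound, 2), ok)  # 0.75 0.78 True
```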
More to come

 Dynamic Priority Scheduling
