Ch7 FreeRTOS
1.3
Coursework
1.4
Language Prerequisites
All RTOS we consider are written in C. This course assumes you are familiar with C syntax. Pseudocode will be written using C constructs.
RTOS source code makes use of C macros to implement simple operations. Macros use textual expansion to provide a similar feature to functions, but without the time overhead of a function call.
RTOS make use of C conditional compilation to customise code.
C preprocessor references: introduction in the C web reference on the previous slide; comprehensive reference:
http://en.wikipedia.org/wiki/C_preprocessor

#include "sysdefs.h"
#define NUMTASKS 10
#define MAXPRIO (NUMTASKS-1)

#define portEXIT_SWITCHING_ISR( SwitchRequired ) \
    if( SwitchRequired )                         \
    {                                            \
        vTaskSwitchContext();                    \
    }                                            \
    } /* this closing brace must match with { in ENTER macro */ \
    portRESTORE_CONTEXT();

#if( configUSE_16_BIT_TICKS == 1 )
    typedef unsigned portSHORT portTickType;
    #define portMAX_DELAY ( portTickType ) 0xffff
#else
    typedef unsigned portLONG portTickType;
    #define portMAX_DELAY ( portTickType ) 0xffffffff
#endif
1.6
Language Prerequisites (2)
RTOS implementation is based on objects (tasks, semaphores, etc) which are
defined by a C structure which implements an object control block. Control
blocks are invariably accessed via pointers.
So a newly created task will be referenced via a pointer to its task control block
In FreeRTOS these pointers have type void * (pointer to anything). No type checking!
Void * pointer is given type name xTaskHandle
RTOS internals are full of pointers between structures
C uses consistent but very confusing syntax for describing pointers &
pointer types. Revise/learn this on a "need-to-know" basis.
You may need to read about this at some time during the coursework when you get
confused. Depending on your background try one of:
2-page C reference card (a summary of syntax; will not help understanding)
C FAQ (Ch 4 on pointers). If you can answer these questions you don't need to read further!
C Pocket Reference (on textbook slide). Good concise reference, not for the beginner.
Great intro to pointers & local memory by Nick Parlante (Stanford), mostly language-independent but with examples in C. 31 pages, but comprehensive. Explains pointer diagrams etc. Good for those new to pointers, but fast-paced.
Long (53 pages) tutorial (Jensen) on pointers, arrays & memory allocation in C; as above but longer.
1.7
Jargon used in these lectures
1.8
Lecture 1: Introduction
“I love deadlines. I like the whooshing sound they make as they fly by.”
Douglas Adams
1.9
Multitasking vs true concurrency
1.12
A Simple real-time system
The LPC2138 boards you will use for practical work have keys, an LCD, and both digital-to-analog output and analog-to-digital input.
One real-time task is to ensure that key-strokes are
processed and the LCD updated within 50ms. Anything
slower than 50ms is a noticeable delay and not acceptable.
Any time between 0 and 50ms is OK.
A Task is implemented by a function with an infinite loop.
In these lectures we will use C language with pseudocode
description of operations to illustrate code
Variables and functions will use Hungarian notation, where the type
is indicated by prefix characters in name.
This is difficult at first, but easy to get used to and makes understanding
code easier.
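For example, a minimal illustrative sketch of the prefix convention (v = void return, x = non-standard/typedef'd type, pv = pointer to void, ul = unsigned long, prv = private/file-static; these variable names are invented for illustration):

void vKeyHandlerTask(void *pvParameters); /* v: returns void; pv: pointer to void */
xTaskHandle xHandle;                      /* x: non-standard (typedef'd) type */
unsigned long ulTickCount;                /* ul: unsigned long */
static int prvErrorCount;                 /* prv: private (file-static) variable */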
1.13
Task pseudocode
pv => pointer to void
v  => void (no return value)
In C a pointer to void means a pointer which can point to anything.
1.14
Another task
1.15
Execution Trace
The way in which these two tasks behave can be analysed by looking at
an execution trace of the system
The idle task is added by the RTOS to ensure that something is executing even
when the application tasks are both suspended
The control task must be higher priority since it has the shorter deadline:
500us vs 50ms
1.16
Execution trace details…
a) At the start neither of our two tasks are able to run - vControlTask is waiting for the
correct time to start a new control cycle and vKeyHandlerTask is waiting for a key to be
pressed. Processing time is given to the idle task.
b) At time t1, a key press occurs. vKeyHandlerTask is now able to execute—it has a higher
priority than the idle task so is given processing time.
c) At time t2 vKeyHandlerTask has completed processing the key and updating the LCD. It
cannot continue until another key has been pressed so suspends itself and the idle task is
again resumed.
d) At time t3 a timer event indicates that it is time to perform the next control cycle.
vControlTask can now execute and as the highest priority task is scheduled processing
time immediately.
e) Between time t3 and t4, while vControlTask is still executing, a key press occurs.
vKeyHandlerTask is now able to execute, but as it has a lower priority than
vControlTask it is not scheduled any processing time.
1.17
… Cont’d
f) At t4 vControlTask completes processing the control cycle and cannot restart until the
next timer event—it suspends itself. vKeyHandlerTask is now the task with the highest
priority that is able to run so is scheduled processing time in order to process the previous
key press.
g) At t5 the key press has been processed, and vKeyHandlerTask suspends itself to wait
for the next key event. Again neither of our tasks are able to execute and the idle task is
scheduled processing time.
h) Between t5 and t6 a timer event is processed, but no further key presses occur.
i) The next key press occurs at time t6, but before vKeyHandlerTask has completed
processing the key a timer event occurs. Now both tasks are able to execute. As
vControlTask has the higher priority vKeyHandlerTask is suspended before it has
completed processing the key, and vControlTask is scheduled processing time.
j) At t8 vControlTask completes processing the control cycle and suspends itself to wait for
the next. vKeyHandlerTask is again the highest priority task that is able to run so is
scheduled processing time so the key press processing can be completed.
1.18
Simple Real-Time Application Design –
“outside-in” approach
Decompose computation into multiple tasks.
Move from peripherals inwards, specifying a separate task for each independent I/O device and adding extra tasks as necessary to perform associated computation.
Additional tasks may not be needed, e.g. the previous example uses just two tasks (dotted lines).
Good design uses the minimum number of tasks necessary to represent concurrent operations.

[Diagram: key control and LCD peripherals handled by task T1; A/D input and D/A output handled by task T2.]
1.19
Real-Time Task Structure
In a normal OS tasks often run continuously without blocking until some endpoint.
In an RTOS tasks will typically run in response to an event, and then block until the next event happens.
Real-time tasks spend most time blocked.
Most real-time tasks run forever.
All tasks must block (except the lowest priority task) in order to allow lower priority tasks to run.
Total CPU utilisation must be < 100%.
The lowest priority Idle Task may run continuously, "soaking up" any spare CPU. This avoids the special case of no task running.

Task()
{
    [ initialise ]
    for (;;) {
        [ wait for event ]
        [ process event ]
        [ signal other tasks ]
    }
}
1.20
Real-Time System Design: Summary
application specification and deadlines → specify application tasks → determine task priorities
(see Design Lecture, slide 1.7)
1.21
Summary
Now the earth was formless and empty. Darkness was on the surface of the deep.
Genesis 1:2
How to use FreeRTOS
Startup & task creation
Startup task
Stacks
Timing functions
Delaying the current task
Controlling the scheduler
Changing task priorities
Suspending and restarting the multi-tasking
1.25
FreeRTOS
1.26
FreeRTOS tasks
1.27
Task Creation
#include "freertos.h" /*FreeRTOS type definitions etc */
x = xTaskCreate(
vLcdDriveTask, /*task function*/
"LCD", /* name for debugging */
Task handle optionally
mainLCD_STACK_SIZE,
returned in a variable. NULL, /*pointer to parameters, not used in this task so NULL*/
Note & makes pointer to mainLCD_TASK_PRIORITY,
allow call-by-reference &task1 /* replace by NULL if task handle not required */
);
Return value is either
pdPASS or an error code if (x!=pdPASS) hwLcdFatalError("Task creation failed");
(see projdefs.h)
#include "freertos.h" /*FreeRTOS type definitions etc */
Task Function definition
void vLcdDriveTask(void * pvParameters); Function prototype needed if function
definition is after its use
int main(void)
{
[create the task]
}
#define configUSE_PREEMPTION 1
#define configCPU_CLOCK_HZ ( ( unsigned long ) 72000000 )
#define configTICK_RATE_HZ ( ( portTickType ) 1000 )
#define configMAX_PRIORITIES ( ( unsigned portBASE_TYPE ) 5 )
#define configMINIMAL_STACK_SIZE ( ( unsigned short ) 120 )
#define configTOTAL_HEAP_SIZE ( ( size_t ) ( 18 * 1024 ) )
#define configMAX_TASK_NAME_LEN ( 16 )
#define configUSE_TRACE_FACILITY 1
#define configIDLE_SHOULD_YIELD 1
#define configUSE_MUTEXES 1
#define configUSE_COUNTING_SEMAPHORES 1
#define INCLUDE_vTaskPrioritySet 1
#define INCLUDE_vTaskDelayUntil 1
#define INCLUDE_vTaskDelay 1
Just a quick overview of these: we will use preemption, so we set configUSE_PREEMPTION to 1; then we select the CPU clock rate, which is 72MHz; we also configure the tick timer, which means the scheduler will run every 1ms.
Then we select a minimum stack size for a task and set a total heap size.
Our code is going to use task priorities, so we set INCLUDE_vTaskPrioritySet to 1. We are also going to use the vTaskDelay utilities that help with task timing, so we select them too.
There are many more settings in the config file. Many of them are self-explanatory, but check their meaning before using them, as setting one or another may significantly increase RAM or CPU usage.
//STM32F103ZET6 FreeRTOS Test
#include "stm32f10x.h"
//#include "stm32f10x_it.h"
#include "mytasks.h"

//task priorities
#define mainLED_TASK_PRIORITY        ( tskIDLE_PRIORITY )
#define mainButton_TASK_PRIORITY     ( tskIDLE_PRIORITY )
#define mainButtonLEDs_TASK_PRIORITY ( tskIDLE_PRIORITY + 1 )
#define mainLCD_TASK_PRIORITY        ( tskIDLE_PRIORITY )
#define mainUSART_TASK_PRIORITY      ( tskIDLE_PRIORITY )
#define mainLCD_TASK_STACK_SIZE      configMINIMAL_STACK_SIZE+50
#define mainUSART_TASK_STACK_SIZE    configMINIMAL_STACK_SIZE+50

int main(void)
{
    //init hardware
    LEDsInit();
    ButtonsInit();
    LCD_Init();
    Usart1Init();
    /* [the five application tasks are created here with xTaskCreate(); creation code elided on the original slide] */
    //start scheduler
    vTaskStartScheduler(); //After this, the highest priority READY task will run
1.32
Optimising the stack size of a FreeRTOS task
As you can see, we have created 5 tasks. Each of them has a priority level and a stack size. The hardest part is defining a proper stack size – if it is too small, it may crash your program; if it is too large, we are wasting the limited resources of the microcontroller. To detect stack overflow you can use a dedicated hook function, vApplicationStackOverflowHook(), enabled by configCHECK_FOR_STACK_OVERFLOW (see the sketch below).
In this function you can set up an indicator, such as an LED flash, whenever a stack overflow occurs. This way you can tune the stack size for the task and start over.
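A minimal sketch of such a hook (assuming configCHECK_FOR_STACK_OVERFLOW is set to 1 or 2 in FreeRTOSConfig.h; the exact signature varies between FreeRTOS versions, and LED_PORT/LED_ERROR are hypothetical indicator names for this illustration):

void vApplicationStackOverflowHook( xTaskHandle pxTask, signed char *pcTaskName )
{
    (void) pxTask;
    (void) pcTaskName;    /* name of the offending task, useful when debugging */
    LED_PORT = LED_ERROR; /* hypothetical: light an LED as the overflow indicator */
    for (;;);             /* halt: after an overflow the system state cannot be trusted */
}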
FreeRTOS API – Task Control in Detail

Task create/delete:
  xTaskCreate()
  xTaskDelete()
Task sleeping:
  vTaskDelay()
  vTaskDelayUntil()
Kernel control:
  vTaskSuspendAll()
  xTaskResumeAll()
  portENTER_CRITICAL()
  portEXIT_CRITICAL()
Task suspension:
  vTaskSuspend(xTaskHandle h)
  vTaskResume(xTaskHandle h)

[Task state diagram: a READY task moves to RUNNING when scheduled and back to READY when preempted. A RUNNING task moves to BLOCKED by calling a blocking API function (vTaskDelay(), xSemaphoreTake(), etc.) and returns to READY on the awaited event or tick timeout. A call to vTaskSuspend() moves a task in any state to SUSPENDED; vTaskResume() returns it to READY.]
1.33
Creation
1.34
Writing a FreeRTOS task routine
If you are familiar with the RTOS concept, you know that a program written for FreeRTOS is organised as a set of independent tasks.
Each task is normally not directly related to other tasks and runs within its own context. Practically speaking, a task is a function with its own stack, running as a separate small program.
When multiple tasks are created, a scheduler switches between tasks according to their assigned priorities. The task itself is a function with an endless loop, which never returns:
Task()
{
for (;;) {
[ do something ]
vTaskDelay(2); /* wait 2 clock ticks */
}
}
1.35
Difference between vTaskDelay & vTaskDelayUntil
Task delay in detail:
In vTaskDelay you say how long after calling vTaskDelay you want to be woken.
In vTaskDelayUntil you say the time at which you want to be woken.
The parameter of vTaskDelay is the delay period as a number of ticks from now.
The parameter of vTaskDelayUntil is the absolute time in ticks at which you want to be woken, calculated as an increment from the time you were last woken.

[Timing diagram for TaskDelay(2): the task alternates between the Sleeping, Ready and Running states. D = TaskDelay(2) called; W = wakeup from tick; S = scheduled.]
1.36
Task Delay – vTaskDelayUntil()
Purpose – delay execution until a specific time to ensure accurate
periodic execution.
1.37
Let's write a simple LED flasher task: a basic routine that flashes an LED every 1s. This is what the task looks like (see the sketch below).
To set the timing we use the vTaskDelayUntil function. FreeRTOS counts ticks every time the scheduler runs (every 1ms by default). By setting the frequency value to 1000 we get a 1s period.
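A minimal sketch of such a task; this mirrors the vLEDFlashTask used in the Button_LCD_UART example later in these notes (LEDToggle() is a board-support call):

void vLEDFlashTask( void *pvParameters )
{
    portTickType xLastWakeTime;
    const portTickType xFrequency = 1000;  /* 1000 ticks = 1 s at a 1 ms tick */
    xLastWakeTime = xTaskGetTickCount();   /* initialise with the current tick count */
    for( ;; )
    {
        LEDToggle(5);                      /* toggle the LED */
        vTaskDelayUntil( &xLastWakeTime, xFrequency ); /* wake exactly one period after the last wake */
    }
}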
Read an Analog Input from a device and subsequently drive a motor.
Also want to send an SMS using a GSM Module every 20 seconds.
analogMotorTask()
{
// This occurs only once
// Initialization of Analog and Motor here
initAnalogPin();
initMotor();
// This occurs once every second
while(1)
{
// Read Analog Input
analogRead();
// Signal the Motor
signalMotor();
// Block AnalogMotorTask function for 1 second
vTaskDelay(1000);
}
}
sendSMS()
{
// This occurs only once
// Initialization of GSM here
initGSM();
// This will occur every 20 seconds
while(1)
{
// Send SMS
sendingSMS();
// Block sendSMS function for 20 seconds (20000 ticks at a 1 ms tick)
vTaskDelay(20000);
    }
}
Use of vTaskDelete()

/* Scheduler include files. */
#include "FreeRTOSConfig.h"
#include "FreeRTOS.h"
#include "task.h"
#include "croutine.h"
#include "uart.h" // Explore Embedded UART library

/* Create the three tasks with priorities 1,2,3. The tasks are only created here;
 * they will be executed once the scheduler is started.
 * An idle task is also created, which will run when no task is in the RUN state. */
xTaskCreate( MyTask1, ( signed char * )"Task1", configMINIMAL_STACK_SIZE, NULL, 1, &TaskHandle_1 );
xTaskCreate( MyTask2, ( signed char * )"Task2", configMINIMAL_STACK_SIZE, NULL, 2, &TaskHandle_2 );
xTaskCreate( MyTask3, ( signed char * )"Task3", configMINIMAL_STACK_SIZE, NULL, 3, &TaskHandle_3 );
xTaskCreate( MyIdleTask, ( signed char * )"IdleTask", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY, NULL );
vTaskStartScheduler(); /* start the scheduler */
while(1);
}
Task switching depending on the priorities
xTaskHandle TaskHandle_1; xTaskHandle TaskHandle_2; xTaskHandle TaskHandle_3;
xTaskHandle TaskHandle_4; xTaskHandle TaskHandle_5;
static void MyTask1(void* pvParameters)
{
LED_PORT = LED_Task1; /* Led to indicate the execution of Task1*/
UART_Printf("\n\rIn Task1");
vTaskDelete(TaskHandle_1);
}
static void MyTask2(void* pvParameters)
{
LED_PORT = LED_Task2; /* Led to indicate the execution of Task2*/
UART_Printf("\n\rIn Task2, waiting for some time");
vTaskDelay(200);
LED_PORT = LED_Task2; /* Led to indicate the execution of Task2*/
UART_Printf("\n\rBack in Task2");
vTaskDelete(TaskHandle_2);
}
/* MyTask3 follows the same pattern (body elided on the original slide) */
    vTaskDelete(TaskHandle_3);
}
static void MyTask4(void* pvParameters)
{
LED_PORT = LED_Task4; /* Led to indicate the execution of Task4*/
UART_Printf("\n\rIn Task4, waiting for some time");
vTaskDelay(200);
LED_PORT = LED_Task4; /* Led to indicate the execution of Task4*/
UART_Printf("\n\rBack in Task4");
vTaskDelete(TaskHandle_4);
}
static void MyTask5(void* pvParameters)
{
LED_PORT = LED_Task5; /* Led to indicate the execution of Task5*/
UART_Printf("\n\rIn Task5, waiting for some time");
vTaskDelay(200);
LED_PORT = LED_Task5; /* Led to indicate the execution of Task5*/
UART_Printf("\n\rBack in Task5");
vTaskDelete(TaskHandle_5);
}
static void MyIdleTask(void* pvParameters)
{
while(1)
{
LED_PORT = LED_IdleTask; /* Led to indicate the execution of Idle Task*/
UART_Printf("\n\rIn idle state");
}
}
[Timing diagram: each task blocks for 200 ticks before resuming.]
Kernel Control: Critical Sections
FreeRTOS has two ways for a task to ensure temporarily private access to data or hardware resources. A section of code requiring this is called a critical section.

1. Switch off all interrupts (interrupt-lock)
Advantages:
  Nothing can preempt the current task
  No ISR will run
  Fast to implement
Disadvantages:
  Long critical sections will impact ALL interrupt latencies
  Therefore only useful for short critical sections

portENTER_CRITICAL();
[ private access to data ]
portEXIT_CRITICAL();
1.38
Nesting critical sections
The use of functions to enter & exit critical sections raises one subtle but important issue: what happens when critical sections are nested?
The desired behaviour is that the nested portEXIT_CRITICAL() or xTaskResumeAll() calls do NOT exit the critical section. This must happen only on the outermost exit.
This allows functions defining critical sections to call other functions which define critical sections internally. For example, RTOS functions may be called which themselves define critical sections.
Note that blocking RTOS functions, like vTaskDelay(), must never be called inside critical sections.
FreeRTOS critical section commands are all nestable in this way – but check the RTOS documentation before assuming this for other RTOS.

f1()
{
    portENTER_CRITICAL();
    f2();
    /* want this still to be critical */
    portEXIT_CRITICAL();
    /* no longer critical after outermost exit */
}

f2()
{
    portENTER_CRITICAL();
    /* nested critical section */
    portEXIT_CRITICAL();
}
1.39
Task priorities & scheduling
The scheduler uses task priorities to determine which task to run whenever multi-
tasking is enabled (ie not in a critical section)
The scheduler will ensure that the currently running task is always the highest priority
READY task.
Making ready a high priority task will therefore cause preemption of the current task.
If there are multiple READY tasks of equal highest priority the scheduler will time-slice
between them
See slide 1.123
Priorities can be inspected and changed dynamically with API functions as below.
A task can change its own priority without storing its task handle, by using NULL as first
parameter.
{
    xTaskHandle xh1; /* variable to store task handle of newly created task */
    unsigned portBASE_TYPE uxCurrPriority = uxTaskPriorityGet(NULL); /* store priority of this task */
    /* create new task, handle xh1, priority 1 more than this task */
    xTaskCreate( vT1Code, "T1", 100, NULL, uxCurrPriority+1, &xh1 );
    vTaskDelay(2); /* wait between 1 & 2 clock ticks */
    vTaskPrioritySet( xh1, uxCurrPriority-1 ); /* decrease priority of new task */
}
1.40
Determining task priorities
1.41
Lecture 2: Summary

API functions:
Task create/delete: xTaskCreate(), xTaskDelete()
Task sleeping: vTaskDelay(), vTaskDelayUntil()
Kernel control: vTaskSuspendAll(), xTaskResumeAll(), portENTER_CRITICAL(), portEXIT_CRITICAL(), vTaskPrioritySet(), uxTaskPriorityGet()
Task suspension: vTaskSuspend(), vTaskResume()

The kernel is all the RTOS code which implements the API.
A single startup function creates tasks and then calls the FreeRTOS scheduler to initiate multi-tasking.
Tasks have states: RUNNING, READY, BLOCKED, SUSPENDED. (SUSPENDED is a special case of BLOCKED requiring an explicit Resume() to restart.)
The FreeRTOS scheduler uses task priorities, with equal-priority scheduling time-sliced between tasks. Tasks can be preempted when higher priority tasks become READY.
API functions exist to create, delete, and delay tasks. Tasks are referenced via task handles.
Critical sections can be made by temporarily stopping interrupts, or by disabling scheduling.
The API contains delay functions which make tasks sleep (block) for a given number of clock ticks, or till a specified clock tick.
Tasks can dynamically control the priorities of themselves, or of any other task where they know the task handle.
1.42
Review Questions 2
1.43
1.45
Lecture 3: The Shared Data Problem
Collecting data is only the first step toward wisdom, but sharing data is the
first step toward community.
Henry Louis Gates Jr.
1.46
How to think about concurrent programming
1.47
Task execution interleaving
Problem
Implement a function vRecordError() which can be called from multiple tasks and counts the total number of times it is called.
Single-threaded solution:

static int prvErrorCount = 0; /* initialise to 0 */

void vRecordError(void)
{ /* this function assumes overflow will never happen */
    int x;
    x = prvErrorCount;
    prvErrorCount = x + 1;
}

The prvErrorCount variable is shared data.
1.49
vRecordError() is not safe when multi-tasking
Note that the two local x variables are separate: each task has a separate stack.
However, the relative timing shown here means that two calls result in prvErrorCount increasing by only 1. Error!

Task 1:
vRecordError()
{
    int x;
    x = prvErrorCount;
    [ Task 2 preempts task 1 ]
        Task 2:
        vRecordError()
        {
            int x;
            x = prvErrorCount;
            prvErrorCount = x + 1;
        }
        [ Task 2 blocks ]
    prvErrorCount = x + 1; /* overwrites Task 2's increment - Error! */
}
1.50
Curse of the infrequent error
1.51
Atomic operations
The solution to this problem is to recognise that the read and write operations on prvErrorCount must be atomic (executed without interruption) for correct operation. This can be enforced in an RTOS by making the code a critical section: the kernel will not allow a task switch during a critical section.

static int prvErrorCount = 0; /* initialise to 0 */

/* safe to call from multiple tasks */
void vRecordError(void)
{ /* this function assumes overflow will never happen */
    int x;
    portENTER_CRITICAL();
    x = prvErrorCount;     /* the critical section creates an atomic */
    prvErrorCount = x + 1; /* read-and-increment operation */
    portEXIT_CRITICAL();
}
1.52
Assignment statements are not atomic
1.53
Re-entrant Functions
Functions which are safe to call from multiple tasks are called re-entrant.
vRecordError() with a critical section protecting the memory read/write is re-entrant.
To be re-entrant a function must:
  Store intermediate results on its stack, not in static or global variables
  Ensure all atomic operations on static or global variables or hardware are protected by critical sections
  Only call other functions which are re-entrant
NB – a function can always be made re-entrant by enclosing its entire body in a critical section.
  If the function is potentially long this solution is undesirable.
  If the function can BLOCK this is an error.

static int prvErrorCount = 0;
static int x;

vRecordError()
{ /* this function assumes overflow will never happen */
    portENTER_CRITICAL();
    x = prvErrorCount;
    prvErrorCount = x + 1;
    portEXIT_CRITICAL();
}

The critical section protects both x & prvErrorCount so this is re-entrant.
NB: why does a static x here mean less RAM use than x as a local (previous slides)?
1.54
Is printf re-entrant?
You cannot assume the implementation will be re-entrant – printf
may use static storage for intermediate results which will be shared
between different concurrent function calls
Even if this is OK, the results of printing out from two different tasks
may be that the individual characters printed are interleaved.
This is usually not the desired behaviour
In general do not assume that standard library functions are re-
entrant unless this is stated.
You can always create re-entrant wrappers to non-re-entrant
functions:
unsigned int urand_safe(void)
{
    int x; /* why must x be local here? */
    portENTER_CRITICAL();
    x = rand(); /* library random number routine */
    portEXIT_CRITICAL();
    return x;
}
1.55
Inconsistent Data Structures
A common cause of the shared data problem is when a data structure is updated inconsistently.
In the code below ControlTask() expects Temp1 & Temp2 to be temperatures read in the same iteration of MonitorTask(). This will not in general be the case.
Here Temp1, Temp2 form a data structure which must remain in a consistent state when it is read or written:
  The two reads in ControlTask() must be atomic
  The two writes in MonitorTask() must be atomic

MonitorTask()
{
    for (;;) {
        [ read temperatures ]
        Temp1 = [ first tank temperature ];
        Temp2 = [ second tank temperature ];
        vTaskDelay(100);
    }
}

ControlTask()
{
    for (;;) {
        if (Temp1 != Temp2) [ sound a buzzer ];
        [ do other stuff ]
    }
}
1.56
Case study: Best Fit Memory Allocation
1.57
Memory Allocation (2)
Use a linked list of memory blocks, FreeList. Each list node contains an array of n bytes (the memory that can be allocated) together with an arraysize indication (set equal to n) and a pointer to the next list node.
Initially all free memory (assumed contiguous) forms a single node.
Optimisation: store the list sorted by arraysize, smallest first.

[Diagram: FreeList -> node (arraysize: 16, free memory) -> node (arraysize: 8, free memory) -> node (arraysize: 12, free memory) -> NULL]
1.58
Memory allocation (3)
[Diagram: the best-fit block is split. The old node has its size field changed (e.g. arraysize 16 -> 10) and the freed-off memory is given to the requesting task.]
1.61
Concurrent code to allocate memory
This implementation assumes that MallocWord() is never used in an interrupt. What change would have to be made if it were?

void *MallocWord( int n )
/* return a pointer to a block of memory size n words */
{
    Fnode *p, *q = NULL; /* return NULL if no mem left */

    vTaskSuspendAll();
    p = [ smallest node in FreeList with arraysize >= n, if one such exists ];
    if (p != NULL) [ remove p from FreeList ];
    xTaskResumeAll();
    if ([ space left for another block in p mem array ]) {
        q = [ new Fnode structure of correct size ];
        [ adjust arraysize field of p ];
        [ insert q into FreeList - atomic (see next slide) ];
    }
    if (p != NULL) { /* check whether block was allocated */
        return p->mem; /* if so, return pointer to first word of free memory */
    } else {
        return NULL; /* if not, return NULL */
    }
}
1.62
Free list node insertion
/* the insertion is very fast so switch off interrupts for the critical section */
portENTER_CRITICAL();
q->next = FreeList;
FreeList = q;
portEXIT_CRITICAL();
1.63
Lecture 3: Summary
1.64
Review Questions 3
3.1 In slide 1.55 explain why both the two reads AND the two writes
must be atomic by giving in each case an execution trace that
leads to an error otherwise.
3.2 In a priority-scheduled RTOS the highest-priority READY task will
always run. Suppose in the problem from slide 1.55 you may
assume that MonitorTask() is higher priority than ControlTask().
How does this change the necessary critical section code in either
task? Explain your reasoning.
3.3 Shared-data problems are important and a common source of
bugs in RTOS applications. Problem Sheet 1 contains further
examples.
1.65
1.68
Lecture 4: Semaphores & resource access
"We semaphore from ship to ship, but they're sinking, too." Mignon McLaughlin
The previous lecture showed how shared data structures in
memory need private access from tasks to maintain consistency
Implemented via critical sections which enforce atomicity of
operations
This is a brute-force way to enforce exclusive access. Switching
off task preemption (or, more drastically, interrupts) for long
periods of time is not feasible, since it blocks ALL other system
tasks
We need a more selective way of blocking just the tasks
that try to use the shared resource
This applies to many different types of resource: hardware, data
structure, software.
The solution is to use semaphores/mutexes.
- Semaphore: a mechanism (key, flag, variable, ...) that helps manage shared resources and keep access orderly. It is used to synchronise between tasks, and between tasks and interrupts.
- Mutex (Mutual exclusion): a mutually exclusive flag/key/object. It acts as a gatekeeper for a section of code or a resource, allowing one thread in and blocking access for all other threads. This ensures that the controlled code or resource is accessed by only a single thread at a time.
1.69
Semaphores are provided by almost all RTOS as part of the API
(application program interface) together with the basic task creation &
delay functions.
This lecture will examine:
Why are semaphores useful?
How do they work?
What variants of semaphore can be found in different RTOS – what are the advantages of the different variants?
What are the typical problems using semaphores.
When compared with use of critical sections, semaphores are
expensive:
The semaphore operations typically take longer to execute than critical
section entry & exit
The semaphores themselves require a small amount of RAM and increase the size of the RTOS kernel itself
RTOS will usually allow semaphore code to be removed from the RTOS
when it is not needed, saving code space.
1.70
Semaphore introduction
[Diagram: three semaphores S1, S2, S3. One task holds the S1 token; another task is blocked waiting for it.]
- Semaphore is a signaling mechanism and a thread waiting on a semaphore can be signaled by another
thread.
- Semaphore is for processes.
- Semaphore is atomic but not singular in nature.
- A binary semaphore can be used as a mutex along with providing feature of signaling amongst
threads.
- Semaphore value can be changed by any process acquiring or releasing the resource.
- Semaphore is an integer variable.
- If locked, a semaphore can be acted upon by different threads.
- A semaphore uses two atomic operations, wait and signal for process synchronization.
- Only one process can acquire binary semaphore at a time but multiple processes can simultaneously
acquire semaphore in case of counting semaphore.
- Semaphore works in kernel space.
- The concept of ownership is absent in semaphore.
- Semaphore can be categorized into counting semaphore and binary semaphore.
- If all resources are being used, the process requesting the resource performs a wait() operation and blocks itself until the semaphore count becomes greater than zero.
What Is Mutex?
In concurrent programming, Mutex is an object in a program that serves as a lock, used to negotiate
mutual exclusion among threads. Mutex is a special case of the Semaphore; it is a mutual exclusion
object that synchronizes access to a resource. A mutex object only allows one thread into a controlled
section, forcing other threads which attempt to gain access to that section to wait until the first thread
has exited from that section.
When a program is started, a mutex is created with a unique name. After this stage, any thread that
needs the resource must lock the mutex from other threads while it is using the resource. The mutex is
set to unlock when the data is no longer needed or the routine is finished.
Mutex is a special type of binary semaphore used for controlling access to a shared resource. It is used to avoid extended priority inversion, using the priority inheritance technique.
Priority inheritance can be implemented in two ways, changing the priority of the task holding the mutex:
1. to a priority equal to the priority of the task requesting the mutex (adopted in FreeRTOS), or
2. to a higher priority than the priority of the task requesting the mutex,
so that the task trying to acquire the mutex will get the mutex as soon as the other task releases it.
#include "FreeRtOSConfig.h" /* Scheduler include files. */
#include "FreeRTOS.h"
#include "task.h"
#include "croutine.h"
#include "semphr.h"
#include "uart.h" // Explore Embedded UART library
static void My_LPT(void* pvParameters);
static void My_MPT(void* pvParameters);
static void My_HPT(void* pvParameters);
xTaskHandle LPT_Handle;
xTaskHandle MPT_Handle;
xTaskHandle HPT_Handle;
xSemaphoreHandle xSemaphore = NULL;
#define LED_My_LPT 0x02u //Low/Medium/High Priority Task
#define LED_My_MPT 0x04u
#define LED_My_HPT 0x08u
#define LED_PORT LPC_GPIO2->FIOPIN
int main(void)
{
SystemInit(); /* Initialize the controller */
UART_Init(38400); /* Initialize the Uart module */
LPC_GPIO2->FIODIR = 0xffffffffu;
xSemaphore = xSemaphoreCreateMutex(); /* Create Mutex */
if(xSemaphore != NULL)
{
UART_Printf("\n\r\n\nSemaphore successfully created, Creating low priority task");
xTaskCreate( My_LPT, ( signed char * )"LowTask", configMINIMAL_STACK_SIZE, NULL, 1, &LPT_Handle );
vTaskStartScheduler(); //Run My_LPT (Low Priority Task)
} else
UART_Printf("\n\rFailed to create Semaphore");
while(1); //you should never get here
return 0;
}
static void My_LPT(void* pvParameters)
{
unsigned char LowPrio;
LowPrio = uxTaskPriorityGet(LPT_Handle);
UART_Printf("\n\rLPT:%d,Acquiring semaphore",LowPrio);
xSemaphoreTake(xSemaphore,portMAX_DELAY);
/* ... remainder of My_LPT elided on the original slide ... */
}
static void My_MPT(void* pvParameters)
{
uint8_t MidPrio;
MidPrio = uxTaskPriorityGet(MPT_Handle);
UART_Printf("\n\rIn MPT:%d",MidPrio);
vTaskDelete(MPT_Handle);
}
static void My_HPT(void* pvParameters)
{
unsigned char HighPrio;
HighPrio = uxTaskPriorityGet(HPT_Handle);
xSemaphoreTake(xSemaphore,portMAX_DELAY);
LED_PORT = LED_My_HPT; /* Led to indicate the execution of My_HPT*/
UART_Printf("\n\rIn HPT:%d, Acquired the semaphore",HighPrio);
HighPrio = uxTaskPriorityGet(HPT_Handle);
UART_Printf("\n\rIn HPT:%d, releasing the semaphore",HighPrio);
xSemaphoreGive(xSemaphore);
}
SemaWait(Semaphore s)
{
    if (s->state == 1) {
        s->state = 0;
    } else {
        [ add current task to s->waiters ]
        [ suspend current task ]
    } /* if suspended another task will run */
}

SemaSignal(Semaphore s)
{
    if ( [ s->waiters is empty ] ) {
        s->state = 1; /* give token back */
    } else {
        [ wake up highest priority task in s->waiters ]
        /* the task we have woken up is implicitly given the token */
    }
    /* the task woken up may preempt the current task and */
    /* run immediately if higher priority than the signalling task */
}

Both function bodies are critical sections.
1.72
Semaphore Usage
There is no agreement on the names for the two basic semaphore operations. The common names are shown in the table – all make sense.

Command to acquire token | Command to release token
Wait                     | Signal
Acquire                  | Release
Take                     | Give
Pend                     | Post
Lock                     | Unlock

Typical FreeRTOS code is shown below – with a TIMEOUT to detect when tasks never get the semaphore. The semaphore Wait includes an optional timeout.

#include "semphr.h"
xSemaphoreHandle s;
#define TIMEOUT 100 /* max no of ticks to wait for sema */

Task1()
{
    /* create the semaphore in just ONE task */
    vSemaphoreCreateBinary( s ); /* in FreeRTOS semaphore creation is managed via a macro */
    for (;;) {
        if (xSemaphoreTake(s, TIMEOUT)==pdFALSE) [ handle timeout error ];
        /* use shared resource */
        if (xSemaphoreGive(s) == pdFALSE) [ handle multiple signal error – should never happen ];
    }
}
1.73
Signal & Wait Paradigm
The simplest use for a binary semaphore is unusual, because it does not protect a
resource.
The semaphore acts as an RTOS primitive to synchronise a waiting task
WaitTask with a signalling task SignalTask. The semaphore use is illustrated in the
diagram
For this application the semaphore must be initialised without a token
(state=0). If the RTOS does not allow creation like this an initial call to SemaWait()
immediately after creation (which will not block) will have the effect of changing
semaphore state as required.
0 WaitTask
SignalTask
B
SignalTask() WaitTask()
{ {
for (;;) { for (;;) {
Symbol for a binary
[ get next result ] [ wait on semaphore ]
semaphore, the
[ signal semaphore ] number indicates [do next action ]
} }
initial state
} }
1.74
One to Many Synchronisation Paradigm
[Diagram: SignalTask signals (with flush) a binary semaphore B, initial state 0; WaitTask1, WaitTask2 and WaitTask3 all wait on it.]
1.75
Mutual Exclusive Access Paradigm
[Diagram: AccessTask1, AccessTask2 and AccessTask3 share a resource guarded by a binary semaphore B, initial state 1.]
1.76
Multiple Resource Exclusive Access Paradigm
A counting semaphore can be used to control access to N identical resources.
The initial value is set to the number of resources (here 2): at any time at most 2 tasks hold a token and so can use a resource.

[Diagram: AccessTask1, AccessTask2 and AccessTask3 share Shared Resource 1 and Shared Resource 2 via a counting semaphore C, initial state 2.]
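A minimal FreeRTOS sketch of this paradigm (assuming configUSE_COUNTING_SEMAPHORES is enabled, as in the configuration shown earlier; resource use is left as pseudocode):

#include "FreeRTOS.h"
#include "semphr.h"

#define NUM_RESOURCES 2
xSemaphoreHandle xResourceSema;

void vSetup(void)
{
    /* max count = initial count = number of identical resources */
    xResourceSema = xSemaphoreCreateCounting(NUM_RESOURCES, NUM_RESOURCES);
}

void vAccessTask(void *pvParameters)
{
    for (;;) {
        if (xSemaphoreTake(xResourceSema, portMAX_DELAY) == pdTRUE) {
            [ use one of the identical resources ]
            xSemaphoreGive(xResourceSema); /* return the token */
        }
    }
}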
1.77
Recursive (nested) semaphore use
When semaphores control an exclusive resource the same issue arises as for critical sections (slide 1.39).
In the code below the semaphore operations are executed in order 1,2,3,4.
Operation 2 will cause the task to block indefinitely.
Operation 3 will cause the semaphore to be released when it should not be.
Some semaphore APIs allow nested use of semaphore Take & Give operations by a task. The inner operations (2 & 3) have no effect on semaphore state, so that the semaphore token is taken for the whole period of the outermost section.
This allows a freer coding style with semaphores. It is usually found on specialised "mutex" semaphores.

void f1(void)
{
1:  SemaphoreTake( s );
    f2();
    /* want this still to hold s */
4:  SemaphoreGive( s );
    /* no longer holding s after outermost exit */
}

void f2(void)
{
2:  SemaphoreTake( s );
    /* uses shared resource */
3:  SemaphoreGive( s );
}

This code will not work with normal semaphores.
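In FreeRTOS the nestable variant is the recursive mutex (a minimal sketch, assuming configUSE_RECURSIVE_MUTEXES is enabled):

#include "FreeRTOS.h"
#include "semphr.h"

xSemaphoreHandle xRecMutex; /* created once with xSemaphoreCreateRecursiveMutex() */

void f2(void)
{
    xSemaphoreTakeRecursive(xRecMutex, portMAX_DELAY); /* nested take: just increments a count */
    /* uses shared resource */
    xSemaphoreGiveRecursive(xRecMutex);                /* does not release the mutex yet */
}

void f1(void)
{
    xSemaphoreTakeRecursive(xRecMutex, portMAX_DELAY); /* outermost take */
    f2();                                              /* safe: the same task may take the mutex again */
    xSemaphoreGiveRecursive(xRecMutex);                /* outermost give actually releases it */
}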
1.78
Case Study: Memory Allocation with semaphores

void *MallocWord( int n )
{
    Fnode *q = NULL; /* return NULL if no mem left */
    Fnode *p;

    SemaphoreTake(FreeListSema);
    p = [ smallest node in FreeList with arraysize >= n, if one such exists ];
    if (p != NULL) [ remove p from FreeList ];
    SemaphoreGive(FreeListSema);
    if ( [ space left for another block in p mem array ] ) {
        q = [ new Fnode structure of correct size ];
        [ adjust arraysize field of p ];
        FreeBlock(q->mem); /* reuse FreeBlock() code to insert q into FreeList */
    }
    return (p != NULL) ? p->mem : NULL;
}

void FreeBlock( void *p )
{
    Fnode *mcb = [ pointer to Fnode structure containing block p ];
    SemaphoreTake(FreeListSema);
    mcb->next = FreeList;
    FreeList = mcb;
    SemaphoreGive(FreeListSema);
}

This is correct, but is it really a good solution?
When to create semaphores
Timeout: return from the wait operation with an error indication if blocked for more than a given time – the application task can then recover from the error. (A zero timeout is conventionally used to disable this feature.)
1.82
Lecture 4: Summary
Semaphores can be used to:
  Enforce exclusive access to a resource – "Mutual Exclusion"
  Synchronise tasks
Semaphores must be created by the application before they are used. Make sure creation is guaranteed to be before use, regardless of scheduling.
Binary semaphores are the most basic function. Additional features that may be added (in any combination) are:
  Counting
  Timeout
  Flush
  Mutex (discussed in the lecture on scheduling)

Diagram notation: semaphore Wait/Signal is indicated by an arrow from/to the semaphore; the number indicates the initial state:
  B 0 – binary semaphore initialised to 0
  B 1 – binary semaphore initialised to 1
  C 4 – counting semaphore initialised to 4
1.83
Lecture 4: Review Questions
1.84
Lecture 5: Inter-Task & Resource Sharing
Semaphores: used to synchronise access to shared resources.
Event Flags: used to synchronise activities requiring the coordination of multiple tasks.
Mailboxes, Pipes, Message queues: used to manage messages sent between tasks.
Signal Events
Signal events are used to synchronise tasks, for example to force a task to execute at some predefined event.
Example: a washing machine has two tasks. Task A controls the motor; Task B reads the water level from the inlet water sensor.
- Task A must wait for the water to be full before starting the motor. This can be done using a signal event.
- Task A waits for a signal event from Task B before starting the motor.
- When Task B detects that the water has reached the required level, it sends the signal to Task A.
In this scheme a task waits for the signal before executing; it stays in the WAITING state until the signal is set. One or more signals can be set from any other task.
The advantages are fast execution and lower RAM use than semaphores and message queues; the disadvantage is that they can only be used when a single task receives the signal.
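FreeRTOS (v8.2 and later, newer than the version these slides are based on) implements this pattern most directly with direct-to-task notifications; a minimal sketch of the washing-machine example, where xTaskA is the stored handle of the waiting task:

xTaskHandle xTaskA; /* handle of Task A, stored when it is created */

void vTaskA(void *pvParameters)
{
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY); /* block until notified; clear the count on exit */
        [ start the motor ]
    }
}

void vTaskB(void *pvParameters)
{
    for (;;) {
        [ wait until the water reaches the required level ]
        xTaskNotifyGive(xTaskA); /* signal Task A */
    }
}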
Message Queue
A message queue is a mechanism that allows tasks to communicate with each other. It is a FIFO buffer defined by its length (the number of elements the buffer can store) and its data size (the size of the elements in the buffer). Typical applications are buffering Serial I/O and buffering commands sent to a task.
A task can write to the queue:
- The task blocks when it sends data to a full message queue
- The task unblocks when space becomes free in the message queue
- If multiple tasks are blocked, the task with the highest priority is unblocked first
A task can read from the queue:
- The task blocks if the message queue is empty
- The task unblocks when data arrives in the message queue
- As with writing, tasks are unblocked according to priority
Mail Queue
Like a message queue, but data is passed as memory blocks instead of single elements. Each memory block must be allocated before putting data in, and freed after taking data out.
At any one time only one task can hold the mutex. Other tasks wanting the same mutex must block until the holding task releases it.
Basically a mutex is like a binary semaphore, but used for mutual exclusion rather than synchronisation. It also includes a priority inheritance mechanism to reduce the priority inversion problem. The mechanism is easy to understand through this example:
- Task A (low priority) acquires the mutex
- Task B (high priority) then wants to acquire the same mutex
- Task A's priority is temporarily raised to Task B's, to allow Task A to complete
- Task A releases the mutex, its priority is restored, and Task B can continue executing.
Lecture 5: Data Transfer & Message Queues
“Dogs come when they're called; cats take a message and get back
to you later.”
Mary Bly
1.85
MESSAGE QUEUE
A queue is a FIFO (First In First Out) type buffer where data is written to the end (tail) of the queue and
removed from the front (head) of the queue. It is also possible to write to the front of a queue.
A queue can either hold the data itself or a pointer to the data. In FreeRTOS, data items are copied directly into the queue. Each data item is of fixed size. The size of a data item and the maximum number of data items are fixed when the queue is created.
A queue can also be used as a semaphore, mutex, event flag, etc. FreeRTOS does exactly this; it reduces memory usage when multiple RTOS services, e.g. a semaphore and a queue, are used in the same application.
Operations on queue
1. Create
2. Read
In a read operation the data item is returned. If the queue is empty the requesting task will wait for a specified time. If multiple tasks are waiting, the data item is returned either to the highest priority task or to the task which made the request first, depending on the RTOS implementation. For the second case a waiting list is maintained with each queue. FreeRTOS implements the first way.
3. Write (Post)
In a write operation the data item is copied directly into the queue. If the queue is full the requesting task will wait for a specified time. For multiple tasks in the wait state, the process is the same as for the read operation.
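A minimal sketch of these operations using the FreeRTOS queue API (the handle name and sizes are illustrative; complete worked examples follow below):

#include "FreeRTOS.h"
#include "queue.h"

void vQueueSketch(void)
{
    int value = 42;
    /* create: a queue holding up to 5 items of sizeof(int) bytes each */
    xQueueHandle xQ = xQueueCreate(5, sizeof(int));
    /* write (post): copy value into the queue, waiting up to 100 ticks if it is full */
    if (xQueueSend(xQ, &value, 100) != pdPASS) { [ handle full-queue timeout ] }
    /* read (pend): copy the front item into value, waiting up to 100 ticks if empty */
    if (xQueueReceive(xQ, &value, 100) != pdPASS) { [ handle empty-queue timeout ] }
}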
What is a Message Queue?
A Message Queue is a dynamically created RTOS object which
allows messages to be sent between tasks.
The queue has a first-in-first-out (FIFO) buffer which can contain any
number of messages from 0 up to a fixed limit.
QueueReceive() – extract and return the message at the front of the queue.
If the queue is empty, block until a message is written.
QueueSend() – write a message to the queue
Blocking send – if queue is full block writing process until space is
available for write completion (with optional timeout).
Non-blocking send – return immediately with an error indication if the
write could not complete due to lack of space in queue.
#define MaxQueueSize 3
#define MaxElementsPerQueue 20
xQueueHandle MyQueueHandleId;
int main(void)
{
SystemInit(); /* Initialize the controller */
UART_Init(38400); /* Initialize the Uart module */
LPC_GPIO2->FIODIR = 0xffffffffu;
MyQueueHandleId = xQueueCreate(MaxQueueSize,MaxElementsPerQueue); /* create the queue */
if(MyQueueHandleId != 0)
{
UART_Printf("\n\rQueue Created");
xTaskCreate( MyTask1, ( signed char * )"Task1", configMINIMAL_STACK_SIZE, NULL, 3, &TaskHandle_1 );
xTaskCreate( MyTask2, ( signed char * )"Task2", configMINIMAL_STACK_SIZE, NULL, 2, &TaskHandle_2 );
vTaskStartScheduler(); /* start the scheduler */
}
else
UART_Printf("\n\rQueue not Created");
while(1);
return 0;
}
static void MyTask1(void* pvParameters)
{
char RxBuffer[MaxElementsPerQueue];
if(pdTRUE == xQueueReceive(MyQueueHandleId,RxBuffer,100))
{
LED_PORT = LED_Task1; /* Led to indicate the execution of Task1*/
UART_Printf("\n\rBack in task1, Received data is:%s",RxBuffer);
} else
{
LED_PORT = LED_Task1; /* Led to indicate the execution of Task1*/
UART_Printf("\n\rBack in task1, No Data received:");
}
vTaskDelete(TaskHandle_1);
}
static void MyTask2(void* pvParameters)
{
char TxBuffer[MaxElementsPerQueue];
[ fill TxBuffer with the data to send ]
if(pdTRUE == xQueueSend(MyQueueHandleId,TxBuffer,100))
{
LED_PORT = LED_Task2; /* Led to indicate the execution of Task2*/
UART_Printf("\n\rSuccessfully sent the data");
} else
{
LED_PORT = LED_Task2; /* Led to indicate the execution of Task2*/
UART_Printf("\n\rSending Failed");
}
UART_Printf("\n\rExiting task2");
vTaskDelete(TaskHandle_2);
}
USE OF QUEUE WITH DELAYS
#include "FreeRtOSConfig.h"; #include "FreeRTOS.h"; #include "task.h"
#include "croutine.h"; #include "queue.h"; #include "uart.h" // Explore Embedded UART library
#define MaxQueueSize 3
#define MaxElementsPerQueue 20
static void MyTask1(void* pvParameters); static void MyTask2(void* pvParameters);
xTaskHandle TaskHandle_1; xTaskHandle TaskHandle_2;
xQueueHandle MyQueueHandleId;
#define LED_Task1 0x02u
#define LED_Task2 0x04u
#define LED_PORT LPC_GPIO2->FIOPIN
int main(void)
{
SystemInit(); /* Initialize the controller */
UART_Init(38400); /* Initialize the Uart module */
LPC_GPIO2->FIODIR = 0xffffffffu;
MyQueueHandleId = xQueueCreate(MaxQueueSize,MaxElementsPerQueue); /* Create a queue */
if(MyQueueHandleId != 0)
{
UART_Printf("\n\rQueue Created");
xTaskCreate( MyTask1, ( signed char * )"Task1", configMINIMAL_STACK_SIZE, NULL, 3, &TaskHandle_1 );
xTaskCreate( MyTask2, ( signed char * )"Task2", configMINIMAL_STACK_SIZE, NULL, 2, &TaskHandle_2 );
vTaskStartScheduler(); /* start the scheduler */
}
else
UART_Printf("\n\rQueue not Created");
while(1);
return 0;
}

static void MyTask1(void* pvParameters)
{
char RxBuffer[MaxElementsPerQueue];
/* ... (remainder of the task bodies elided on the original slide) ... */
}
Button_LCD_UART Example (continued)
/*mytasks.c*/
#include "mytasks.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

const char * const pcUsartTaskStartMsg = "USART task started.\r\n";
const char * const pcLCDTaskStartMsg = " LCD task started.";
static xSemaphoreHandle xButtonWakeupSemaphore = NULL;
static xSemaphoreHandle xButtonTamperSemaphore = NULL;
static xSemaphoreHandle xButtonUser1Semaphore = NULL;
static xSemaphoreHandle xButtonUser2Semaphore = NULL;
xQueueHandle RxQueue, TxQueue;
char stringbuffer[39];

void vLEDFlashTask( void *pvParameters )
{
    portTickType xLastWakeTime;
    const portTickType xFrequency = 1000;
    xLastWakeTime = xTaskGetTickCount();
    for( ;; )
    {
        LEDToggle(5);
        vTaskDelayUntil(&xLastWakeTime, xFrequency);
    }
}

void vUSARTTask( void *pvParameters )
{
    portTickType xLastWakeTime;
    const portTickType xFrequency = 50;
    xLastWakeTime = xTaskGetTickCount();
    char ch;
    // Create queues capable of containing 128 characters.
    RxQueue = xQueueCreate( configCOM0_RX_BUFFER_LENGTH, sizeof( portCHAR ) );
    TxQueue = xQueueCreate( configCOM0_TX_BUFFER_LENGTH, sizeof( portCHAR ) );
    if(( TxQueue == 0 )||( RxQueue == 0 ))
    {
        // Failed to create a queue.
        LEDOn(1); LEDOn(3); LEDOn(5);
    }
    USART1PutString(pcUsartTaskStartMsg, strlen( pcUsartTaskStartMsg ));
    for( ;; )
    {
        //Echo back
        if (Usart1GetChar(&ch))
        {
            Usart1PutChar(ch);
        }else{
            vTaskDelayUntil(&xLastWakeTime, xFrequency);
        }
    }
}

uint32_t Usart1GetChar(char *ch)
{
    if(xQueueReceive( RxQueue, ch, 0 ) == pdPASS)
    {
        return pdTRUE;
    }
    return pdFALSE;
}

uint32_t Usart1PutChar(char ch)
{
    if(xQueueSend( TxQueue, &ch, 10 ) == pdPASS )
    {
        USART_ITConfig(USART1, USART_IT_TXE, ENABLE);
        return pdTRUE;
    }else{
        return pdFAIL;
    }
}
Button_LCD_UART Example (continued)
The rest of the load is left to the interrupt handler, which responds to interrupt requests and sends bytes from TxQueue, or receives bytes and places them in RxQueue. If we want to send data to a queue from an ISR, we have to use the interrupt-safe version of these functions.
void USART1_IRQHandler(void)
{
    /* The xHigherPriorityTaskWoken parameter must be initialized to pdFALSE as it will get set to
       pdTRUE inside the interrupt safe API function if a context switch is required. */
    long xHigherPriorityTaskWoken = pdFALSE;
    uint8_t ch;
    //if Receive interrupt
    if (USART_GetITStatus(USART1, USART_IT_RXNE) != RESET)
    {
        ch=(uint8_t)USART_ReceiveData(USART1);
        xQueueSendToBackFromISR( RxQueue, &ch, &xHigherPriorityTaskWoken );
    }
    if (USART_GetITStatus(USART1, USART_IT_TXE) != RESET)
    {
        if( xQueueReceiveFromISR( TxQueue, &ch, &xHigherPriorityTaskWoken ) )
        {
            USART_SendData(USART1, ch);
        }else{
            //disable Transmit Data Register empty interrupt
            USART_ITConfig(USART1, USART_IT_TXE, DISABLE);
        }
    }
    /* Pass the xHigherPriorityTaskWoken value into portEND_SWITCHING_ISR(). If it was set to pdTRUE
       inside an interrupt-safe API function then calling portEND_SWITCHING_ISR() will request a
       context switch. If it is still pdFALSE then the call will have no effect. */
    portEND_SWITCHING_ISR( xHigherPriorityTaskWoken );
}
It is necessary to know that special queue handling functions have to be used inside interrupt handlers, such as xQueueReceiveFromISR and xQueueSendToBackFromISR.
xQueueSendToFrontFromISR will send the data to the front of the queue. All the data already in the queue shifts back, and the next read of the queue will return this particular data.
Also note that there is no waiting time here: if the queue is full the function simply returns a failure immediately, as in an ISR we cannot afford to wait for space to become available in the queue.
Button_LCD_UART Example (continued)
/* * usart.c */
#include "usart.h"; #include "mytasks.h"
#define serPUT_STRING_CHAR_DELAY ( 5 / portTICK_RATE_MS )
void Usart1Init(void)
{
GPIO_InitTypeDef GPIO_InitStructure;
USART_InitTypeDef USART_InitStructure;
USART_ClockInitTypeDef USART_ClockInitStructure;
//enable bus clocks
RCC_APB2PeriphClockCmd(RCC_APB2Periph_USART1 | RCC_APB2Periph_GPIOA | RCC_APB2Periph_AFIO, ENABLE);
//Set USART1 Tx (PA.09) as AF push-pull
GPIO_InitStructure.GPIO_Pin = GPIO_Pin_9; GPIO_InitStructure.GPIO_Mode = GPIO_Mode_AF_PP;
GPIO_InitStructure.GPIO_Speed = GPIO_Speed_50MHz; GPIO_Init(GPIOA, &GPIO_InitStructure);
//Set USART1 Rx (PA.10) as input floating
GPIO_InitStructure.GPIO_Pin = GPIO_Pin_10; GPIO_InitStructure.GPIO_Mode = GPIO_Mode_IN_FLOATING;
GPIO_Init(GPIOA, &GPIO_InitStructure);
//configure NVIC
NVIC_InitTypeDef NVIC_InitStructure;
//select NVIC channel to configure
NVIC_InitStructure.NVIC_IRQChannel = USART1_IRQn;
//set priority to lowest
NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0x0F;
//set subpriority to lowest
NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0x0F;
//enable IRQ channel
NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
//update NVIC registers
NVIC_Init(&NVIC_InitStructure);
//disable Transmit Data Register empty interrupt
USART_ITConfig(USART1, USART_IT_TXE, DISABLE);
//enable Receive Data register not empty interrupt
USART_ITConfig(USART1, USART_IT_RXNE, ENABLE);
}
Button_LCD_UART Example (continued)
uint32_t Usart1PutChar(char ch)
{
if( xQueueSend( TxQueue, &ch, 10 ) == pdPASS )
{
USART_ITConfig(USART1, USART_IT_TXE, ENABLE);
return pdTRUE;
}else{
return pdFAIL;
}
}
void USART1PutString( const char * const pcString, unsigned long ulStringLength)
{
unsigned long ul;
for( ul = 0; ul < ulStringLength; ul++ )
{
if( xQueueSend( TxQueue, &( pcString[ ul ] ), serPUT_STRING_CHAR_DELAY ) != pdPASS )
{
/* Cannot fit any more in the queue. Try turning the Tx on to clear some space. */
USART_ITConfig( USART1, USART_IT_TXE, ENABLE );
vTaskDelay( serPUT_STRING_CHAR_DELAY );
/* Go back and try again with the same character. */
ul--;
continue;
}
}
USART_ITConfig( USART1, USART_IT_TXE, ENABLE );
}
[Diagram: when a message is delivered, the number of messages in the queue goes from msgs to msgs-1.]
1.87
Message-queue data: to copy or not to copy
1.88
Interlocked one-way communication
A special case of a message queue with length 1 is sometimes called a Mailbox.
Task1 sends a message to the queue and then waits on the semaphore.
Task2 will receive the message, process it, and only then signal the semaphore.
Because the two tasks stay in lock-step, Task1 knows it can over-write the buffer storage used for the sent message any time after the semaphore wait ends.

[Diagram: Task1 -> queue (length 1) -> Task2; Task2 -> binary semaphore B (initial state 0) -> Task1]

A blocking send to a message queue of length 1 will also provide interlocked communication, without the need for a semaphore. Can you see why, if the queue contains pointers to message data, the semaphore version might be preferred?
1.89
Interlocked one-way communication (2)
Pseudocode for this problem, using a queue of pointers to message data.
It does not matter whether the queue send is blocking or non-blocking, since the semaphore ensures it will never fail.
Note the message is stored in a buffer in Task1; there is no need for another buffer in Task2.

Startup()
{
    [ Create queue q1, size = 1 pointer, length = 1 ]
    [ Create binary semaphore s1, init 0 ]
    [ Create tasks Task1 & Task2 ]
    [ start scheduler ]
}

Task1()
{
    [ allocate buffer to store message ]
    for (;;) {
        [ write next message into buffer ]
        [ send pointer to buffer to queue q1 ]
        [ wait on sema s1 ]
    }
}

Task2()
{
    for (;;) {
        [ receive pointer to data from q1 ]
        [ process data ]
        [ signal to sema s1 ]
    }
}
1.90
Interlocked one-way communication (3): details under FreeRTOS

void Task1(void *p)
{
    /* buffer for data */
    char *pcBuffer = malloc(16);
    for (;;) {
        [ put new data into pcBuffer ]
        xQueueSend(q1, &pcBuffer, 0);
        xSemaphoreTake(s1, portMAX_DELAY);
    }
}

void Task2( void *p)
{
    char *pcx; /* pointer to data */
    for (;;) {
        xQueueReceive(q1, &pcx, portMAX_DELAY);
        [ process data using pcx ]
        xSemaphoreGive(s1);
    }
}

[Diagram: pcBuffer in Task1 and pcx in Task2 point to the same message buffer.]
1.91
Interlocked two-way communication
1.92
Interlocked two-way communication
Each task will block waiting for the other, so both tasks run in lock-step.
Each task can only process messages when the other is blocked, so this interlocked communication will be slower than non-interlocked.
By making either q1 or q2 of length 2 it is possible to speed up the system, by sending one message in advance of the reply received, so that one item is always buffered in the queue. Why must Task1 also be changed to enable this?

Task1()
{
    for (;;) {
        1: [ generate next message ]
        2: [ send message to queue q1 ]
        7: [ receive reply from queue q2 ]
    }
}

Task2()
{
    for (;;) {
        3: [ receive message from q1 ]
        4: [ process message ]
        5: [ generate reply ]
        6: [ write reply to q2 ]
    }
}
1.93
Non-interlocked one-way communication
[Diagram: Task1 -> queue (length N) -> Task2]
1.94
Queue Features
Pointers only, or fixed size messages?
  FreeRTOS allows any fixed-size message. The queue must specify the message length on creation.
  Send & Receive from a queue use pointers to message storage – the specified number of bytes is copied to/from the queue.
  If the queue itself contains pointers, send & receive functions must be given pointers to pointers!
Message order
  First-in first-out (FIFO) – the normal method.
  Last-in first-out (LIFO). Effectively, by sending a message LIFO you are making it the first message to be read, and therefore this is good for high priority messages. Note however that later LIFO messages will displace earlier ones at the head of the queue.
Blocking or non-blocking Send?
  With non-blocking Send the queue has one list of blocked tasks waiting to receive a message.
  With blocking Send the queue also has a list of tasks blocked waiting to send.
Timeouts allowed?
  Any blocking operation may have an optional timeout.
Tasks woken on FIFO or priority basis?
  Priority scheduling means that normally, if a list of blocked tasks waits for an event, the task woken when the event arrives is the highest priority task. This can result in high priority tasks hogging all traffic and starving lower priority tasks.
  An alternative (less common) gives messages to tasks on a strict first-come first-served basis.
Queue Broadcast/Flush?
  Wake up all tasks waiting to receive a message with a single transmitted message.
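In FreeRTOS the two message orderings correspond to xQueueSendToBack() (equivalent to plain xQueueSend()) and xQueueSendToFront(); a minimal sketch:

void vSendDemo(xQueueHandle xQ)
{
    int normalMsg = 1, urgentMsg = 2;
    /* FIFO: append at the tail - the normal method */
    xQueueSendToBack(xQ, &normalMsg, 0);
    /* LIFO: write at the head - this message will be read first,
       displacing earlier messages in the read order */
    xQueueSendToFront(xQ, &urgentMsg, 0);
}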
1.95
Lecture 5: Summary
1.96
Lecture 5: Review Questions
1.97
Lecture 6: Synchronisation
“If you don't get all the clocks synchronized when the leap
second occurs -- you could have potentially interesting effects.
The Internet could stop working, cell phones could go out.”
Geoff Chester
Barrier synchronisation
Definition
Solutions
Synchronisation Objects
Event Registers & Event Flags
1.101
Barrier Synchronisation
This is the typical activity synchronisation problem, where any number of tasks need to be temporally aligned so that they all execute specific sections of code starting at the same time.
Found where tasks need to cooperate in the solution of a problem. In this case none can start until it is known that all are ready to start.
1. When each task arrives at point A in the barrier, it must post its arrival to the other tasks, and wait.
2. When all tasks have arrived at A, all tasks are allowed to proceed from point B in the barrier.
The next few slides look at solutions.

[Diagram: Task1..Task4 each reach barrier point A, wait, and continue together from point B.]
1.102
Solution 1: Helper Task & Semaphores
The first solution conceptually uses a helper task to count the number of tasks
that have arrived.
Each task signals to a counting semaphore SemA on arrival, and then waits on another
binary semaphore SemB. The helper task loops waiting on the semaphore SemA,
incrementing a private count variable every time the semaphore is signalled.
When the helper task has counted the correct number of SemA signals it exits the loop
and signals to SemB.
All tasks must be woken, so if a flush operation on SemB is available it should be used.
Otherwise a loop is necessary to wake up all the tasks through repeated signals.
The helper task must have higher priority than the signalling tasks if
SemA is binary – otherwise counts may be lost.
A better solution, shown here, is for SemA to be a counting semaphore.
It is still a good idea to give the helper task high priority.
[Diagram: Task1..Task3 signal counting semaphore SemA (init 0); the
helper task counts C signals, then flushes binary semaphore SemB
(init 0) so Task1..Task3 all proceed from barrier point B]
1.103
Rather than have a separate helper task, the highest priority of the
synchronising tasks (TaskH below) can serve as helper while it is waiting
at the barrier. If SemA is a counting semaphore it does not matter if
other tasks arrive first – the semaphore signals will be remembered.
The two semaphores must be created somewhere before the barrier code
executes.
#define NSYNCH 3 /* number of tasks to synchronise */

TaskH()   /* highest priority task to synchronise */
{
    ……
    /* barrier point A */
    for (count = 0; count < NSYNCH-1; count++) {
        [ wait on SemA ]
    }
    for (count = 0; count < NSYNCH-1; count++) {
        [ signal to SemB ]
    }
    /* barrier point B */
    ……
}

TaskX()   /* all other tasks to synchronise */
{
    /* barrier point A */
    [ Signal to SemA ]
    [ Wait on SemB ]
    /* barrier point B */
}
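A minimal FreeRTOS rendering of the same barrier, assuming SemA and
SemB are counting semaphores created before the tasks run; the
function names are illustrative.

#include "FreeRTOS.h"
#include "semphr.h"

#define NSYNCH 3

static SemaphoreHandle_t xSemA;  /* xSemaphoreCreateCounting(NSYNCH-1, 0) */
static SemaphoreHandle_t xSemB;  /* xSemaphoreCreateCounting(NSYNCH-1, 0) */

void vBarrierHighest(void)       /* called by TaskH at barrier point A */
{
    int count;
    for (count = 0; count < NSYNCH - 1; count++) {
        xSemaphoreTake(xSemA, portMAX_DELAY);  /* count the other tasks in */
    }
    for (count = 0; count < NSYNCH - 1; count++) {
        xSemaphoreGive(xSemB);                 /* release them all */
    }
    /* barrier point B */
}

void vBarrierOther(void)         /* called by every TaskX at point A */
{
    xSemaphoreGive(xSemA);
    xSemaphoreTake(xSemB, portMAX_DELAY);
    /* barrier point B */
}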
1.104
Alternative solutions
[Diagram: Task1 blocked at barrier point B on binary semaphore Sem1
(init 0); Task2 blocked at B on Sem2 (init 0)]
1.106
Event Flag Registers
1.108
Cf Event Flags & Semaphore
1.109
Solution 2: Mutex Variable & Event Flag
1.110
Solution 3: Multiple event flags
[Diagram: each task n sets flag n in event flag register FlagsB, then
waits on all flags 7..0 set]

/* First attempt */
Task( int n )
{
    /* barrier point A */
    EventFlagChange(FlagsB, (1<<n), 0xFF);  /* set this task's flag */
    EventFlagWaitAll(FlagsB, 0xFF, 0xFF);   /* wait on all flags 7..0 set */
    /* barrier point B */
    /* still not easy to ensure that all tasks are waiting
       before flags are reset */
}

/* Reusable version: invert the expected flag value on each pass, so
   the flags never need to be reset */
Task( int n )
{
    char flagValue = 0x00; /* either 0x00 or 0xFF */
    for (;;) {
        /* barrier point A */
        flagValue = flagValue ^ 0xFF;               /* invert value of flag */
        EventFlagChange(FlagsB, (1<<n), flagValue); /* mark this task at A */
        EventFlagWaitAll(FlagsB, 0xFF, flagValue);  /* wait till all tasks at A */
        /* barrier point B */
    }
}
/* Assume 8 tasks created with n = 7..0 */
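Recent FreeRTOS versions (V8.0 onwards) provide event groups, whose
xEventGroupSync() call implements exactly this kind of barrier and
clears the bits atomically before releasing the tasks, side-stepping
the reset problem above. A minimal sketch; the task function name and
parameter-passing scheme are illustrative.

#include <stdint.h>
#include "FreeRTOS.h"
#include "event_groups.h"

#define ALL_SYNC_BITS 0xFF              /* one bit per task, n = 7..0 */

static EventGroupHandle_t xBarrier;     /* xEventGroupCreate() at startup */

void vSyncTask(void *pvParameters)
{
    /* task index n = 7..0 passed as the task parameter */
    const EventBits_t uxMyBit =
        (EventBits_t) 1u << (unsigned)(uintptr_t) pvParameters;
    for (;;) {
        /* barrier point A: set our bit, block until all 8 bits are set;
           the bits are cleared atomically as the tasks are released */
        xEventGroupSync(xBarrier, uxMyBit, ALL_SYNC_BITS, portMAX_DELAY);
        /* barrier point B */
    }
}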
1.112
Lecture 6 Summary
1.113
Conclusion:
A semaphore is the better option when there are multiple instances of
a resource available. For a single shared resource, a mutex is the
better choice.
Lecture 7: Scheduling Theory
1.114
Scheduling
1.115
When to schedule?
All RTOS must schedule when the current running task calls a
blocking API function.
Otherwise the system would halt!
Other than this, we will examine three commonly used choices for
when to schedule:
Non-preemptive (coroutine) scheduling
All scheduling is explicitly allowed by the running task
At other times preemption is not allowed
Preemptive scheduling
The RTOS scheduler is called whenever something happens which may
change the scheduling decision:
Task priority changing
Task changing state
Preemptive scheduling with time-slicing
The scheduler is also called on every system clock tick, even when no
task has changed state
1.116
Co-routine scheduling
1.117
Co-routine scheduling (cont'd)
The advantages on the previous slide mean that co-routine based systems
are very compact and efficient.
There are however big disadvantages which mean that most RTOS (and
nearly all large RTOS) use preemptive scheduling.
Disadvantages
All application code must be written correctly – TaskYield() must be called
within any loop that may last a long time.
A single "rogue" task will freeze the entire system
It is possible (but complex) to require compilers automatically to insert TaskYield()
within every loop.
Task-level response time is not easily guaranteed, since it depends on
maximum length of time between TaskYield() calls.
Calls to TaskYield() in inner loops can slow down execution
Note that co-routines and tasks can co-exist within one system – so
getting the advantages of both at the cost of some complexity
FreeRTOS allows mixed co-routines and tasks
Co-routines run within a single task
1.118
Preemptive scheduling
1.119
Time-slice (round-robin) Scheduling
This is a small addition to preemptive scheduling. Sometimes it is
useful to have tasks which share execution time.
This can be implemented in a time-slicing system by giving tasks the
same priority. The scheduler will allocate each task a time-slice
before switching to the next task.
Surprisingly, this feature is not usually what is needed in a
real-time system. Given two tasks of equal priority (same deadline):
    All that matters is: do they both meet the deadline?
    This is no more likely if they time-slice.
    In fact it is slightly less likely, since the switching consumes
    CPU time.
    If one task finishes first it will release system resources which
    may help other tasks.

[Diagram: with priority scheduling Task1 runs to completion, then
Task2; with time-slice scheduling the two tasks alternate and both
finish later. Time-slice scheduling is fairer but overall worse for
finish times.]
1.120
Which task to schedule?
1.121
Deadlines
1.122
Job model for tasks in an RTOS
[Diagram: task i runs for a fixed CPU execution time Ci between events
spaced a fixed time Ti apart; the deadline for each run is the next
event]
1.123
Earliest Deadline First (EDF) Scheduling
Fixed Priority scheduling is not optimal for meeting deadlines.
If deadlines are known in advance then in principle EDF scheduling
will meet deadlines if any scheduling strategy can do this.
At any time schedule the READY task whose deadline will happen first
Not difficult to prove that this is optimal (if tasks block once per deadline)
In practice not often used
Very time-consuming to implement as number of tasks increases
Difficult to get information on when future deadlines will happen
[Diagram: Task 1 and Task 2 timelines showing ready times and
deadlines; at each point the READY task with the earlier deadline
runs]
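To make the cost concrete, a small illustrative sketch (not FreeRTOS
code; types and names are assumptions) of the EDF decision over a
linked ready list. Every scheduling decision is O(N) in the number of
READY tasks, one reason EDF is rarely used in practice.

typedef struct Task {
    unsigned long ulDeadline;   /* absolute time of this task's deadline */
    struct Task  *pxNext;       /* next READY task */
} Task_t;

/* return the READY task whose deadline comes first */
Task_t *pxEdfSelect(Task_t *pxReadyList)
{
    Task_t *pxBest = pxReadyList;
    for (Task_t *p = pxReadyList; p != NULL; p = p->pxNext) {
        if (p->ulDeadline < pxBest->ulDeadline) {
            pxBest = p;
        }
    }
    return pxBest;   /* NULL if no task is READY */
}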
1.124
Round-robin scheduling
Feature can be added to priority scheduling.
Allow tasks to have equal priority.
Run set of equal priority tasks in equal time-slices and strict rotation
When a task blocks before the end of its time-slice start the next one early
As pointed out earlier this strategy is not usually good for RTOS where
early completion is more important than fairness.
Round-robin scheduling is simple. It has the merit that READY tasks
are guaranteed to be given a time-slice within a given time.
[Diagram: Task1, Task2, Task3 of equal priority run in rotating
time-slices; a task that blocks before its slice ends hands over
early]
Why is it not good to switch round-robin to a new task every
time-slice?
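In FreeRTOS this choice is made in FreeRTOSConfig.h. The two
configuration constants below are real FreeRTOS options; together they
select preemptive scheduling with round-robin time-slicing among
equal-priority tasks.

/* FreeRTOSConfig.h (fragment) */
#define configUSE_PREEMPTION    1  /* preemptive scheduling */
#define configUSE_TIME_SLICING  1  /* rotate equal-priority tasks each tick */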
1.125
Rate Monotonic Analysis
Suppose all N tasks in an RTOS follow the job model (slide 1.123)
Task i executes with CPU time Ci
Task i waits on event with period Ti
Task i has no other blocking (ignore synchronisation with other tasks)
Schedule task i with fixed priority Pi so that faster deadlines have
higher priority:
    Ti < Tj  =>  Pi > Pj
Then the system is guaranteed to meet all deadlines providing the
total CPU utilisation U is less than the RMA limit U(N) for N tasks:
    U = Σ Ui = Σ (Ci/Ti) < U(N) = N(2^(1/N) - 1)
Note that variable times are allowed as long as they obey the given
inequalities
1.126
Example
Consider 3 tasks as in the table
The priorities must be assigned as shown, inversely with period.
The total CPU utilisation U is 0.767
The RMA limit is
    U(3) = 3(2^(1/3) - 1) ≈ 0.780
Therefore the system meets the RMA limit and is guaranteed to
meet all deadlines.
[Table: the 3 tasks with columns Task, T, C, U = C/T, Priority;
numeric values are given on the slide]
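For checking other task sets, a small sketch in C (not from the
slides; the function name is illustrative):

#include <math.h>
#include <stdio.h>

/* returns 1 if the task set passes the RMA utilisation test */
int rma_ok(const double C[], const double T[], int N)
{
    double U = 0.0;
    for (int i = 0; i < N; i++) {
        U += C[i] / T[i];                          /* total utilisation */
    }
    double bound = N * (pow(2.0, 1.0 / N) - 1.0);  /* U(N) = N(2^(1/N)-1) */
    printf("U = %.3f, U(%d) = %.3f\n", U, N, bound);
    return U <= bound;
}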
1.127
RMA discussion
1.128
Extended RMA
We can include the effect of tasks blocking due to inter-task mutual-
exclusion etc.
Suppose the maximum time Task i can block before it completes is Bi
Replace Ci by (Ci+Bi) in the RMA limit calculation
    Σ ((Ci+Bi)/Ti) < U(N) = N(2^(1/N) - 1)
NB – blocking is not the same as waiting to run at a lower priority,
which is already included in the RMA limit. Therefore Bi is the sum of
all blocking on lower priority tasks, and blocking on higher priority
tasks only when they are themselves blocked.
This can be helpful where an upper bound can be put on blocking through
mutual exclusion
Where no such upper bound exists the system is unsafe and should not
be used!
1.129
Estimating blocking Bi
Suppose Task i blocks due to access to a shared resource governed by
semaphore S, and has no other blocking while it executes. Assume
During each computation Ci the task i claims S at most once.
The maximum time (critical section length) for which any task j claims S is Kj.
NB we do not consider priority inversion here – which can increase this maximum time to longer
than expected – see Lecture 9
No task can claim S more than once while a given task is waiting
Worst case, when waiting on semaphore S, Task i may have to block
while one task of lower priority claims the semaphore:
    Bi ≤ max( Kj : task j has lower priority than task i )
Tasks of higher priority claiming semaphore do not count as blocking, since they
would run anyway and having claimed semaphore cannot block
1.131
Lecture 7: Summary
1.132
Lectures 8 & 9: Liveness Problems in Real-
Time Systems
Never discourage anyone...who continually makes progress, no
matter how slow.
Plato
If debugging is the art of removing bugs, then programming
must be the art of inserting them.
Unknown
1.134
Deadlock
1.135
A simple example
Consider two tasks T1 & T2 which are part of a concurrent system and
share resources A and B. The resources could be shared memory,
hardware, etc.
Each resource is protected by a semaphore (Sa and Sb).
To perform part of the computation each task needs to use both A and B.
Can you see what is wrong with the code?
    Mutually exclusive use of A & B is guaranteed.
    Each task claims and then releases each semaphore once, as it should.
    The two tasks can deadlock!
    A1, A2, B2 (T2 blocks on Sa), B1 (T1 blocks on Sb)……

T1()
{
    A1: [ acquire Sa ]
    B1: [ acquire Sb ]
    C1: [ perform computation using A and B ]
    D1: [ release Sb ]
    E1: [ release Sa ]
}

T2()
{
    A2: [ acquire Sb ]
    B2: [ acquire Sa ]
    C2: [ perform computation using A and B ]
    D2: [ release Sa ]
    E2: [ release Sb ]
}
1.136
Discussion
This example reveals some interesting features of deadlock.
Attaining the deadlocked state depends on scheduling.
    In this case one of the tasks must acquire its first semaphore
    during the (short) time between when the second task acquires its
    first & second semaphores.
The deadlock, once achieved, is non-recoverable.
    We will consider later more complex systems which do allow
    recovery, providing the resource is preemptible.
The deadlock is caused by a cyclic dependence between tasks, each of
which wants a resource held by another task.
    The cycle can include N tasks & N resources (N ≥ 2).

[Resource graph: T1 holds A and wants B; T2 holds B and wants A,
illustrating the cyclic dependence]
1.137
The Classic problem
1.138
Conditions for Deadlock
Non-preemptability
    A resource, once allocated, can't be taken back until the task has
    finished with it.
    Don't confuse this with task preemption
Exclusion
Resource can't be held simultaneously by two tasks
Hold-and-Wait
Holder blocks awaiting the next resource
Circular waiting
tasks acquire resources in different orders
1.139
Strategies
1.140
Deadlock Detection & Recovery
1.141
Deadlock Avoidance
1.142
Deadlock Prevention
Best technique for small to medium-sized systems.
    Simple & robust
    No run-time cost
    Disadvantage: relies on conservative resource use, so may make
    systems slower than is possible.
Establish a global allocation order on resources such that whenever
resources A,B are used at the same time by a given task, either A<B
or B<A.
Constrain all tasks so that resources are acquired (SemaphoreTake()
etc) according to this order.
It does not matter in what order resources are released.

[Example: T1 uses A,B,D; T2 uses C,B; T3 uses E,F; T4 uses A,F]
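Applied to the T1/T2 example from slide 1.136, prevention just means
both tasks take the semaphores in the same global order. A minimal
FreeRTOS sketch, with illustrative handle and function names:

#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xSa, xSb;   /* protect resources A and B; A < B */

static void vUseAandB(void)          /* called by both T1 and T2 */
{
    xSemaphoreTake(xSa, portMAX_DELAY);   /* always A first... */
    xSemaphoreTake(xSb, portMAX_DELAY);   /* ...then B: no cycle possible */
    /* perform computation using A and B */
    xSemaphoreGive(xSb);                  /* release order does not matter */
    xSemaphoreGive(xSa);
}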
1.143
Using Timeouts
There are cases where some known error condition (other than
deadlock) can be detected using a timeout.
In this case the timeout is part of normal operation
More often long delays in obtaining a resource are not expected
A timeout indicates an error condition
Defensive programming
Use long timeouts – should never happen
Stop system with error on timeout.
Don't rely on this to detect deadlock conditions
Deadlock may be possible but never happen due to scheduling
Such a system is unsafe and may deadlock in the future as the result
of any small change
Make sure deadlock is prevented by design, as on previous slide.
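A minimal sketch of the defensive style described above. The error
handler is hypothetical; pdMS_TO_TICKS() is available in recent
FreeRTOS versions.

#include "FreeRTOS.h"
#include "semphr.h"

extern SemaphoreHandle_t xSa;        /* some shared-resource semaphore */
extern void vSystemErrorStop(void);  /* hypothetical: log and halt */

void vTakeDefensively(void)
{
    /* a long timeout that should never expire in a correct system */
    if (xSemaphoreTake(xSa, pdMS_TO_TICKS(5000)) != pdTRUE) {
        vSystemErrorStop();          /* timeout => error: stop, don't limp on */
    }
}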
1.144
Starvation
Symptoms
One (or more) tasks wait indefinitely on a shared resource held
by other, normally running, tasks.
Cause
Two or more other tasks are using the resource in turn, without
break, denying the starved task usage.
Total resource utilisation (analogous to CPU utilisation) is
100%
If higher priority tasks have 100% CPU utilisation so preventing
execution of lower priority task this is special case of starvation.
Starvation depends in general on details of scheduling, so it is
trickier to diagnose than deadlock.
In starvation, a task is prevented from execution by
other tasks, which themselves are executing normally
1.145
Starving Philosophers
1.147
Livelock - a system designer's nightmare
1.148
Pseudo-Livelock
For example, Dining Philosophers are all doing "pickup left fork;
busy/wait loop until right fork is free" and pick up left forks at
same time.
This is really a hidden form of the classic deadlock - the busy/wait
polling loops make it seem that something is happening, when really
the tasks are all waiting on resources.
In this case, as in deadlock, the pseudo-livelock can't be broken by
any scheduling order.
More interestingly, livelock may be dependent on scheduling, so
that even after it occurs it could be broken by a different execution
order.
See next slide
1.149
Real Livelock
1.150
Priority Inversion
A high priority task DiskDriver() shares a resource with a low
priority task KbdManager() using a semaphore S.
    Assume no other READY tasks in the system.
    The resource is locked for a short time T by either task.
    When using S, DiskDriver() must wait worst case for up to T while
    KbdManager() finishes using S (priority inversion).
Now suppose there is another task in the system, Task2(), which has
priority just greater than KbdManager(). This can preempt KbdManager()
while it holds S.
    Effectively DiskDriver() is reduced to the priority of
    KbdManager(), because Task2() runs in preference to it.
    The period of priority inversion is now determined by Task2() &
    is effectively unbounded.

[Diagram: DiskDriver() blocks waiting for S; KbdManager() holds S but
is preempted by Task2(), so the inversion lasts as long as Task2()
runs]
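FreeRTOS addresses this case directly: mutexes created with
xSemaphoreCreateMutex() implement the priority inheritance described
on the next slide. A minimal sketch; the function names are
illustrative.

#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xS;

void vInitMutex(void)
{
    xS = xSemaphoreCreateMutex();    /* mutex => priority inheritance */
}

void vKbdManagerCriticalSection(void)    /* low priority task */
{
    xSemaphoreTake(xS, portMAX_DELAY);
    /* while DiskDriver() waits on xS, this task inherits DiskDriver()'s
       priority, so Task2() can no longer preempt it here */
    xSemaphoreGive(xS);              /* priority drops back on release */
}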
1.151
Priority Inheritance Protocol (PIP)
The solutions to this problem all use dynamic priorities. The idea is that
KbdManager() should have its priority temporarily increased.
Priority Inheritance Protocol (PIP)
Any task T which claims the semaphore S has priority dynamically increased to
that of the highest priority task waiting on S
We say it inherits priority of this task.
Priority is increased to ensure this whenever a higher priority task waits on S
When T releases S it will drop back to its old priority
Priority inheritance is transitive.
In PIP, a task can be blocked by a lower priority task (LPT) in two ways:
    Direct blocking – when the LPT holds a resource needed by the task.
    Push-through blocking – when the task is prevented from executing
    because an LPT has inherited a priority higher than the task's.
PIP limits the maximum blocking to at most one critical section for
each semaphore, and also at most one critical section for each lower
priority task.
Schedulability (RMA etc) is determined by total blocking
1.152
Ceiling Priority Protocol (CPP)
1.153
Priority Ceiling Protocol (PCP)
1.154
PCP example
1.155
PCP Analysis
1.156
Blocking under PCP
1.157
What Really happened to Pathfinder on
Mars?
1.158
Lectures 8 & 9: Summary
1.159