
real-time systems for embedded microcontrollers

part I

Paolo Gai Evidence Srl http://www.evidence.eu.com

embedded systems typical features

Evidence Srl - info@evidence.eu.com 2005


software used in automotive systems


the software in powertrain systems covers
  boot and microcontroller related features
  a real-time operating system, which
    provides abstractions (for example: tasks, semaphores, ...)
    defines an interaction model between hardware and application
    separates behavior from infrastructure
    simplifies debugging

typical microcontroller features


let's try to highlight a typical scenario that applies to embedded platforms
embedded microcontroller
  depending on the project, the microcontroller will be 8, 16, or 32 bit
  typically comes with a rich set of interfaces:
    timers (counters / CAPCOM / watchdog / PWM)
    A/D and D/A converters
    communication interfaces (I2C, RS232, CAN, infrared, ...)
    ~50 interrupts (the original PC interrupt controller had only 15!)

I/O Libraries
complete the OS with support for the platform HW; about 10 times bigger than a minimal OS

application
implements only the behavior, not the infrastructure (libraries); independent from the underlying hardware

memory
SRAM / FLASH / ...

other custom HW / power circuits

the operating system is a key element in the architecture of complex embedded systems

Tricore
TriCore CPU with 4-stage pipeline
32-bit Peripheral Control Processor (PCP2)
2 MByte embedded program flash with ECC, 128 KByte data flash for scalable 16 KByte EEPROM emulation, 192 KByte on-chip SRAM, 16 KByte instruction cache
16-channel DMA controller
sophisticated interrupt system with 2 x 255 hardware priority arbitration levels serviced by CPU and PCP2
two general purpose timer array modules plus separate LTC array (GPTA4)
two asynchronous/synchronous serial channels (ASC)
two high speed synchronous serial channels (SSC)
two high-speed Micro Link Interfaces for serial inter-processor communication (MLI)
MultiCAN module with four CAN nodes incl. TTCAN functionality
4-channel fast analog-to-digital converter unit (FADC)
two 16-channel analog-to-digital converter units (ADC) with 8/10/12-bit resolution
44 analog input lines for ADC and FADC
123 digital general purpose I/O lines, 4 input lines

RAM vs ROM usage


consider a mass production market: a few million boards sold
  the development cost impacts for around 10%
  techniques for optimizing silicon space on chip pay off:
  you can spend a few person-months to reduce the footprint of the application

memory in a typical SoC


512 Kb Flash, 16 Kb RAM

sample SoC (speech processing chip for security applications):
  68HC11 micro, 12 Kb ROM, 512 bytes RAM, in approximately the same die area (24x cost!)

[figure: sample die of a speech-processing chip]

typical footprints

[figure: code size comparison, on a 1 Kb to 1000 Kb scale, of OSEK/VDX, POSIX PSE54 (Linux, FreeBSD), POSIX PSE51/52, ITRON, ThreadX, ERIKA, tinyOS, real-time Linux, SHaRK, eCos, and VxWorks]

part II

scheduling algorithms for small embedded systems


sharing the stack


the goal of our design is to produce a system that saves as much RAM as possible
RAM is used for
  storing application data
  storing thread stacks

sharing the stack (2)


in general, the stack can be shared every time we can guarantee that two tasks will not be interleaved
[figure: Gantt charts — T1 and T2 with interleaved execution; T2 and T3 with non-interleaved execution]

a good idea would be to reduce stack usage as much as possible by sharing the stack space among different threads. now the question is:

stack sharing under fixed priority scheduling


  tasks have the same priority
  tasks do NOT block (no shared resources)

when can the stack be shared among different tasks?



an example
suppose we have a system
  that schedules tasks using fixed priorities
  where tasks do not block

using resources...
the model where the different tasks do not interact is not realistic
we would like to let the different tasks
  share some resources
  while still maintaining some timing properties (e.g., meeting deadlines)
  and, if possible, minimizing the stack space (RAM) needed

suppose we have 3 different scheduling priorities, and that


  priority 1 (lowest) has three tasks with stack usage 7, 8, 15
  priority 2 (medium) has two tasks with stack usage 10 and 3
  priority 3 (highest) has a task with stack usage 1

the first problem that must be addressed is the Priority Inversion problem

the total stack usage will be

  max(7,8,15) + max(10,3) + max(1) = 26

whereas the sum of all the stacks is 44
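the per-level computation above can be sketched in a few lines (Python used as executable pseudocode; the helper names are invented for illustration):

```python
# Worst-case stack usage when tasks that share a priority level also
# share one stack area: within a level only one task occupies the stack
# at a time (same priority, no blocking), while different levels can
# preempt each other, so the per-level maxima add up.

def shared_stack_bound(levels):
    """levels: one list of per-task stack sizes per priority level."""
    return sum(max(stacks) for stacks in levels)

def stack_without_sharing(levels):
    return sum(sum(stacks) for stacks in levels)

levels = [[7, 8, 15], [10, 3], [1]]   # the three levels of the example
print(shared_stack_bound(levels))     # 26
print(stack_without_sharing(levels))  # 44
```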


priority inversion
suppose we have 2 tasks that share a resource: the high priority task can be delayed because of some low priority task
[figure: Gantt chart; legend: normal execution vs. critical section]

priority inheritance
first solution (Priority Inheritance / Priority Ceiling): the low priority task inherits the priority of T1
note that the executions of T1 and T3 are interleaved!

[figures: Gantt charts of T1, T2, T3 — priority inversion causing a deadline miss, and priority inheritance showing push-through blocking; legend: normal execution vs. critical section]


can we share the stack?

sharing stack space means that two task instances can use the same stack memory area at different time instants
in normal preemptive fixed priority schedulers, tasks cannot share stack space
  because of blocking primitives
  recalling the PI example shown before, T1 and T3 cannot use the same stack space at the same time

yes!
the stack can be shared even when mutual exclusion on shared resources has to be guaranteed
the idea is that a task can start only when all the resources it needs are free
this idea leads to two protocols
  Stack Resource Policy (SRP, EDF-based)
  Immediate Priority Ceiling (fixed-priority-based)

stack resource policy

second solution (Stack Resource Policy, SRP): a task is allowed to start executing only when there are enough free resources
T1 and T3 are NOT interleaved!
[figure: Gantt chart; legend: normal execution vs. critical section]

stack resource policy (2)

proposed by [Baker, 91]
according to the SRP protocol, every hard (periodic and sporadic) task is assigned
  a priority pi and a static preemption level πi

[figure: Gantt chart of T1, T2, T3 under SRP — the execution of the newly arrived task is delayed until its resources are free]

such that the following essential property holds: task τi is not allowed to preempt task τj unless πi > πj
under EDF and Rate Monotonic, the previous property is verified if each periodic task τi is assigned the preemption level

  πi = 1/Ti

where Ti is the task period


stack resource policy (3)

in addition, every resource rk is assigned a static ceiling defined as

  ceil(rk) = max{ πi : τi needs rk }

in the case of multi-unit resources, the ceiling of each resource is dynamic, as it depends on the number of units actually free

stack resource policy (4)

then, the SRP scheduling rule states that

a job is not allowed to start executing until its priority is the highest among the active jobs, and its preemption level is greater than the system ceiling

the dynamic system ceiling is defined as

  Πs = max{ ceil(rk) : rk is currently busy }
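a minimal executable sketch of these definitions, on a hypothetical task set (the periods and resource needs are invented for illustration):

```python
# SRP in a nutshell: preemption levels pi = 1/Ti, static ceilings
# ceil(rk) = max{ pi : task i needs rk }, and a dynamic system ceiling
# equal to the max ceiling among the currently busy resources.

periods = {"t1": 10, "t2": 20, "t3": 40}              # hypothetical tasks
needs = {"t1": {"r1"}, "t2": {"r1", "r2"}, "t3": {"r2"}}

plevel = {t: 1.0 / T for t, T in periods.items()}
resources = {r for rs in needs.values() for r in rs}
ceil_of = {r: max(plevel[t] for t, rs in needs.items() if r in rs)
           for r in resources}

def system_ceiling(busy):
    return max((ceil_of[r] for r in busy), default=0.0)

def may_start(task, busy):
    # SRP rule (the "highest priority among active jobs" part is
    # omitted here): a job may start only if its preemption level is
    # strictly greater than the current system ceiling.
    return plevel[task] > system_ceiling(busy)

print(may_start("t1", {"r2"}))  # True: 1/10 > ceil(r2) = 1/20
print(may_start("t2", {"r2"}))  # False: 1/20 is not > 1/20
```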


stack resource policy (5)

once a thread is started,
  it will never block until completion
  it can only be preempted by higher priority threads

stack resource policy (6)

tasks can share a single user-level stack
  because threads never block
SRP prevents deadlocks

the maximum blocking time Bi of each task is bounded

  Bi = max length of the critical sections of lower priority tasks

a set of periodic tasks can be scheduled under EDF+SRP if

  ∀i, 1 ≤ i ≤ n:   Σ(k=1..i) Ck/Tk + Bi/Ti ≤ 1

[figure: stack usage of T1, T2, T3 with SRP vs. without SRP]
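the test can be implemented directly; a sketch (the task parameters in the examples are invented):

```python
# EDF+SRP schedulability sketch: with tasks ordered by increasing
# period, check for every i that sum_{k<=i} Ck/Tk + Bi/Ti <= 1.

def edf_srp_schedulable(C, T, B):
    order = sorted(range(len(C)), key=lambda i: T[i])
    util = 0.0
    for i in order:
        util += C[i] / T[i]              # cumulative utilization up to task i
        if util + B[i] / T[i] > 1.0:     # add the blocking term for task i
            return False
    return True

print(edf_srp_schedulable([2, 4, 6], [10, 20, 40], [3, 3, 0]))   # True
print(edf_srp_schedulable([5, 8, 10], [10, 20, 40], [4, 2, 0]))  # False
```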


immediate priority ceiling - HLP

corresponds to using SRP with a fixed priority scheduler (priority = preemption level)
things can be handled in an efficient way, because in that case
  locking a resource raises the task priority to the resource ceiling
  the system ceiling can track both busy resource ceilings and priorities

implementation tips
how can two threads share the same stack space?
  Task x() {
      int local;
      initialization();
      for (;;) {
          do_instance();
          end_instance();
      }
  }

the traditional thread model
  allows a task to block
  forces a task structure

in general, all tasks can preempt each other


also, tasks can block on semaphores

a stack is needed for each task that can be preempted the overall requirement for stack space is the sum of all task requirements, plus interrupt frames

methods for stack sharing


there are two methods that allow stack sharing
kernel-supported stack sharing
  the kernel directly supports stack sharing by providing a Monostack Hardware Abstraction Layer (HAL)

kernel-supported stack sharing


the kernel really manages only a single stack, shared by ALL the tasks
  interrupts use the same stack, too

the kernel must ensure that tasks never block
  blocking would produce interleaving between tasks, which is not supported since there is only one stack

user-supported stack sharing


the kernel supports a traditional Multistack HAL; the user can implement stack sharing by grouping tasks at application level

[figure: tasks T1, T2, T3 sharing a single user stack]


one shot model


to share the stack, the one-shot task model is needed

user-supported stack sharing


the kernel supports only the concept of multiple threads, each one using a different stack and a different CPU context
e.g., the POSIX thread library

traditional model:

  Task x() {
      int local;
      initialization();
      for (;;) {
          do_instance();
          end_instance();
      }
  }

one-shot model:

  int local;
  Task x() {
      do_instance();
  }
  System_initialization() {
      initialization();
      ...
  }

the user has to map the application tasks onto the kernel threads

[figure: application tasks a-f mapped onto kernel threads 1-3]

user-supported stack sharing (2)


the idea is that each task is mapped inside the thread body

multistack HAL
  supports multiple stacks
  each stack is used by one (or more than one) task
  the kernel must ensure that interleaving can only appear between (top-level) tasks on different stacks
POSIX threads use a multistack paradigm (they can block)

  Task x(void *arg) {
      int local;
      initialization();
      for (;;) {
          while (Ihavetodosomework()) {
              if (Task1_active()) do_Task1();
              if (Task2_active()) do_Task2();
              ...
          }
          end_instance();
      }
  }

[figure: tasks T1, T2, T3 distributed over two stacks — the tasks grouped on the same stack cannot interleave]

is there a limit?
we are able to let tasks share the same stack space
  but only among tasks of the same priority

preemption thresholds
(technique first introduced by Express Logic inside the ThreadX kernel; further studied by [Saksena, Wang])

can we do better? the limit for stack reduction is to schedule all the tasks using a non-preemptive algorithm
  only one stack is needed
  but not all systems can afford that

derived from Fixed priority scheduling two priorities


  ready priority, used for queuing ready tasks
  dispatch priority, used for the preemption test
  ready priority <= dispatch priority

the dispatch priority is also called threshold

the idea is to limit preemptability without impacting the schedulability of the system, using preemption thresholds
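the preemption test itself is simple; a sketch with invented priority values (higher number = higher priority):

```python
# Preemption threshold test: a newly ready task preempts the running
# task only if its ready priority is above the running task's dispatch
# priority (its threshold).

def preempts(ready_prio_of_new, dispatch_prio_of_running):
    return ready_prio_of_new > dispatch_prio_of_running

# tasks as (ready priority, dispatch priority) — hypothetical values
A = {"ready": 1, "dispatch": 3}
B = {"ready": 2, "dispatch": 3}
C = {"ready": 4, "dispatch": 4}

print(preempts(B["ready"], A["dispatch"]))  # False: 2 <= 3
print(preempts(A["ready"], B["dispatch"]))  # False: A and B are mutually non-preemptive
print(preempts(C["ready"], A["dispatch"]))  # True: 4 > 3
```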


disabling preemption
preemption thresholds are used to disable preemption between tasks

another interpretation of preemption thresholds


consider a system that uses fixed priorities with immediate priority ceiling
consider the task set, and let each pair of mutually non-preemptive tasks share a pseudo-resource
the pseudo-resource is automatically
  locked when the task starts
  unlocked when the task ends

[figure: ready and dispatch priorities of tasks A and B on a priority scale — since neither task's ready priority is above the other's dispatch priority, these tasks cannot preempt each other!]

a task's dispatch priority = max(ceiling of the pseudo-resources used by the task)
preemption thresholds = traditional fixed priorities when ready priority = dispatch priority
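this interpretation can be sketched directly; the sketch below uses the ready priorities and non-preemptive pairs of the three-task example discussed later in the deck:

```python
# Dispatch priorities derived from pseudo-resources under immediate
# priority ceiling: each pair of mutually non-preemptive tasks shares a
# pseudo-resource whose ceiling is the highest ready priority among its
# users; a task's dispatch priority is the maximum among its own ready
# priority and the ceilings of the pseudo-resources it uses.

ready = {"T1": 3, "T2": 2, "T3": 1}
non_preemptive_pairs = [("T1", "T2"), ("T2", "T3")]

dispatch = dict(ready)
for a, b in non_preemptive_pairs:
    ceiling = max(ready[a], ready[b])   # ceiling of the pair's pseudo-resource
    dispatch[a] = max(dispatch[a], ceiling)
    dispatch[b] = max(dispatch[b], ceiling)

print(dispatch)  # {'T1': 3, 'T2': 3, 'T3': 2}
```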

preemption thresholds and EDF


preemption thresholds under EDF can be thought of as a straightforward extension of EDF+SRP
each task
  is scheduled using EDF+SRP
  is assigned some SRP pseudo-resources that are automatically locked/unlocked

why disabling preemption?


preemption is usually used to enhance response times
the objective is to disable preemption while maintaining the timing constraints of the system

why?
  reducing preemption lets more tasks share the same stack
  it is important not to reduce preemption too much
    a non-preemptive system is easily non-schedulable

the preemption level of each EDF task = max(ceiling of the pseudo-resources used by the task)
EDF + preemption thresholds has the same behavior as traditional EDF+SRP when the dispatch priority is 0 (no pseudo-resources are used)

enhancing schedulability

preemption thresholds have the nice property of enhancing schedulability
example [Saksena, Wang, 99]: three periodic tasks with relative deadlines

  Task   Ci   Ti    Di    ready priority   response time (preemptive / non-preemptive)
  T1     20   70    50    3                20 / 55
  T2     20   80    80    2                40 / 75
  T3     35   200   100   1                115 / 75
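the preemptive column can be reproduced with the standard fixed-priority response-time iteration; a sketch:

```python
# Standard fixed-priority response-time analysis (fully preemptive):
# Ri = Ci + sum over higher-priority tasks j of ceil(Ri/Tj)*Cj,
# iterated to a fixed point. Tasks are indexed highest priority first.

from math import ceil

def response_time(i, C, T):
    R = C[i]
    while True:
        R_new = C[i] + sum(ceil(R / T[j]) * C[j] for j in range(i))
        if R_new == R:          # fixed point reached
            return R
        R = R_new

C = [20, 20, 35]   # T1, T2, T3 (T1 has the highest ready priority)
T = [70, 80, 200]

print([response_time(i, C, T) for i in range(3)])  # [20, 40, 115]
```

note that T3's preemptive response time (115) exceeds its deadline (100), which is why plain fixed priorities fail here.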

minimizing stack space


preemption thresholds are used to reduce stack space
the idea is to selectively reduce the preemption between tasks, to let tasks share their stack
the approach is done in three steps

the system is NOT schedulable with fixed priorities (T3: 115 > 100) nor with non-preemptive scheduling (T1: 55 > 50)


  Task   ready priority   dispatch priority   response time
  T1     3                3                   40
  T2     2                3                   75
  T3     1                2                   95

  1) search for a schedulable solution
  2) threshold computation
  3) grouping and stack minimization

but it is schedulable using preemption thresholds, making (T1,T2) and (T2,T3) mutually non-preemptive tasks


search for a schedulable solution


the starting point is a set of tasks with requirements that come from the application domain
  periodicity, relative deadline, jitter

threshold computation
the schedulable solution found at the previous step consists of a ready and a dispatch priority value for each task
observation: raising a dispatch priority
  helps stack sharing (tasks more easily become mutually non-preemptive)
  makes feasibility harder (the system tends towards non-preemptive behavior)

this step should produce a feasible priority assignment, composed of a ready and a dispatch priority for each task

fixed priorities
traditional methods
Rate Monotonic Deadline Monotonic

EDF
EDF + SRP assignment is typically a good choice

the objective of this phase is to remove the unnecessary preemptability introduced by the values of the scheduling attributes
algorithm proposed by [Saksena, Wang, 00]

others [Saksena, Wang, 99]


greedy algorithms simulated annealing

threshold computation (2)


main idea: raise the dispatch priority as much as we can, maintaining schedulability
  1. start from the highest priority task
  2. raise its dispatch priority until
       it is equal to the maximum priority, or
       the system is no longer schedulable

grouping and stack minimization

once the dispatch priority values have been maximized, we obtain a system that has just the needed (minimum) preemptiveness
then, depending on the kernel support we are using, we can be interested in:
  knowing the maximum stack usage of a given configuration (kernel-supported stack sharing)
  computing a set of non-preemption groups (user-supported stack sharing)

  3. consider the next task
  4. go to step 2
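the steps above can be sketched with a pluggable schedulability test (a real implementation would plug in preemption-threshold response-time analysis, e.g. the algorithm of [Saksena, Wang, 00]; the callback here is a stand-in):

```python
# Maximal-threshold sketch: for each task, from highest to lowest ready
# priority, raise its dispatch priority one step at a time while the
# system stays schedulable; undo the last raise that breaks feasibility.

def maximize_thresholds(ready, is_schedulable):
    max_prio = max(ready)
    dispatch = list(ready)          # thresholds start at the ready priorities
    for i in sorted(range(len(ready)), key=lambda i: -ready[i]):
        while dispatch[i] < max_prio:
            dispatch[i] += 1
            if not is_schedulable(ready, dispatch):
                dispatch[i] -= 1    # last raise broke feasibility: undo, stop
                break
    return dispatch

# with a test that always succeeds, every threshold reaches the top
print(maximize_thresholds([3, 2, 1], lambda r, d: True))     # [3, 3, 3]
# with a test that rejects any change, thresholds stay at the ready priorities
print(maximize_thresholds([3, 2, 1], lambda r, d: d == r))   # [3, 2, 1]
```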


computing the maximum stack usage


if the only interesting thing is the maximum stack usage of a given configuration, then there exists a polynomial algorithm that finds it
the algorithm is essentially a longest-path search on the (directed acyclic) preemption graph, with stack sizes as weights

computing the maximum stack usage (2)


  for each task ti:
      worst[ti] = stack[ti]
  for each task ti, from highest to lowest priority:
      for each task tj that can preempt ti:
          worst[ti] = max(worst[ti], stack[ti] + worst[tj])
  the_worst = max(worst[ti] over all ti)

[T. W. Carley, private e-mail]
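a direct implementation of the pseudocode above, on an invented three-task example:

```python
# Longest-path search on the preemption relation, with per-task stack
# sizes as weights: worst[i] is the deepest stack nesting that can end
# with task i at the bottom.

def max_stack(stack, preemptors):
    """stack[i]: stack size of task i (index 0 = highest priority);
    preemptors[i]: set of (higher-priority) tasks that can preempt i."""
    n = len(stack)
    worst = list(stack)
    for i in range(n):                      # highest to lowest priority,
        for j in preemptors[i]:             # so worst[j] is already final
            worst[i] = max(worst[i], stack[i] + worst[j])
    return max(worst)

stacks = [10, 20, 30]                       # hypothetical stack sizes
# full preemption: task 2 preemptible by 0 and 1, task 1 by 0
print(max_stack(stacks, [set(), {0}, {0, 1}]))   # 60 = 30 + 20 + 10
# tasks 0 and 1 mutually non-preemptive: they share stack space
print(max_stack(stacks, [set(), set(), {0, 1}])) # 50 = 30 + max(20, 10+...)
```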


grouping tasks

[figure: tasks placed on a priority scale from 1 to 8; the span from each task's ready priority to its dispatch priority shows which tasks are mutually non-preemptive]

an example

[figure: eight tasks with their ready and dispatch priorities and per-task stack sizes; the resulting worst-case shared stack usage (102) is far below the sum of the individual stacks]