
REAL TIME CONCEPTS

Real Time Definitions

• Real-time systems are those in which the timeliness of the outputs is as
important as their correctness.
• Real-time systems do not have to be fast systems.
Or
• Any system where a timely response by the computer to external stimuli is
vital is a real-time system.
• Timely means the real-time system runs tasks that have deadlines.
• Deadline – a time constraint
Example : Controlling an aircraft ( Hard real time)
Missile Guidance (Hard real time)
Air traffic control systems ( Hard real time)
Multimedia ( Soft real time )
REAL TIME CONCEPTS

• Example : Data Recording System

Receive Data → Store Data → Storage

Receive Data → Buffer → Store Data → Storage


REAL TIME CONCEPTS
When is a system Real Time ?

• All practical systems are real-time systems if they must meet deadlines.
• A word processor that responds within a certain time (e.g., 1 second) is
considered acceptable.

Soft Real Time


Systems whose performance degrades, but is not destroyed, when deadlines are
missed.
Example: Multimedia Streams

Hard Real Time


The system fails when deadlines are missed.
Example: Controlling an aircraft
REAL TIME CONCEPTS
Events

• Definition
Any occurrence that causes the program counter to change non-sequentially
is considered a change of flow-of-control, and thus an event.

Two cooperating tasks, receive and process (pseudocode; the pthread-style
calls are illustrative):

void *Receive(void *arg)
{
    recvfrom();           /* wait for incoming data */
    pthread_signal();     /* signal the process task */
}

void *Process(void *arg)
{
    pthread_wait();       /* wait for the receive task's signal */
    /* process data */
}
REAL TIME CONCEPTS
Determinism

•Definition
A system is said to be deterministic if, for each possible state and each set of
inputs, a unique set of outputs and next state of the system can be determined.

idle → takeoff → set course


REAL TIME CONCEPTS
Determinism

• For any physical system certain states exist under which the system is
considered to be out of control.
• The software controlling such a system must avoid these states.

Example:
In certain guidance systems for robots or aircraft, rapid rotation
through a 180° pitch angle can cause a physical loss of gyro control.
• Deterministic software can predict the next state of the system given the
current state and a set of inputs.
REAL TIME CONCEPTS
Time Loading

• Time loading, or the utilization factor, is a measure of the percentage of
time the computer spends doing useful processing.
REAL TIME CONCEPTS
Real Time Design Issues

• The selection of hardware and software, and the appropriate mix needed for a
cost-effective solution.
• Deciding between an existing commercial RTOS and designing a special OS.
• Selection of a programming language for system development.
• Maximizing fault tolerance and reliability through design and rigorous testing.
• Selection of test and development equipment.
REAL TIME CONCEPTS
Examples of Real Time
Systems
1. Aircraft Navigation System

• Receive x, y, z accelerometer pulses every 5 milliseconds.
• Receive roll, pitch, and yaw angles from special equipment every 40
milliseconds.
• Receive temperature every 1 second.
• Determine the velocity vector every 40 milliseconds.
• Output the true velocity every 1 second to the pilot's display.
REAL TIME CONCEPTS
Examples of Real Time
Systems
2. Nuclear Plant Monitoring System

• A security event trigger must be responded to within one second.
• An event trigger indicating that the nuclear core has reached an
over-temperature must be dealt with within 1 millisecond.
REAL TIME CONCEPTS
Examples of Real Time
Systems
3. Airline Reservation System

• Transaction turnaround time must be less than 15 seconds.
• At any time, several agents must be able to access the database and
book tickets.
Language Issues
Parameter Passing

Pass by Value

• Parameters that are passed by value are typically copied
onto the stack at run time, at considerable execution-time cost.
• Pascal, C, and Ada support this feature.
Language Issues
Parameter Passing

Pass by Reference

• Indirect instructions are needed to manipulate variables
that are passed by reference.
• Indirection takes more execution time per access.
• Pascal, C, and Ada support this feature.
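The two mechanisms can be contrasted in C, where pass by reference is
expressed with pointers (a minimal sketch; function names are illustrative):

```c
/* Pass by value: the argument is copied onto the stack;
   changes inside the function do not affect the caller. */
void inc_by_value(int x) {
    x = x + 1;            /* modifies only the local copy */
}

/* Pass by reference: only the address is copied; the function
   manipulates the caller's variable through indirection,
   which costs an extra memory dereference per access. */
void inc_by_reference(int *x) {
    *x = *x + 1;
}
```

Whether copying the value or paying for the indirection is cheaper depends on
the parameter's size and how often it is accessed, as noted above.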
Language Issues
Parameter Passing

Pass by Reference vs Value

• The relative advantages and disadvantages depend upon


compiler, coding style, target hardware, type of application
code, and size of parameters.
Language Issues
Recursion

• A recursive procedure is self-referential.
• Procedure calls require allocation of storage for parameters,
local variables, return addresses, etc.
• Allocation and deallocation consume time.
• Run-time memory requirements are hard to determine in advance.
• Re-entrant code must be protected by semaphores.
Language Issues
Dynamic Allocation

• Dynamic allocation is important for the construction and
maintenance of the stacks needed by real-time operating
systems.
• Although it is time consuming, it is necessary.
Language Issues
Typing

• Each variable and constant must be of a specific type, e.g., int, real.
• Strongly typed languages prohibit mixing of different types
in operations and assignments.
Advantage:
- Avoids the accidental truncation of data.
Language Issues
Exception Handling

• Certain languages provide facilities for dealing with errors
or other conditions that arise during program execution.
• At run time, when an exception occurs, certain code (the
exception handler) is executed to handle it.
Language Issues
Abstract Data Typing

• Abstract representation of entities.
• Makes program design easier and clearer.
• Examples: structures and records, enumerated data types.
• Does not improve real-time performance.
Language Issues
Object Oriented Languages

Features: Abstraction
Inheritance
Polymorphism

• Polymorphism and inheritance introduce run-time delays.
• A program written in C++ can be about 43% slower than the equivalent C.
• Delays also result from automatic storage management.
• Benefits: clearer design and better maintainability.
Chapter 3
Real Time Kernel

Polled Loop Systems

• Polled loop systems are the simplest real-time kernel.
• Polled loops allow fast response to a single device.
• A single repetitive test instruction is used to test a flag.
• If the event has not occurred, polling continues.
• No intertask communication is required because only a single task exists.
• Polled loop schemes work well when a single processor is dedicated to
handling I/O.
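A polled loop can be sketched in C as follows. The flag would normally be set
by hardware (e.g., a memory-mapped register); here the names and the
instrumentation counter are illustrative, and only one iteration of the
infinite loop is shown so the behavior can be exercised:

```c
#include <stdbool.h>

/* Flag normally set by the I/O device; 'volatile' prevents the
   compiler from caching the repeated test. */
volatile bool packet_here = false;

int events_handled = 0;           /* instrumentation for illustration */

void process_event(void) {        /* hypothetical event handler */
    events_handled++;
}

/* One iteration of the polled loop; the real kernel runs:
       for (;;) poll_once();                                  */
void poll_once(void) {
    if (packet_here) {            /* single repetitive test instruction */
        process_event();
        packet_here = false;      /* reset the flag for the next event */
    }
}
```

If no event is pending, the iteration does nothing, which is exactly the
CPU-wasting behavior criticized later for infrequent events.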
Chapter 3
Real Time Kernel

Polled Loop with Interrupts

• A variation on the polled loop uses a fixed clock interrupt to wait a period of
time between when the flag is found TRUE and when it is reset to FALSE.
• Waiting avoids interpreting settling oscillations as events.
• A delay period can be realized with a programmable timer that issues an
interrupt after a countdown period.
Chapter 3
Real Time Kernel

Polled Loop with Interrupts

while TRUE do
begin
    if FLAG = TRUE then
    begin
        counter = 0;
        while counter < 3 do
            ;                  { counter is incremented by the timer interrupt }
        FLAG = FALSE;
        process event;
    end
end
Chapter 3
Real Time Kernel

Polled Loop with Interrupts

• Polled loop systems are excellent for high-speed data channels.
• The processor is dedicated to handling the data channel.

Disadvantage:
• Wastes CPU time when events are infrequent.
Chapter 3
Real Time Kernel

PHASE/STATE DRIVEN CODE

• The processing of the function is broken into discrete segments (phases).
• Uses nested if-then-else statements or a finite automaton.
• Example: a compiler – lexical analysis, syntax analysis, code generation, etc.

• Case 1: perform part 1;
• Case 2: perform part 2;
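A minimal C sketch of phase-driven code using a switch; the phase names and
the work done in each part are illustrative:

```c
/* Phase-driven code: processing is broken into discrete segments
   selected by the current phase (a small finite automaton). */
enum phase { PHASE1, PHASE2, PHASE3 };

/* Perform one segment of work and return the next phase. */
enum phase step(enum phase p) {
    switch (p) {
    case PHASE1: /* perform part 1, e.g. lexical analysis  */ return PHASE2;
    case PHASE2: /* perform part 2, e.g. syntax analysis   */ return PHASE3;
    case PHASE3: /* perform part 3, e.g. code generation   */ return PHASE1;
    }
    return PHASE1;
}
```

Because each call does only one bounded segment, control returns quickly to
the caller, which is what makes this structure usable by a coroutine
dispatcher.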
Chapter 3
Real Time Kernel

Co-routines
• These kernels are employed in conjunction with code driven by finite
state automata.
• In this scheme two or more processes are coded in the state-driven fashion.
• After each phase, a call is made to the central dispatcher.
• The dispatcher selects the next process to execute. That process runs until
its next phase is complete and the central dispatcher is called again.
• Processes communicate via global variables.
Disadvantage: communication via global variables.
Chapter 3
Real Time Kernel

INTERRUPT DRIVEN SYSTEMS

• Tasks are scheduled via hardware or software interrupts.
• Dispatching is performed by interrupt-handling routines.
• Interrupts may occur at a fixed rate (periodically) or aperiodically.
• Sporadic tasks – interrupts occur aperiodically.
• Fixed-rate systems – interrupts occur at fixed frequencies.
• When hardware scheduling is used, a clock or external device issues the signals.
Chapter 3
Real Time Kernel

CONTEXT SWITCHING

• Context switching is the process of saving and restoring sufficient information
for a real-time task so that it can be resumed after being interrupted.
• The context is saved to a stack data structure.

Context saving rule:
• Context switching is a major contributor to response times,
• and is therefore a factor we strive to minimize.
• Save the minimum information necessary to restore the task.
Chapter 3
Real Time Kernel

CONTEXT SWITCHING

Context to be saved:
• Contents of the general registers
• The program counter
• Coprocessor registers
• Memory page registers
• Special variables
• Memory-mapped I/O location mirror images
• Interrupts are disabled during the critical context-switching period.
Chapter 3
Real Time Kernel

Stack Model

• The stack model for context switching is used mostly in embedded systems.
• The context is saved by the interrupt controller.
Chapter 3
Real Time Kernel

Round-Robin Systems

• Several processes are executed sequentially to completion.
• Each task is assigned a time quantum (time slice).
• A fixed-rate clock initiates an interrupt at a rate corresponding to the
time slice.
• The running task's context is saved when its time quantum expires.
• The context of the next task is restored and execution resumes.
Chapter 3
Real Time Kernel
Preemptive Priority Systems

• A high-priority task is said to preempt a lower-priority task.
• Systems that use preemption schemes instead of round-robin or first-come,
first-served scheduling are called preemptive priority systems.
• The priority assigned to each interrupt is based on the urgency of the task
associated with that interrupt.
• Prioritized interrupts can be either fixed priority or dynamic priority.
• Fixed-priority systems are less flexible in that task priorities cannot be
changed.
• Dynamic-priority systems allow the priorities of tasks to change. This
feature is particularly important in certain types of threat-management systems.
Chapter 3
Real Time Kernel
Preemptive Priority Systems

• Preemptive priority schemes can lead to the hogging of resources by higher-
priority tasks, leaving a lack of available resources for lower-priority
tasks.
• The lower-priority tasks are then said to face a problem called starvation.
Chapter 3
Real Time Kernel
Rate Monotonic systems

• Priorities are assigned so that the higher the execution frequency, the
higher the priority.
• This scheme is common in embedded applications, particularly avionics
systems, and has been studied extensively.
• In rate-monotonic systems, priority inversion may occur.

Where priority inversion occurs
• A less critical process with a high frequency of execution is assigned a high
priority.
• A more critical task with a low frequency of execution is assigned a low
priority.
Chapter 3
Real Time Kernel
Rate Monotonic systems

Where priority inversion occurs
• A lower-priority routine holds a resource, protected by a semaphore, that a
higher-priority routine needs.

Process1:
    semlock();
    for I = 0 to 100
        a[I] = a[I] + 1*I;
    semunlock();

Process2:
    semlock();
    for I = 0 to 100
        a[I] = a[I] + 2*I;
    semunlock();
Chapter 3
Real Time Kernel
Hybrid systems

• Include interrupts that occur both at fixed rates and sporadically.
• Sporadic interrupts are used to handle critical errors that require immediate
attention, and thus have the highest priority.
• Another type of hybrid system found in commercial operating systems is a
combination of round-robin and preemptive systems.
• In these systems, tasks of higher priority can always preempt those of lower
priority.
• However, if tasks of the same priority are ready to run simultaneously, they run
in round-robin fashion.
Chapter 3
Real Time Kernel
6.5 FOREGROUND / BACKGROUND SYSTEMS

• These systems are the most common solution for embedded applications.
• They involve a set of real-time processes called the foreground and a collection
of non-interrupt-driven processes called the background.
• The foreground tasks run in round-robin, preemptive priority, or combination
fashion.
• The background task is fully preemptable by any foreground task and, in a sense,
represents the lowest-priority task in the system.
• All real-time systems are special cases of foreground/background systems.
Chapter 3
Real Time Kernel
6.5.1 BACKGROUND Processing

• Anything that is not time critical.
• The background process is the process with the lowest priority.
• The rate at which the background process runs is very low and depends on the
time-loading factor.
• If p is the time-loading factor of all foreground processes,
• and e is the execution time of the background process,
• then the background process execution period t is given by
t = e / (1 - p)
Examples: self-testing in the background,
display updates.
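A quick check of the formula with illustrative values: with foreground time
loading p = 0.9 and background execution time e = 10 ms, the background
process completes once every t = 10 / (1 - 0.9) = 100 ms.

```c
/* Background execution period: t = e / (1 - p),
   where p is the foreground time-loading factor and
   e is the background process's execution time. */
double background_period(double e, double p) {
    return e / (1.0 - p);
}
```

As p approaches 1 (a fully loaded foreground), the period grows without
bound, i.e., the background task effectively never completes.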
Chapter 3
Real Time Kernel
6.5.2 Initialization

• Disable interrupts
• Set up interrupt vectors and stacks
• Perform self-test
• Perform system initialization
• Enable interrupts
Chapter 3
Real Time Kernel
6.5.3 Real Time Operation

• Foreground/background systems represent a superset of all real-time solutions.
• They have good response times, since they rely on hardware to perform
scheduling.
Drawback:
• Interfaces to complicated devices and networks must be written by hand.
• This procedure can be tedious and error-prone.
Chapter 3
Real Time Kernel
6.6 FULL FEATURED REAL TIME OPERATING SYSTEMS

• Foreground/background systems can be extended into a real-time OS by
adding functions such as network interfaces, complicated device
drivers, and complex debugging tools.
• Such systems rely on round-robin preemptive scheduling.
• The OS itself runs as a high-priority task.
Chapter 3
Real Time Kernel
6.6.1 Task Control Block Model
• This is the most popular method for implementing commercial, full-featured,
real-time operating systems.
• It is useful in interactive online systems where tasks come and go.
• It is used in round-robin preemptive systems.
6.6.1.1 The Model
• Each task is associated with a context (program counter and registers), an ID,
a status flag, and a priority. These are stored in a structure called the task
control block (TCB), and the TCBs are stored in a list.
6.6.1.2 Task States
• A task can be in one of the following states:
Executing, Ready, Suspended, Dormant
Chapter 3
Real Time Kernel

6.6.1.2 Task States (transitions)

Executing – after creation
Ready – after its time slice expires
Suspended – while waiting for an event
Dormant – after deletion
Chapter 3
Real Time Kernel

6.6.1.3 Task Management


• The task manager is the highest-priority task in the system.
• Every hardware interrupt and every system-level call invokes the real-time OS.
• TCBs are kept in a linked list.
• Suspended tasks are kept in a linked list.
• A table of resources and a table of resource requests are maintained.
• When the OS is invoked, it checks the ready list to see whether another task is
eligible for execution.
• If so, the TCB of the currently executing task is moved to the end of the ready
list,
• and the eligible task's TCB is removed from the ready list and its task is
executed.
Chapter 3
Real Time Kernel

6.6.1.3 Task Management


• The OS also checks the status of resources in the suspended list.
• If a resource a task is suspended on becomes available, that task enters the
ready state.
Chapter 4
Inter task Communication

7.1 Buffering Data


• Several mechanisms can be employed to pass data between tasks in a
multitasking system.
• The simplest and fastest of these is the use of global variables.
• Global variables are contrary to good software engineering practice.
• One problem with global variables is that a higher-priority task can
preempt a lower-priority routine at an inopportune time, corrupting the
global data.
• A producer can place data in a buffer and a consumer can read data
from the buffer.
• The buffer can be a stack or some other data structure, including an unorganized
mass of variables.
Chapter 4
Inter task Communication
7.1.1 Time Relative Buffering

• Double buffering, or ping-pong buffering, can be used when time-relative
data need to be transferred between cycles of different rates.
• Examples:
- disk controllers
- graphical interfaces
- robot controls

Producer → Buffer 1 / Buffer 2 → Consumer
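The core of the ping-pong scheme is swapping the two buffer pointers at the
cycle boundary, so the producer always fills one buffer while the consumer
drains the other. A minimal C sketch (buffer size and names are illustrative):

```c
/* Time-relative (ping-pong) buffering. */
#define BUFSIZE 64

static double buffer1[BUFSIZE], buffer2[BUFSIZE];
static double *fill  = buffer1;   /* producer writes here */
static double *drain = buffer2;   /* consumer reads here  */

/* Called at the cycle boundary, when the producer has finished
   filling and the consumer has finished draining. */
void swap_buffers(void) {
    double *tmp = fill;
    fill = drain;
    drain = tmp;
}
```

In a real system the swap must be synchronized with both cycles, e.g., done
inside the clock interrupt that marks the boundary.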


Chapter 4
Inter task Communication
7.1.2 Ring Buffers

• A special data structure called a circular queue or ring buffer is used in the
same way as a first-in, first-out (FIFO) queue.
• Ring buffers are easy to manage.
• In the ring buffer, simultaneous input and output are achieved by keeping head
and tail pointers.
• Data are loaded at the tail and read from the head.
• An implementation of a ring buffer is given below:
Chapter 4
Inter task Communication
7.1.2 Ring Buffers

procedure read
    if head = tail then
        underflow
    else
        data = s[head]
        head = (head + 1) mod N
    end
end

procedure write
    if (tail + 1) mod N = head then
        overflow
    else
        s[tail] = data
        tail = (tail + 1) mod N
    end
end
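The pseudocode translates directly into C. This sketch (size N is
illustrative) sacrifices one slot to distinguish a full buffer from an empty
one, exactly as the (tail + 1) mod N = head test implies:

```c
#include <stdbool.h>

#define N 8                      /* capacity is N - 1 elements */

static int s[N];
static int head = 0, tail = 0;

/* Load data at the tail; returns false on overflow (buffer full). */
bool ring_write(int data) {
    if ((tail + 1) % N == head) return false;
    s[tail] = data;
    tail = (tail + 1) % N;
    return true;
}

/* Read data from the head; returns false on underflow (buffer empty). */
bool ring_read(int *data) {
    if (head == tail) return false;
    *data = s[head];
    head = (head + 1) % N;
    return true;
}
```

With a single producer and a single consumer, writer touches only the tail
and reader only the head, which is what permits the "simultaneous I/O"
mentioned above.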
Chapter 4
Inter task Communication
7. 2 Mail Boxes

• A mailbox is a mutually agreed-upon memory location that two or more
tasks can use to pass data.
• The tasks rely on the central scheduler to write to the location via a post
operation and to read it via a pend operation.
• The data can be a single piece of data or a pointer to a larger data structure
such as a list or an array.
• In most implementations, when the key is taken from the mailbox, the
mailbox is emptied.
• Thus, although several tasks can pend on the mailbox, only one task can receive
the key; the key represents access to a critical resource.
Chapter 4
Inter task Communication
7. 2 Mail Boxes

Procedure post
    mailbox gets data
End

Procedure pend
    task reads the contents of the mailbox into data; otherwise the task is
    suspended
End

Task #100 → Mailbox → Printer
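A single-slot mailbox can be sketched in C as follows. The names are
illustrative, and a real RTOS pend would suspend the calling task rather than
return immediately; this non-blocking variant returns whether a key was taken:

```c
#include <stdbool.h>

static int  mailbox;             /* the agreed-upon memory location */
static bool full = false;        /* is a key currently posted?      */

/* post: place data (the "key") in the mailbox */
void post(int data) {
    mailbox = data;
    full = true;
}

/* pend: take the key if present, emptying the mailbox; a real RTOS
   suspends the task here instead of returning false */
bool pend(int *data) {
    if (!full) return false;
    *data = mailbox;
    full = false;                /* only one task receives the key */
    return true;
}
```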


Chapter 4
Inter task Communication
7. 2.3 Ring buffer and queues to control pool of devices

Requests are placed in a ring buffer; a pool of server tasks removes requests
from the buffer and dispatches them to the devices.
Chapter 4
Inter task Communication
7. 3 Critical Regions

• A critical region is a section of code that operates on a shared resource.
• Two tasks must not be allowed to be in their critical regions at the same time.
• Simultaneous use of a serially reusable resource is called a collision.
• Hence, collisions must be prevented.
Chapter 4
Inter task Communication
7. 4 Semaphore

• A semaphore is a mechanism used to protect critical regions.
• It consists of a variable that acts as a lock to protect resources.
• Two operations, wait and signal, are used to set or reset the semaphore.
• The primitive operations are defined by the code below:

Procedure wait(S)
    while S = TRUE do ;    { busy wait }
    S = TRUE
End

Procedure signal(S)
    S = FALSE
End
Chapter 4
Inter task Communication
7. 4.1 mailboxes and Semaphores

• Mailboxes can be used to implement semaphores if semaphore primitives are
not provided by the operating system.
• This has the advantage that the pend operation suspends the waiting
process rather than busy-waiting on the semaphore.

Proc wait(T)
    pend(temp, T)
End

Proc signal(T)
    post(key, T)
End
Chapter 4
Inter task Communication
7. 4.2 Counting Semaphores

• Used to protect pools of resources.
• The semaphore must be initialized to the number of resources in the pool.

Proc wait(Cs)
    Cs = Cs - 1
    while Cs < 0 do { wait }
End

Proc signal(Cs)
    Cs = Cs + 1
End
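The pseudocode can be sketched in C. A real kernel would suspend the caller
inside wait; this non-blocking variant (names are illustrative) instead
returns whether a resource was granted, which keeps the counting logic
visible:

```c
/* Counting semaphore sketch: Cs is initialized to the number of
   resources in the pool. */
static int Cs;

void cs_init(int resources) { Cs = resources; }

/* wait(Cs): take one resource if any remain; a real kernel would
   block here instead of returning 0. */
int cs_trywait(void) {
    if (Cs <= 0) return 0;
    Cs = Cs - 1;
    return 1;
}

/* signal(Cs): return one resource to the pool. */
void cs_signal(void) { Cs = Cs + 1; }
```

Note that, like the pseudocode, this is not safe against preemption between
the test and the decrement; in a real system the operations must be atomic.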
Chapter 4
Inter task Communication
7. 4.3 Problems with Semaphores

The naive implementation of wait is not atomic; an interrupt occurring between
the test and the store allows two tasks to acquire the semaphore:

@1: LOAD  R1,S      ; read the semaphore
    TEST  R1,1      ; S = TRUE ?
    JEQ   @1        ; if so, keep testing
                    ; <-- an interrupt here breaks mutual exclusion
    STORE S,1       ; S = TRUE
Chapter 4
Inter task Communication
7. 4.4 The Test-and-Set Instruction

• To make the testing of a memory location and the storing of a value into it a
single atomic operation, some instruction sets provide a test-and-set
instruction.
• The main idea is to combine the testing and storing functions into a single
atomic operation.
• The test-and-set instruction fetches a word from memory and tests its
high-order bit. If the bit is zero, it is set to 1 and a condition code of zero
is returned; if the bit is 1, a condition code of 1 is returned.
Chapter 4
Inter task Communication
7. 4.4 The Test-and-Set Instruction

Procedure wait(S)
    while test-and-set(S) = TRUE do { wait }
End

Procedure Signal(S)
begin
    S = FALSE;
End
Chapter 4
Inter task Communication
7. 4.4 The Test-and-Set Instruction

The wait procedure would generate assembly language code that may look like:

loop:
    TANDS S
    JNE   loop
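A minimal sketch of the same semaphore built on an atomic test-and-set,
using C11's atomic_flag, whose test-and-set is a single atomic operation
exactly as described above (function names are illustrative):

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

/* wait(S): atomic_flag_test_and_set returns the PREVIOUS value,
   so loop while the flag was already set by another task. */
void wait_lock(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                        /* busy wait */
}

/* signal(S): S = FALSE */
void signal_unlock(void) {
    atomic_flag_clear(&lock);
}
```

This is a spinlock; on a single-processor real-time kernel the busy wait would
normally be replaced by suspending the task.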
Chapter 4
Inter task Communication
7. 5 Event Flags and Signals
• Certain languages provide synchronization mechanisms called event
flags.
• These constructs allow the specification of an event that causes the
setting of some flag.
• A second process is designed to react to this flag.
• Event flags in essence represent simulated interrupts, created by the
programmer.
• Raising the event transfers flow-of-control to the operating system, which
can then invoke the appropriate handler.
• Tasks that are waiting for the occurrence of an event are said to be blocked.
Chapter 4
Inter task Communication
7. 5 Event Flags and Signals
• For example, in ANSI C the raise and signal facilities are provided.
signal installs a software interrupt handler that reacts to an
exception indicated by a raise operation.
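A minimal ANSI C sketch of this mechanism; the handler and flag names are
illustrative, and SIGINT is used simply because it is one of the signals the
C standard guarantees:

```c
#include <signal.h>

/* The event flag the "second process" reacts to. */
static volatile sig_atomic_t event_flag = 0;

/* Handler invoked when the event is raised (a simulated interrupt). */
static void on_event(int sig) {
    (void)sig;
    event_flag = 1;
}

void setup_and_raise(void) {
    signal(SIGINT, on_event);    /* install the software interrupt handler */
    raise(SIGINT);               /* raise the event from software */
}
```

The call to raise transfers flow-of-control to the handler before returning,
mirroring the "simulated interrupt" description above.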
Chapter 4
Inter task Communication
7. 6 Deadlock
• When tasks compete for the same set of two or more serially
reusable resources, a deadlock situation may ensue.
• The notion of deadlock is illustrated by the example below:

Procedure Task_A
    Wait(S);
    Update database file;
    Wait(R);
    Update array;
    Signal(R);
    Signal(S);
End

Procedure Task_B
    Wait(R);
    Update array;
    Wait(S);
    Update database file;
    Signal(S);
    Signal(R);
End
Chapter 4
Inter task Communication
7. 6 Deadlock
• Deadlock is a serious problem because it cannot always be detected
through testing.
• Starvation – at least one process can satisfy its requirements; the others
cannot.
• Deadlock – no process can satisfy its requirements because all
are blocked.
Four conditions are necessary for a deadlock:
- Mutual exclusion
- Circular wait
- Hold and wait
- No preemption
Chapter 4
Inter task Communication
7. 6 Deadlock
Mutual Exclusion:
• Mutual exclusion applies to those resources that are not sharable.
• Examples: printers, disk devices, and output channels.
• The mutual-exclusion condition can be removed by making such resources
sharable among application tasks.
Circular Wait:
• The circular-wait condition occurs when a chain of processes exists, each
holding a resource needed by the next process in the chain.
• One way to break circular wait is to impose an ordering on resources and to
force all processes to request resources in increasing order of enumeration.
Chapter 4
Inter task Communication
7. 6 Deadlock
Device           Number
Disk             1
Printer          2
Motor control    3
Monitor          4
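Ordered acquisition can be sketched using the device numbers in the table
above; this is a minimal illustration of the ordering check, not a full lock
implementation:

```c
/* Device numbers follow the table in the notes. */
enum { DISK = 1, PRINTER = 2, MOTOR = 3, MONITOR = 4 };

/* A process must request resources in increasing order of their
   assigned numbers; returns 1 if the requested order is legal. */
int legal_order(int first, int second) {
    return first < second;
}
```

Because every task acquires, say, the disk before the printer and never the
reverse, no cycle of waiting tasks can form, which breaks the circular-wait
condition.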
Chapter 4
Inter task Communication
7. 6 Deadlock
Hold and Wait:
• The hold-and-wait condition occurs when a process requests a resource
and then locks it until its subsequent resource requests are
filled.
• One solution is to allocate to a process all of its potentially
required resources at the same time; this can lead to starvation.
• Alternatively, never allow a process to lock more than one resource at a time.
Chapter 4
Inter task Communication
7. 6 Deadlock
No Preemption:
• The absence of preemption can lead to deadlock.
• If a low-priority process holds a resource protected by semaphore S,
and a high-priority task interrupts and then waits on semaphore S,
the priority inversion causes the high-priority task to wait forever,
since the low-priority task can never run to release the resource and
signal the semaphore.
• If the higher-priority task is allowed to preempt the lower one, the
deadlock can be eliminated.
Chapter 4
Inter task Communication
7. 6.1 Deadlock Avoidance
• Several techniques for avoiding deadlock are available:
- If the semaphores protecting critical regions are implemented with
mailboxes with timeouts, deadlock cannot occur, but starvation
can.
- Allow preemption: high-priority tasks may preempt low-priority
tasks.
- Eliminate the hold-and-wait condition.
- Another technique uses the banker's algorithm, but it is too slow
for many real-time systems.
Chapter 4
Inter task Communication
7. 6.2 Detect and Recover
• One approach, called the ostrich algorithm, advises ignoring the problem
if it occurs infrequently.
• Another is to restart the system completely, which may not be
acceptable for critical systems.
• If a deadlock is detected, roll back to a pre-deadlock state.
Chapter 5
System Performance Analysis And Optimization
Key Points of the Chapter

• Noninterrupt-driven systems can be analyzed for real-time behavior.
• It is impossible to predict exactly the performance of interrupt-driven systems.
• Performance is studied based on the following parameters:
- Response time: the time between receipt of an interrupt and
completion of its processing.
- Time loading: the percentage of time the CPU is doing useful processing.
- Memory loading: the percentage of memory that is being used.
Chapter 5
System Performance Analysis And Optimization
9.1 Response Time Calculation

• Depends on the type of the system.

Polled loops:
• The response-time delay consists of three components:
- the hardware delay required for the external device to set the software
flag,
- the time required to test the flag,
- the time needed to process the event.
RT = set flag (nanoseconds) + check flag (microseconds) + process event
(milliseconds)
Overlapping events are not allowed.
Chapter 5
System Performance Analysis And Optimization
9.1 Response Time Calculation

Coroutines / phase-driven code:
• The response time is calculated by tracing the execution path through the
phases.

Proc A
    while TRUE do
        case phase1:
        case phase2:
        case phase3:
End
Chapter 5
System Performance Analysis And Optimization
9.1 Response Time Calculation
Interrupt systems:
• The response time is calculated from the following factors:
RT = interrupt latency + context switch time +
     schedule time + process interrupt time
RT = Li + Cs + Si + Ai

Interrupt latency:
• The delay between when an interrupt occurs and when the CPU begins reacting
to it.
Chapter 5
System Performance Analysis And Optimization
9.1 Response Time Calculation
Interrupt latency:
• There are two cases.
• Case 1: a high-priority interrupt occurs:
Li = Lp + max { Lm, Ld }
Lp = the propagation delay of the interrupt signal:
- the time between when an external device initiates the signal and when it
actually latches into the interrupt controller;
- depends on the transit time of the signal across the wires
- and the switching speed of the interrupt controller.
Lm = the longest completion time of an instruction in the interrupted
process.
Chapter 5
System Performance Analysis And Optimization
9.1 Response Time Calculation
Interrupt latency (continued):
• Case 1 (continued): Li = Lp + max { Lm, Ld }
Ld = the maximum time that interrupts are disabled by lower-priority tasks.

• Case 2: a low-priority interrupt occurs:
Li = Lh
Lh = the time needed to complete the pending higher-priority routines;
in general, impossible to determine in advance.
