
Scheduling Algorithmic Research

Rami Abielmona
94.571 (ELG 6171)
Monday March 27, 2000
Prof. T. W. Pearce
Introduction

• Problem definition:
– “One CPU with a number of processes. Only one process can use the CPU at a time, and each process is specialized in one, and only one, task. What’s the best way to organize the processes (schedule them)?” [1]
– “How will the CPU time be divided among the processes and threads competing to use it?” [2]
Embedded OS Architecture
• Kernel:
– The executor
• Executive:
– The manager
• Application Programs:
– The programmer’s tasks
• Real World Interfacing:
– S/W handling the H/W
Basic Assumptions
• A pool of runnable processes is contending for one CPU;
• The processes are independent and compete for resources;
• The job of the scheduler is to distribute the scarce resource
of the CPU to the different processes “fairly” and in an
optimal way;
• The job of the dispatcher is to provide the mechanism for
running the processes;
• The OS is multitasking, but runs on a single processor;
• Only CPU scheduling is considered (the lowest level of
scheduling).
Evaluation Characteristics
• CPU utilization: keep it as high as possible
• Throughput: number of processes completed per unit time
• Waiting time: amount of time spent ready to run but not running
• Response time: amount of time between submission of a request and the first response to the request
• Scheduler efficiency: minimize the overhead
• Turnaround time: mean time from submission to completion
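To make these metrics concrete, the C++ sketch below (not part of the original slides) computes the mean waiting and turnaround times for an assumed four-task set run under non-preemptive FCFS; the arrival times, burst lengths and field names are illustrative assumptions.

```cpp
#include <iostream>
#include <vector>

// Hypothetical task set: arrival time and CPU burst length (same time units).
struct Task { int arrival; int burst; };

int main() {
    std::vector<Task> tasks = {{0, 8}, {1, 4}, {2, 9}, {3, 5}};  // assumed example data

    int clock = 0;
    double totalWait = 0, totalTurnaround = 0;
    for (const Task& t : tasks) {                   // FCFS: run in arrival order
        if (clock < t.arrival) clock = t.arrival;   // CPU idles until the task arrives
        int wait = clock - t.arrival;               // time spent ready but not running
        clock += t.burst;                           // task runs to completion
        totalWait += wait;
        totalTurnaround += clock - t.arrival;       // submission to completion
    }
    std::cout << "Mean waiting time:    " << totalWait / tasks.size() << "\n";
    std::cout << "Mean turnaround time: " << totalTurnaround / tasks.size() << "\n";
}
```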
Processes and Resources
• Resources:
– Preemptible:
• Take the resource away, use it for something else, then give it back (e.g. processor or I/O channel)
– Non-preemptible:
• Once given, it can’t be reused until the process gives it back (e.g. file space or terminal)
• Processes:
– IO bound:
• Perform lots of IO operations
• IO burst → short CPU burst to process IO → IO burst
– CPU bound:
• Perform lots of computation and do little IO
• CPU burst → small IO burst → CPU burst
Process State Transitions
• The state of a process, at any given time, is one of the following minimal set:
– Running:
• The CPU is currently executing
the code belonging to the
process.
– Ready:
• The process could be running,
but another process has the
CPU.
– Waiting:
• Before the process can run,
some external event must
occur.
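A minimal sketch of how this three-state model could be encoded, assuming C++ and invented names (ProcState, legalTransition); the transition rules simply restate the slide’s definitions.

```cpp
#include <iostream>

// Minimal three-state process model; names are assumptions made for this sketch.
enum class ProcState { Running, Ready, Waiting };

// Legal transitions in the minimal model:
//   Ready   -> Running  (dispatched)
//   Running -> Ready    (preempted, e.g. time slice expired)
//   Running -> Waiting  (blocks until some external event occurs)
//   Waiting -> Ready    (the awaited event occurs)
bool legalTransition(ProcState from, ProcState to) {
    switch (from) {
        case ProcState::Ready:   return to == ProcState::Running;
        case ProcState::Running: return to == ProcState::Ready || to == ProcState::Waiting;
        case ProcState::Waiting: return to == ProcState::Ready;
    }
    return false;
}

int main() {
    // A waiting process cannot be dispatched directly; it must become ready first.
    std::cout << legalTransition(ProcState::Waiting, ProcState::Running) << "\n";  // prints 0
}
```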
Types of Schedulers
• Long-term scheduler:
– admits new processes to the system;
– required because each process needs a portion of the
available memory for its code and data.
• Medium-term scheduler:
– is not found in all systems;
– required to control the temporary removal of a process from memory when that process can be swapped out.
• Short-term scheduler:
– determines the assignment of the CPU to ready processes;
– required because of IO requests and completions.
The Contestants (1)
• First-Come First-Serve (FCFS)
– One ready queue;
– OS runs the process at head of the queue;
– New processes come in at the end of the queue;
– Running process does not give up the CPU until it terminates
or it performs IO.
• Round Robin
– Process runs for one time slice, then is moved to the back of the queue;
– Each process gets equal share of the CPU.
The Contestants (2)
• Shortest Time to Completion First (STCF)
– Process with the shortest computation time left is picked;
– Preemptive and non-preemptive variants exist;
– Requires knowledge of the future.
• Exponential Queue (Multi-level Feedback)
– Gives newly runnable processes a high priority and a very
short time slice;
– If process uses up the time slice without blocking then
decrease priority by one and double time slice for next time;
– Solves both efficiency and response time problems.
The Contestants (3) - Priorities
• Priority Systems
– The highest priority ready process is selected
– In case of a tie, FCFS can be used
– Priorities could be assigned:
• Externally (e.g. by a system manager)
• Internally (e.g. by some algorithm)
• Combination of external and internal
– Non-preemptive schemes:
• Once a process starts executing, it is allowed to continue until it voluntarily yields the CPU
– Preemptive schemes:
• A running process may be forced to yield the CPU by an external event rather than by its own action
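As a sketch of “highest-priority ready process first, FCFS on ties”, the fragment below (mine, not the project’s code) relies on std::max_element returning the first of several equally maximal entries, so queue (arrival) order breaks ties; PCB and its fields are assumptions.

```cpp
#include <algorithm>
#include <deque>
#include <iostream>

// Hypothetical PCB view used only for this sketch; higher value = higher priority.
struct PCB { int pid; int priority; };

// Pick the highest-priority ready process; ties are broken FCFS (queue order),
// because std::max_element returns the FIRST maximal element it finds.
PCB* selectNext(std::deque<PCB>& readyQueue) {
    if (readyQueue.empty()) return nullptr;
    auto it = std::max_element(readyQueue.begin(), readyQueue.end(),
        [](const PCB& a, const PCB& b) { return a.priority < b.priority; });
    return &*it;
}

int main() {
    std::deque<PCB> ready = {{1, 5}, {2, 9}, {3, 9}};             // PIDs 2 and 3 tie
    std::cout << "next pid: " << selectNext(ready)->pid << "\n";  // prints 2 (FCFS tie-break)
}
```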
First-Come First-Serve

• Non-preemptive FCFS (no priority scheme)
– Simplest implementation of scheduling algorithms
– Used on timeshared systems (with timer interruption)
• Non-preemptive FCFS (with priority scheme)
– Next highest priority process is picked when CPU is yielded
– Once a process grabs the CPU, it keeps it until completion
– Rarely used in real-time systems
• Preemptive FCFS (with priority scheme)
– Most popular FCFS algorithm
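A toy dispatcher loop for the simplest case above, non-preemptive FCFS with no priorities: the head of the ready queue keeps the CPU until it completes. The process names and burst lengths are invented for illustration.

```cpp
#include <deque>
#include <iostream>
#include <string>

// Illustrative only: each "process" is a name plus the CPU time it still needs.
struct Proc { std::string name; int burst; };

int main() {
    std::deque<Proc> readyQueue = {{"A", 3}, {"B", 1}, {"C", 2}};  // assumed arrival order

    // Non-preemptive FCFS: run the head of the queue to completion, then the next.
    while (!readyQueue.empty()) {
        Proc p = readyQueue.front();
        readyQueue.pop_front();
        std::cout << p.name << " runs for " << p.burst << " units and terminates\n";
    }
}
```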
Round Robin
• Used mostly on timeshared
systems
• Gives multiple users slices of the CPU on a “round robin” basis
• Majority of users have the
same priority
• Not a popular scheme with
dynamic priority systems
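A matching round-robin sketch (illustrative, with an assumed quantum of 2 time units): the head of the queue runs one time slice and, if unfinished, rejoins at the back, so every process gets an equal share.

```cpp
#include <algorithm>
#include <deque>
#include <iostream>
#include <string>

struct Proc { std::string name; int remaining; };  // illustrative fields

int main() {
    const int quantum = 2;                                       // assumed time slice
    std::deque<Proc> readyQueue = {{"A", 5}, {"B", 3}, {"C", 1}};

    // Round robin: run the head for one time slice, then move it to the back.
    while (!readyQueue.empty()) {
        Proc p = readyQueue.front();
        readyQueue.pop_front();
        int ran = std::min(quantum, p.remaining);
        p.remaining -= ran;
        if (p.remaining > 0) {
            std::cout << p.name << " runs " << ran << " unit(s), back of the queue\n";
            readyQueue.push_back(p);
        } else {
            std::cout << p.name << " runs " << ran << " unit(s), done\n";
        }
    }
}
```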
Shortest Time to Completion

• Priorities are assigned in inverse order of the time needed for completion of the entire job
• Minimizes average turnaround time
• Exponential averaging is used to estimate the process’ burst duration
• A job exceeding the resource estimation is aborted
• A job exceeding the time estimation is preempted
• Store estimated value in PCB for the current burst, and compare with
actual value
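The usual exponential-averaging rule is estimate(n+1) = alpha * actual(n) + (1 - alpha) * estimate(n). A small sketch, with alpha = 0.5 and the initial estimate chosen arbitrarily; only the update rule itself is standard.

```cpp
#include <iostream>

// Exponential averaging of CPU burst length:
//   nextEstimate = alpha * actualBurst + (1 - alpha) * previousEstimate
double nextEstimate(double actual, double previousEstimate, double alpha = 0.5) {
    return alpha * actual + (1.0 - alpha) * previousEstimate;
}

int main() {
    double estimate = 10.0;                            // initial guess stored in the PCB
    double observedBursts[] = {6.0, 4.0, 6.0, 13.0};   // assumed measurements
    for (double actual : observedBursts) {
        std::cout << "predicted " << estimate << ", actual " << actual << "\n";
        estimate = nextEstimate(actual, estimate);     // update after the burst completes
    }
}
```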
Exponential Queues
• Popular in interactive systems
• A single queue is maintained for each
priority level
• A new process is added at the end of the
highest priority queue
– It is allotted a single time quantum when it reaches the front
• If it yields the CPU within the time quantum, it is moved to the rear of the same queue
• If not, it is placed at the rear of the next queue down
• Dispatcher selects the head of the highest
priority queue
– A job that “succeeds” moves up
– A job that “fails” moves down
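A simplified sketch of the demotion half of this policy (promotion of jobs that “succeed” is omitted); the number of levels, the doubling quanta and all data are assumptions.

```cpp
#include <algorithm>
#include <deque>
#include <iostream>
#include <vector>

// Exponential-queue sketch: queue 0 is the highest priority, and the time
// quantum doubles at every level down.
struct Proc { int pid; int remaining; };

int main() {
    const int levels = 3;
    std::vector<std::deque<Proc>> queues(levels);
    queues[0] = {{1, 7}, {2, 1}};                     // new processes enter the top queue

    for (int level = 0; level < levels; ++level) {
        int quantum = 1 << level;                     // 1, 2, 4, ... time units
        while (!queues[level].empty()) {
            Proc p = queues[level].front();
            queues[level].pop_front();
            p.remaining -= std::min(quantum, p.remaining);
            if (p.remaining == 0)
                std::cout << "pid " << p.pid << " finished in queue " << level << "\n";
            else if (level + 1 < levels)
                queues[level + 1].push_back(p);       // used its whole slice: demote
            else
                queues[level].push_back(p);           // bottom queue: keep cycling
        }
    }
}
```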
Implementation - Data Structures
• A queue of processes is implemented by linking PCBs together using a linked list (with first and last node pointers)
• Since this project’s queues are known to be short, priority is implemented by using priority queues, and
PriorityInsert() function calls
• Different queues are used to represent different states of processes (Ready, Suspended)
• Self-release of CPU
– Internal signal
– Process completion
• Forced-release of CPU
– Time slot expired
– External signal
[PCB node fields: Process ID, Status, Priority, Next Process]
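One possible shape for this structure, assuming C++; the field types and the tie-breaking rule inside PriorityInsert() are guesses for illustration, not the project’s actual code.

```cpp
#include <iostream>
#include <string>

// PCB node mirroring the fields sketched on the slide; exact types are assumptions.
struct PCB {
    int         pid;
    std::string status;      // e.g. "Ready", "Suspended"
    int         priority;    // higher value = higher priority (assumption)
    PCB*        next = nullptr;
};

// Singly linked queue with first and last node pointers, as on the slide.
struct ProcQueue {
    PCB* first = nullptr;
    PCB* last  = nullptr;

    // Insert keeping the list sorted by descending priority; equal priorities
    // stay in arrival order (FCFS among ties).
    void PriorityInsert(PCB* p) {
        if (!first || p->priority > first->priority) {   // new head of the queue
            p->next = first;
            first = p;
            if (!last) last = p;
            return;
        }
        PCB* cur = first;
        while (cur->next && cur->next->priority >= p->priority) cur = cur->next;
        p->next = cur->next;
        cur->next = p;
        if (!p->next) last = p;
    }
};

int main() {
    ProcQueue ready;
    PCB a{1, "Ready", 3}, b{2, "Ready", 7}, c{3, "Ready", 7};
    ready.PriorityInsert(&a);
    ready.PriorityInsert(&b);
    ready.PriorityInsert(&c);
    for (PCB* p = ready.first; p; p = p->next)
        std::cout << "pid " << p->pid << " (prio " << p->priority << ")\n";  // 2, 3, 1
}
```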
Implementation - Progress
• Used an OO template in order to easily and
efficiently implement any necessary queue
• Used structurally defined functions to “simulate” the
scheduler and dispatcher
– fill_poolQ(), get_tasks(src,dest), sort(queue)
• Implemented FCFS, RR and STCF
– RR was implemented to fairly compare schemes
• Theoretical work still needs to be done
– comparison and evaluation
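A sketch of what such a template might look like; TaskQueue and getTasks() are invented stand-ins for the project’s queue template and get_tasks(src, dest), not its real code.

```cpp
#include <deque>
#include <iostream>
#include <string>

// One template serves every queue in the simulation (pool, ready, suspended, ...).
template <typename Task>
class TaskQueue {
public:
    void push(const Task& t) { items_.push_back(t); }
    Task pop()               { Task t = items_.front(); items_.pop_front(); return t; }
    bool empty() const       { return items_.empty(); }
private:
    std::deque<Task> items_;
};

// Illustrative analogue of get_tasks(src, dest): drain one queue into another.
template <typename Task>
void getTasks(TaskQueue<Task>& src, TaskQueue<Task>& dest) {
    while (!src.empty()) dest.push(src.pop());
}

int main() {
    TaskQueue<std::string> pool, ready;
    pool.push("task1");
    pool.push("task2");
    getTasks(pool, ready);    // move everything from the pool to the ready queue
    while (!ready.empty()) std::cout << ready.pop() << "\n";
}
```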
Implementation - Issues
• The underlying interrupt system basically readies the task for a switch, but does not perform the switch
• Process switches are directly handled by the scheduler
• This causes a delay from the time of readiness to the time of the switch, which is not tolerated for, let’s say, exceptions
• The solution is to completely by-pass the scheduler (OS) and go directly to an ISR [1]
• Each process is allocated its own private stack and workspace
• This is done to avoid different processes overwriting each other’s data and code
• This is based on a strict process model, where all heavyweight processes do not share system resources
• Code that can be shared safely is called ‘re-entrant’ code [1]
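To illustrate the re-entrancy point, a contrived C++ comparison (mine, not from [1]): the first function keeps its result in a shared static buffer and is unsafe to share between processes or with an ISR, while the second keeps all state in caller-owned storage and can be shared safely.

```cpp
#include <iostream>

// NOT re-entrant: the static buffer is shared by every caller, so two processes
// (or an ISR interrupting a task) running this at once corrupt each other's result.
char* toHexShared(unsigned value) {
    static char buf[9];
    for (int i = 7; i >= 0; --i) { buf[i] = "0123456789ABCDEF"[value & 0xF]; value >>= 4; }
    buf[8] = '\0';
    return buf;
}

// Re-entrant: all state lives on the caller's private stack or in caller-supplied
// storage, so the code itself can be shared between processes.
void toHex(unsigned value, char out[9]) {
    for (int i = 7; i >= 0; --i) { out[i] = "0123456789ABCDEF"[value & 0xF]; value >>= 4; }
    out[8] = '\0';
}

int main() {
    char buf[9];
    toHex(0xBEEF, buf);
    std::cout << buf << "\n";   // prints 0000BEEF
}
```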
Implementation - Analysis
• Direct analysis
– Pick a task set and observe results
• Apply queueing theory to obtain results
– Multi-level feedback queue scheme
• Simulations of scheme implementations
– FCFS, RR, STCF
• Innovations and projections
– “Lottery” scheduling and “own” algorithm
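For reference, a sketch of the lottery idea (ticket counts and the rand()-based draw are assumptions): each process holds some tickets, the dispatcher draws one at random, and CPU share ends up roughly proportional to tickets held.

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>

// Each process holds some number of lottery tickets.
struct Proc { int pid; int tickets; };

// Draw a winning ticket uniformly at random and return the PID that holds it.
int pickWinner(const std::vector<Proc>& procs) {
    int total = 0;
    for (const Proc& p : procs) total += p.tickets;
    int draw = std::rand() % total;                // winning ticket number
    for (const Proc& p : procs) {
        if (draw < p.tickets) return p.pid;        // ticket falls in this process's range
        draw -= p.tickets;
    }
    return -1;                                     // unreachable when total > 0
}

int main() {
    std::vector<Proc> procs = {{1, 75}, {2, 25}};  // pid 1 should win about 75% of slices
    std::vector<int> wins(3, 0);
    for (int i = 0; i < 10000; ++i) ++wins[pickWinner(procs)];
    std::cout << "pid 1: " << wins[1] << "  pid 2: " << wins[2] << "\n";
}
```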
References

1) Cooling, J.E. Software Design for Real-Time Systems. London, UK: Chapman & Hall, 1995.
2) Stallings, William. Operating Systems: Internals and Design Principles. Upper Saddle River, NJ: Prentice Hall, 1998.
3) http://www.cs.wisc.edu/~bart/537/lecturenotes/s11.html - viewed on 03/24/2000
4) Savitzky, Stephen. Real-Time Microprocessor Systems. N.Y.: Van Nostrand Reinhold Company, 1985.
5) Undergraduate Operating System Course Notes (Ottawa University, 1998)
