
CHAPTER 25

INPUT/OUTPUT AND FILE SYSTEM MANAGEMENT


DEFINITION:
An operating system is a computer program that manages the hardware and software resources of a computer. It provides the interface between application programs and the system hardware. In general, an OS for embedded control systems has the following responsibilities: task management and interrupt servicing, inter-process communication, and memory management.


CONCEPT:

Embedded OSes provide memory management support for a temporary or permanent file system storage scheme on various memory devices, such as Flash, ROM, or hard disk.

File systems are essentially a collection of files along with their management protocols.

FILE SYSTEM MANAGEMENT STANDARDS


FAT32 (File Allocation Table): Memory is divided into the smallest unit possible (called sectors). A group of sectors is called a cluster. An OS assigns a unique number to each cluster and tracks which files use which clusters. FAT32 supports 32-bit addressing of clusters, as well as smaller cluster sizes than those of the FAT predecessors (FAT, FAT16, etc.).

NFS (Network File System): Based on RPC (Remote Procedure Call) and XDR (External Data Representation), NFS was developed to allow external devices to mount a partition on a system as if it were in local memory. This allows for fast, seamless sharing of files across a network.

FFS (Flash File System): Designed for Flash memory.

DosFS: Designed for real-time use of block devices (disks) and compatible with the MS-DOS file system.

RawFS: Provides a simple raw file system that essentially treats an entire disk as a single large file.

TapeFS: Designed for tape devices that do not use a standard file or directory structure on tape. Essentially treats the tape volume as a raw device in which the entire volume is a large file.
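To illustrate the cluster-tracking idea behind FAT32, the following minimal sketch in C walks a file's cluster chain by looking up each next-cluster entry in an in-memory copy of the allocation table. The table contents, end-of-chain marker, and function names are invented for the example and do not reflect any particular FAT implementation.

#include <stdint.h>
#include <stdio.h>

#define FAT_EOC 0x0FFFFFFFu   /* hypothetical end-of-chain marker */

/* Walk a FAT32-style cluster chain starting at 'first_cluster', printing
 * every cluster that belongs to the file. 'fat' is an in-memory copy of
 * the file allocation table, indexed by cluster number.                 */
static void walk_cluster_chain(const uint32_t *fat, uint32_t first_cluster)
{
    uint32_t cluster = first_cluster;
    while (cluster < FAT_EOC) {
        printf("file occupies cluster %u\n", cluster);
        cluster = fat[cluster] & 0x0FFFFFFFu;  /* only 28 bits are significant in FAT32 */
    }
}

int main(void)
{
    /* Toy table: a file starts in cluster 2, continues in 5, then 6, then ends. */
    uint32_t fat[8] = {0};
    fat[2] = 5;
    fat[5] = 6;
    fat[6] = FAT_EOC;

    walk_cluster_chain(fat, 2);
    return 0;
}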

In relation to file systems, a kernel typically provides file system management mechanisms for, at the very least:

Mapping files onto secondary storage, Flash, or RAM (for instance).

Supporting the primitives for manipulating files and directories.

FILE DEFINITIONS AND ATTRIBUTES: Naming Protocol, Types (i.e., executable, object, source, multimedia, etc.), Sizes, Access Protection (Read, Write, Execute, Append, Delete, etc.), Ownership, and so on.

FILE OPERATIONS: Create, Delete, Read, Write, Open, Close, and so on.

FILE ACCESS METHODS: Sequential, Direct, and so on.

Directory access, creation, and deletion.
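To make the operation and access-method lists above concrete, the sketch below uses the standard C file API (which most embedded OSes expose in some form) to create a file, write to it, and then read it back both sequentially and directly by seeking to an offset. The file name is made up for the example.

#include <stdio.h>

int main(void)
{
    char buf[9] = {0};

    /* Create: open a new file for writing, then Write and Close. */
    FILE *f = fopen("sample.dat", "wb");
    if (f == NULL)
        return 1;
    fwrite("embedded", 1, 8, f);
    fclose(f);

    /* Sequential access: Open and Read bytes in order from the start. */
    f = fopen("sample.dat", "rb");
    if (f == NULL)
        return 1;
    fread(buf, 1, 8, f);
    printf("sequential read: %s\n", buf);

    /* Direct access: seek straight to byte offset 3 and read from there. */
    fseek(f, 3, SEEK_SET);
    fread(buf, 1, 5, f);
    buf[5] = '\0';
    printf("direct read at offset 3: %s\n", buf);

    fclose(f);

    /* Delete the file when done. */
    remove("sample.dat");
    return 0;
}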

OSes vary in terms of the primitives used for manipulating files, what memory devices files can be mapped to, and what file systems are supported.

Most OSes use their standard I/O interface between the file systems and the memory device drivers.


This allows for one or more file systems to operate in conjunction with the

operating system.
I/O Management in embedded OSes provides an additional abstraction layer
away from the systems
hardware and device drivers.

An OS provides a uniform interface for I/O devices that perform a wide variety of functions via the available kernel system calls, providing protection to I/O devices (since user processes can only access I/O via these system calls) and managing a fair and efficient I/O sharing scheme among the multiple processes.
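As a brief illustration of this uniform interface, in a POSIX-style embedded OS a user process reaches every I/O device through the same small set of system calls (open, read, close), no matter what the device actually is; the device path below is hypothetical.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    char buf[32];

    /* The same calls work whether "/dev/sensor0" is a serial port, a
     * sensor, or a file: the kernel routes them to the right driver.  */
    int fd = open("/dev/sensor0", O_RDONLY);   /* hypothetical device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Protection: only the kernel touches the hardware on the process's behalf. */
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        printf("read %zd bytes from the device\n", n);

    close(fd);
    return 0;
}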

An OS also needs to manage synchronous and asynchronous communication coming from I/O to its processes; in essence, it must be event-driven, responding to requests from both sides (the higher-level processes and the low-level hardware) and managing the data transfers.

In order to accomplish these goals, an OS's I/O management scheme is typically made up of a generic device-driver interface both to user processes and device drivers, as well as some type of buffer-caching mechanism.

Device driver code controls a board's I/O hardware. In order to manage I/O, an OS may require all device driver code to contain a specific set of functions, such as start-up, shutdown, enable, disable, and so on.
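One common way an OS enforces such a required set of functions is a table of function pointers that every driver fills in, so the kernel calls the driver only through the table. The structure and names below are illustrative assumptions, not taken from any particular OS.

#include <stdio.h>

/* Hypothetical set of entry points every device driver must provide. */
struct driver_ops {
    int  (*startup)(void);     /* bring the hardware to a known state  */
    void (*shutdown)(void);    /* release the hardware                 */
    void (*enable)(void);      /* allow the device to raise interrupts */
    void (*disable)(void);     /* mask the device off                  */
};

/* A trivial driver that just reports what was called. */
static int  uart_startup(void)  { puts("uart: startup");  return 0; }
static void uart_shutdown(void) { puts("uart: shutdown"); }
static void uart_enable(void)   { puts("uart: enable");   }
static void uart_disable(void)  { puts("uart: disable");  }

static const struct driver_ops uart_driver = {
    uart_startup, uart_shutdown, uart_enable, uart_disable
};

/* The kernel side only ever sees the generic table. */
static void kernel_bring_up(const struct driver_ops *drv)
{
    if (drv->startup() == 0)
        drv->enable();
}

int main(void)
{
    kernel_bring_up(&uart_driver);
    uart_driver.disable();
    uart_driver.shutdown();
    return 0;
}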

A kernel then manages I/O devices, and in some OSes file systems as well, as black boxes that are accessed by higher-layer processes through some set of generic APIs.

OSes can vary widely in terms of what types of I/O APIs they provide to upper layers. For example, under Jbed, or any Java-based scheme, all resources (including I/O) are viewed and structured as objects. VxWorks, on the other hand, provides a communications mechanism, called pipes, for use with the VxWorks I/O subsystem. Under VxWorks, pipes are virtual I/O devices that include an underlying message queue associated with that pipe. Via the pipe, I/O access is handled as either a stream of bytes or one byte at any given time.
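A brief sketch of the VxWorks pipe idea, based on the documented pipeDrv/pipeDevCreate calls; the pipe name, sizes, and error handling here are illustrative and untested, and the pipe driver is assumed to already be installed in the system image.

/* VxWorks-specific sketch: create a named pipe device backed by a
 * message queue, then use the ordinary I/O calls on it.           */
#include <vxWorks.h>
#include <pipeDrv.h>
#include <ioLib.h>
#include <fcntl.h>

void pipe_example(void)
{
    char msg[] = "hello";
    char reply[32];

    /* Create a virtual I/O device "/pipe/demo" that can hold up to
     * 10 messages of at most 32 bytes each.                        */
    if (pipeDevCreate("/pipe/demo", 10, 32) != OK)
        return;

    /* Pipes are then accessed through the standard I/O subsystem. */
    int fd = open("/pipe/demo", O_RDWR, 0);
    if (fd < 0)
        return;

    write(fd, msg, sizeof msg);        /* queue one message             */
    read(fd, reply, sizeof reply);     /* read it back as a byte stream */

    close(fd);
}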

In some cases, I/O hardware may require the existence of OS buffers to manage data transmissions. Buffers can be necessary for I/O device management for a number of reasons; mainly they are needed for the OS to be able to capture data transmitted via block access. The OS stores within buffers the stream of bytes being transmitted to and from an I/O device, independent of whether one of its processes has initiated communication to the device. When performance is an issue, buffers are commonly stored in cache (when available), rather than in slower main memory.
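A minimal sketch of the buffering idea: the OS accumulates bytes arriving from a device (for example, from an interrupt handler) in a ring buffer, so that nothing is lost even if no process is currently reading. All names here are invented for illustration.

#include <stdint.h>
#include <stddef.h>

#define BUF_SIZE 64   /* must be a power of two for the masking below */

/* Simple ring buffer the OS fills as device data arrives and drains
 * when a process finally asks for the data.                          */
struct io_buffer {
    uint8_t  data[BUF_SIZE];
    size_t   head;   /* next slot to write (producer: device side)  */
    size_t   tail;   /* next slot to read  (consumer: process side) */
};

/* Called from the device side (e.g., an ISR) for every byte received. */
static int buffer_put(struct io_buffer *b, uint8_t byte)
{
    if (b->head - b->tail == BUF_SIZE)
        return -1;                        /* buffer full, byte dropped */
    b->data[b->head & (BUF_SIZE - 1)] = byte;
    b->head++;
    return 0;
}

/* Called when a process reads from the device. */
static int buffer_get(struct io_buffer *b, uint8_t *out)
{
    if (b->head == b->tail)
        return -1;                        /* nothing buffered yet */
    *out = b->data[b->tail & (BUF_SIZE - 1)];
    b->tail++;
    return 0;
}

int main(void)
{
    struct io_buffer buf = {0};
    uint8_t byte;

    buffer_put(&buf, 'A');               /* device delivered a byte */
    if (buffer_get(&buf, &byte) == 0)    /* process reads it later  */
        return byte == 'A' ? 0 : 1;
    return 1;
}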

OS Performance:
The two subsystems of an OS that typically impact OS performance the most, and differentiate the performance of one OS from another, are the memory management scheme (specifically the process swapping model implemented) and the scheduler. The performance of one virtual memory-swapping algorithm over another can be compared by the number of page faults they produce, given the same set of memory references (that is, the same number of page frames assigned per process for the exact same process on both OSes). One algorithm can be further tested for performance by providing it with a variety of different memory references and noting the number of page faults for various numbers of page frames assigned per process.
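The page-fault comparison described above can be made concrete with a small simulation: feed the same reference string and the same number of page frames to two replacement algorithms and count the faults each produces. The sketch below compares FIFO against LRU; the reference string is invented for the example.

#include <stdio.h>
#include <string.h>

#define FRAMES 3

/* Count page faults for a reference string using FIFO replacement. */
static int fifo_faults(const int *refs, int n)
{
    int frames[FRAMES], next = 0, faults = 0;
    memset(frames, -1, sizeof frames);            /* -1 marks an empty frame */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[i]) hit = 1;
        if (!hit) {
            frames[next] = refs[i];               /* evict the oldest resident page */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    return faults;
}

/* Count page faults using LRU replacement (last use tracked per frame). */
static int lru_faults(const int *refs, int n)
{
    int frames[FRAMES], last_use[FRAMES], faults = 0;
    memset(frames, -1, sizeof frames);
    memset(last_use, -1, sizeof last_use);
    for (int i = 0; i < n; i++) {
        int hit = -1, victim = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[i]) hit = f;
        if (hit >= 0) {
            last_use[hit] = i;
        } else {
            for (int f = 1; f < FRAMES; f++)      /* evict the least recently used page */
                if (last_use[f] < last_use[victim]) victim = f;
            frames[victim] = refs[i];
            last_use[victim] = i;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    /* Same reference string, same number of frames, for both algorithms. */
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = (int)(sizeof refs / sizeof refs[0]);
    printf("FIFO faults: %d\n", fifo_faults(refs, n));
    printf("LRU  faults: %d\n", lru_faults(refs, n));
    return 0;
}

On this particular reference string with three frames, FIFO happens to produce fewer faults than LRU (9 versus 10), which underscores why the comparison must be run over a variety of memory references rather than a single case.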
While the goal of a scheduling algorithm is to select processes to execute in a scheme that maximizes overall performance, the challenge OS schedulers face is that there are a number of performance indicators. Furthermore, algorithms can have opposite effects on an indicator, even given the exact same processes.
The main performance indicators for scheduling algorithms include:

Throughput: The number of processes being executed by the CPU at any given time. At the OS scheduling level, an algorithm that allows a significant number of larger processes to be executed before smaller processes runs the risk of having a lower throughput. In an SPN (shortest process next) scheme, the throughput may even vary on the same system depending on the size of processes being executed at the moment.

Execution time: The average time it takes for a running process to execute (from start to finish). Here, the size of the process affects this indicator. However, at the scheduling level, an algorithm that allows a process to be continually pre-empted allows for significantly longer execution times. In this case, given the same process, a comparison of a non-preemptable vs. a preemptable scheduler could result in two very different execution times.

Waiting time: The total amount of time a process must wait to run. Again, this depends on whether the scheduling algorithm allows larger processes to be executed before smaller processes. Given a significant number of larger processes executed (for whatever reason), any subsequent processes would have higher wait times. This indicator is also dependent on what criteria determine which process is selected to run in the first place; a process in one scheme may have a lower or higher wait time than if it is placed in a different scheduling scheme.
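As a worked illustration of these indicators, the sketch below takes a fixed set of process burst times and computes the average waiting and turnaround (start-to-finish) times under FCFS (first-come, first-served) and under SPN, assuming all processes arrive at time zero. The burst times are invented for the example.

#include <stdio.h>

#define NPROC 4

/* Given an execution order, compute average waiting and turnaround times,
 * assuming every process arrives at time 0 and runs to completion.        */
static void report(const char *name, const int *burst, const int *order)
{
    int t = 0;
    double wait_sum = 0, turn_sum = 0;
    for (int i = 0; i < NPROC; i++) {
        int b = burst[order[i]];
        wait_sum += t;            /* time spent waiting before it starts */
        t += b;
        turn_sum += t;            /* time from arrival to completion     */
    }
    printf("%s: avg wait = %.2f, avg turnaround = %.2f\n",
           name, wait_sum / NPROC, turn_sum / NPROC);
}

int main(void)
{
    int burst[NPROC] = {8, 4, 9, 5};          /* CPU time each process needs */
    int fcfs[NPROC]  = {0, 1, 2, 3};          /* FCFS: order of arrival      */
    int spn[NPROC]   = {1, 3, 0, 2};          /* SPN: shortest bursts first  */

    report("FCFS", burst, fcfs);
    report("SPN ", burst, spn);
    return 0;
}

With these numbers, SPN yields a lower average waiting time than FCFS (7.50 versus 10.25 time units), at the cost of making the largest processes wait the longest.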
On a final note, while scheduling and memory management are the leading components impacting performance, to get a more accurate analysis of OS performance one must measure the impact of both types of algorithms in an OS, as well as factor in an OS's response time. While no one factor alone determines how well an OS performs, OS performance in general can be implicitly estimated by how hardware resources in the system are utilized for the variety of processes. Given the right processes, the more time a resource spends executing code as opposed to sitting idle can be indicative of a more efficient OS.

Board Support Packages (BSPs)


DEFINITION:
In embedded systems, a board support package (BSP) is an implementation of specific support code (software) for a given (device or motherboard) board that conforms to a given operating system. It is commonly built with a boot loader that contains the minimal device support to load the operating system and device drivers for all the devices on the board.

CONCEPT:
Some suppliers also provide a root file system, a toolchain for building programs to run on the embedded system, and configurations for the devices.

The board support package is an optional component provided by the OS provider, the main purpose of which is simply to provide an abstraction layer between the operating system and generic device drivers.

A BSP allows an OS to be more easily ported to a new hardware environment, because it acts as an integration point in the system of hardware-dependent and hardware-independent source code.

A BSP provides subroutines to upper layers of software that can customize the hardware, and provide flexibility at compile time. Because these routines point to separately compiled device driver code from the rest of the system application software, BSPs provide run-time portability of generic device driver code.

A BSP provides architecture-specific device driver configuration management, and an API for the OS to access generic device drivers. A BSP is also responsible for managing the initialization of the device drivers and OS in the system.

The device configuration management portion of a BSP involves architecture-specific device driver features, such as the constraints of a processor's available addressing modes, endianness, interrupts, and so on, and is designed to provide the most flexibility in porting generic device drivers to a new architecture-based board, with its differing endianness, interrupt scheme, and other architecture-specific features.
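One small, concrete example of the kind of architecture-specific detail a BSP can hide from a generic driver is byte order. The sketch below (all names are invented) has the BSP supply one conversion routine, so the generic driver code stays the same regardless of which endianness the board actually uses.

#include <stdint.h>
#include <stdio.h>

/* --- BSP layer: knows the board's byte order --------------------------- */
/* Hypothetical flag a real BSP would set per board; 1 = big-endian board. */
#define BSP_BIG_ENDIAN 0

/* Convert a 32-bit value from the device's big-endian wire format to the
 * CPU's native order. On a big-endian board this is the identity.         */
static uint32_t bsp_be32_to_cpu(uint32_t v)
{
#if BSP_BIG_ENDIAN
    return v;
#else
    return ((v & 0x000000FFu) << 24) | ((v & 0x0000FF00u) << 8) |
           ((v & 0x00FF0000u) >> 8)  | ((v & 0xFF000000u) >> 24);
#endif
}

/* --- Generic driver layer: no architecture knowledge ------------------- */
static uint32_t driver_read_register(uint32_t raw_from_device)
{
    return bsp_be32_to_cpu(raw_from_device);   /* BSP hides the endianness */
}

int main(void)
{
    printf("register value: 0x%08x\n", driver_read_register(0x12345678u));
    return 0;
}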

BSP WITHIN EMBEDDED SYSTEMS MODEL

Example
The Wind River board support package for the ARM Integrator 920T board contains, among other things, the following elements:

A config.h file, which defines constants such as ROM_SIZE and RAM_HIGH_ADRS (a hypothetical fragment appears after this list).

A Makefile, which defines binary versions of VxWorks ROM images for programming into flash memory.

A bootrom file, which defines the boot line parameters for the board.

A target.ref file, which describes board-specific information such as switch and jumper settings, interrupt levels, and offset bias.

A VxWorks image.

Various C files, including:
flashMem.c: the device driver for the board's flash memory
pciIomapShow.c: mapping file for the PCI bus
primeCellSio.c: TTY driver
sysLib.c: system-dependent routines specific to this board
romInit.s: ROM initialization module for the board; contains entry code for images that start running from ROM
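For a sense of what the config.h constants mentioned above might look like, here is a hypothetical fragment; the values are invented and do not come from the actual Wind River BSP.

/* config.h fragment (illustrative values only) */
#define ROM_BASE_ADRS   0x00000000      /* base address of boot ROM/flash    */
#define ROM_SIZE        0x00100000      /* 1 MB of ROM                       */
#define RAM_LOW_ADRS    0x00201000      /* where the VxWorks image is loaded */
#define RAM_HIGH_ADRS   0x00801000      /* where the boot image is copied    */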
Additionally, the BSP is expected to perform the following operations:

Initialize the processor

Initialize the bus

Initialize the interrupt controller

Initialize the clock

Initialize the RAM settings

Configure the segments

Load and run boot loader from flash
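A hedged sketch of how those steps might appear in a BSP's hardware-initialization routine; every function here is a hypothetical placeholder for board-specific register programming, not a real VxWorks or Wind River API.

#include <stdio.h>

/* Hypothetical placeholders for board-specific initialization code;
 * a real BSP would program hardware registers in each of these.     */
static void cpu_init(void)        { puts("init processor");            }
static void bus_init(void)        { puts("init bus");                  }
static void intctl_init(void)     { puts("init interrupt controller"); }
static void clock_init(void)      { puts("init clock");                }
static void ram_init(void)        { puts("init RAM settings");         }
static void segments_init(void)   { puts("configure segments");        }
static void boot_from_flash(void) { puts("load and run boot loader");  }

/* The BSP runs the steps in the order listed above. */
int main(void)
{
    cpu_init();
    bus_init();
    intctl_init();
    clock_init();
    ram_init();
    segments_init();
    boot_from_flash();
    return 0;
}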
