
Chapter 02 (Operating-System Structures)

Operating-System Services
An operating system provides an environment for the execution of programs. It provides certain
services to programs and to the users of those programs. These operating system services are
provided for the convenience of the programmer, to make the programming task easier.

Figure: A view of operating system services.


One set of operating-system services provides functions that are helpful to the user:
User interface - Almost all operating systems have a user interface (UI). This interface can
take several forms.
- Command-line interface (CLI) - which uses text commands and a method for entering
them.
- Batch interface - in which commands and directives to control those commands are
entered into files, and those files are executed.
- Graphical user interface (GUI) - the interface is a window system with a pointing
device to direct I/O, choose from menus, and make selections and a keyboard to enter
text.
Program execution - The system must be able to load a program into memory and to run
that program, and to end execution, either normally or abnormally (indicating an error).
I/O operations - A running program may require I/O, which may involve a file or an I/O
device, such as recording to a CD or DVD drive.

File-system manipulation - The file system is of particular interest. Programs need to read
and write files and directories, create and delete them, search them, list file information,
and manage permissions.
Communications - Processes may exchange information, on the same computer or between
computers over a network. Communications may be implemented via
- Shared memory - in which two or more processes read and write to a shared section
of memory.
- Message passing - in which packets of information in predefined formats are moved
between processes by the operating system.
Error detection - The OS needs to be constantly aware of possible errors.
- Errors may occur in the CPU and memory hardware, in I/O devices, and in user programs.
- For each type of error, the OS should take the appropriate action to ensure correct and
consistent computing.
- Debugging facilities can greatly enhance the users' and programmers' abilities to
use the system efficiently.
Another set of operating system functions exists not for helping the user but rather for
ensuring the efficient operation of the system itself.
Resource allocation - When multiple users or multiple jobs are running concurrently, resources
must be allocated to each of them.
- Many types of resources - CPU cycles, main memory, file storage, I/O devices.
Accounting - To keep track of which users use how much and what kinds of computer
resources. This record keeping may be used for accounting or simply for accumulating usage
statistics. Usage statistics may be a valuable tool for researchers who wish to reconfigure the
system to improve computing services.
Protection and security - The owners of information stored in a multiuser or networked
computer system may want to control use of that information; concurrent processes should
not interfere with each other.
- Protection involves ensuring that all access to system resources is controlled.
- Security of the system from outsiders requires user authentication, extends to
defending external I/O devices from invalid access attempts.

User and Operating-System Interface


Command-Line Interface, or Command Interpreter - It allows direct command entry. Some
operating systems include the command interpreter in the kernel. Others, such as Windows
and UNIX, treat the command interpreter as a special program that is running when a job is
initiated or when a user first logs on. On systems with multiple command interpreters to
choose from, the interpreters are known as shells.
For example, on UNIX and Linux systems, a user may choose among several different
shells, including the Bourne shell, C shell, Bourne-Again shell, Korn shell, and others.
Third-party shells and free user-written shells are also available.
Many of the commands given at this level manipulate files: create, delete, list, print, copy,
execute, and so on.
Graphical User Interface, or GUI - A second strategy for interfacing with the operating
system is through a user-friendly graphical user interface, or GUI. Here, rather than entering
commands directly via a command-line interface, users employ a mouse-based window-and-menu system characterized by a desktop metaphor.
Icons represent files, programs, actions, etc. Various mouse buttons over objects in the
interface cause various actions (provide information, show options, execute a function, open
a directory, known as a folder).
Most mobile systems, such as smartphones and handheld tablet computers, typically use a
touchscreen interface. Here, users interact by making gestures on the touchscreen; for
example, pressing and swiping fingers across the screen.
Many systems now include both CLI and GUI interfaces.
Microsoft Windows is GUI with CLI command shell.
Apple Mac OS X is Aqua GUI interface with UNIX kernel underneath and shells
available.
UNIX and Linux have CLI with optional GUI interfaces (Common Desktop
Environment (CDE), K Desktop Environment (KDE), and GNOME by GNU project).

System Calls
System calls provide an interface to the services made available by an operating system. They
are typically written in a high-level language (C or C++), though certain low-level tasks may
have to be written using assembly-language instructions. Programs mostly access system calls
via a high-level Application
Programming Interface (API) rather than direct system call use. The API specifies a set of
functions that are available to an application programmer, including the parameters that are
passed to each function and the return values the programmer can expect.
The run-time support system (a set of functions built into libraries included with a compiler)
provides a system call interface that serves as the link to system calls made available by the
operating system. The system-call interface intercepts function calls in the API and invokes the
necessary system calls within the operating system. Typically, a number is associated with each
system call, and the system-call interface maintains a table indexed according to these numbers.

The system-call interface then invokes the intended system call in the operating-system kernel
and returns the status of the system call and any return values. The caller need know nothing
about how the system call is implemented; it just needs to obey the API and understand what
the OS will do as a result of the call. The figure illustrates how the operating system handles
a user application invoking the open() system call.
Fig: The handling of a user application invoking the open() system call.
EXAMPLE OF STANDARD C LIBRARY
The standard C library provides a portion of the system-call interface for many versions of
UNIX and Linux. As an example, let's assume a C program invokes the printf() statement. The
C library intercepts this call and invokes the necessary system call (or calls) in the operating
system; in this instance, the write() system call. The C library takes the value returned by
write() and passes it back to the user program. This is shown below:

Types of System Calls


System calls can be grouped roughly into six major categories: process control, file
manipulation, device manipulation, information maintenance, communications, and
protection.
Process control
End, abort
Load, execute
Create process, terminate process
Get process attributes, set process attributes
Wait for time
Wait event, signal event
Allocate and free memory
File management
create file, delete file
open, close
read, write, reposition
get file attributes, set file attributes
Device management
request device, release device
read, write, reposition
get device attributes, set device attributes
logically attach or detach devices
Information maintenance
Get time or date, set time or date
Get system data, set system data
Get process, file, or device attributes
Set process, file, or device attributes
Communications
Create, delete communication connection
Send, receive messages
Transfer status information
Attach or detach remote devices
Protection
Control access to resources
Get and set permissions
Allow and deny user access

Chapter 03 (Processes)

Process
A process is a program in execution. The execution of a process must progress in a sequential
fashion. A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
A process is more than the program code, which is also called
the text section.
It includes the current activity, as represented by the value of
the program counter and the contents of the processor's registers.
A process generally also includes the process stack, which
contains temporary data (such as function parameters, return
addresses, and local variables), and a data section, which
contains global variables. A process may also include a heap,
which is memory that is dynamically allocated during process run
time.
A program becomes a process when an executable file is loaded
into memory.
Fig: Process in Memory

Process State
As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. A process may be in one of the following states:
1. New: The process is being created.
2. Running: Instructions are being executed.
3. Waiting: The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.

Fig: Diagram of process state

Process Control Block


Each process is represented in the operating system by a process control
block (PCB), also called a task control block. The PCB is the data structure
used by the operating system to group all the information it needs about a
particular process. A PCB contains many pieces of information associated
with a specific process, which are described below.
1. Process state
The state may be new, ready, running, waiting, halted, and so on.
2. Program counter
The counter indicates the address of the next instruction to be
executed for this process.
3. CPU registers
Fig: Process Control Block (PCB)
The registers vary in number and type, depending on the computer architecture. They
include accumulators, index registers, stack pointers, and general-purpose registers,
plus any condition-code information. Along with the program counter, this state
information must be saved when an interrupt occurs, to allow the process to be
continued correctly afterward.
4. CPU-scheduling information
This information includes a process priority, pointers to scheduling queues, and any
other scheduling parameters.
5. Memory-management information
This information may include such items as the value of the base and limit registers
and the page tables, or the segment tables, depending on the memory system used by
the operating system.
6. Accounting information
This information includes the amount of CPU and real time used, time limits, account
numbers, job or process numbers, and so on.
7. I/O status information
This information includes the list of I/O devices allocated to the process, a list of open
files, and so on.

Threads
A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of
registers. Traditional (heavyweight) processes have a single thread of control - There is one
program counter, and one sequence of instructions that can be carried out at any given time.
Multi-threaded applications have multiple threads within a single process, each having their
own program counter, stack and set of registers, but sharing common code, data, and certain
structures such as open files. Threads are very useful in modern programming whenever a
process has multiple tasks to perform independently of the others.

Process Scheduling
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of a multiprogramming operating system. Such operating
systems allow more than one process to be loaded into the executable memory at a time and
loaded process shares the CPU using time multiplexing.

Scheduling queues
Scheduling queues refer to queues of processes or devices. When a process enters the
system, it is put into a job queue. This queue consists of all processes in the
system. The operating system also maintains other queues, such as device queues. A device
queue is a queue of processes waiting for a particular I/O device. Each device has
its own device queue.
The figure shows the queuing diagram of process scheduling.
Queues are represented by rectangular boxes.
The circles represent the resources that serve the queues.
The arrows indicate the flow of processes in the system.

Fig: Queuing-diagram representation of process scheduling.

Queues are of two types:


Ready queue - set of all processes residing in main memory, ready and waiting to execute.
Device queue - set of processes waiting for an I/O device.
A newly arrived process is put in the ready queue. Processes wait in the ready queue until
the CPU is allocated to them. Once the CPU is assigned to a process, that process executes.
While the process is executing, any one of the following events can occur.
The process could issue an I/O request and then be placed in an I/O queue.
The process could create a new subprocess and wait for its termination.
The process could be removed forcibly from the CPU, as a result of an interrupt, and put back
in the ready queue.

Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types:
1. Long Term Scheduler
It is also called the job scheduler. The long-term scheduler determines which programs are
admitted to the system for processing. The job scheduler selects processes from the queue
and loads them into memory for execution, where they become eligible for CPU
scheduling. The primary objective of the job scheduler is to provide a balanced mix of
jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of
multiprogramming. If the degree of multiprogramming is stable, then the average rate
of process creation must be equal to the average departure rate of processes leaving
the system.
2. Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It carries out the change of a process from the
ready state to the running state. The CPU scheduler selects a process from among the
processes that are ready to execute and allocates the CPU to one of them. The short-term
scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained
decision of which process to execute next. The short-term scheduler is faster than the
long-term scheduler.
3. Medium Term Scheduler
Medium-term scheduling is part of swapping. It removes processes from
memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is
in charge of handling the swapped-out processes. A running process may become
suspended if it makes an I/O request. Suspended processes cannot make any progress
towards completion. In this condition, to remove the process from memory and make
space for other processes, the suspended process is moved to secondary storage.
This process is called swapping, and the process is said to be swapped out or rolled
out. Swapping may be necessary to improve the process mix.

Context Switch
When an interrupt occurs, the system needs to save the current context of the process running
on the CPU so that it can restore that context when its processing is done, essentially suspending
the process and then resuming it. The context is represented in the PCB of the process. It
includes the value of the CPU registers, the process state, and memory-management
information. Generically, we perform a state save of the current state of the CPU, be it in kernel
or user mode, and then a state restore to resume operations.
Switching the CPU to another process requires performing a state save of the current process
and a state restore of a different process. This task is known as a context switch. When a context
switch occurs, the kernel saves the context of the old process in its PCB and loads the saved
context of the new process scheduled to run. Context-switch time is pure overhead, because
the system does no useful work while switching. Switching speed varies from machine to
machine, depending on the memory speed, the number of registers that must be copied, and
the existence of special instructions (such as a single instruction to load or store all registers). A
typical speed is a few milliseconds.

Fig: Diagram showing CPU switch from process to process.

Chapter 04 (Threads)
Thread
A thread is a basic unit of CPU utilization. It comprises a thread ID, a program counter, a register
set, and a stack. It shares with other threads belonging to the same process its code section,
data section, and other operating-system resources, such as open files and signals. A traditional
(or heavyweight) process has a single thread of control. If a process has multiple threads of
control, it can perform more than one task at a time. The figure illustrates the difference between
a traditional single-threaded process and a multithreaded process.

Figure: Single-threaded and multithreaded processes.

Example:
A web browser might have one thread displaying images or text while another thread retrieves
data from the network. A word processor may have a thread for displaying
graphics, another thread for responding to keystrokes from the user, and a third thread for
performing spelling and grammar checking in the background.
A busy web server may have several (perhaps thousands of) clients concurrently accessing it. If
the web server ran as a traditional single-threaded process, it would be able to service only one
client at a time, and a client might have to wait a very long time for its request to be serviced.

If the web-server process is multithreaded, the server will create a separate thread that listens
for client requests. When a request is made, rather than creating another process, the server
creates a new thread to service the request and resumes listening for additional requests. This
is illustrated in Figure.

Figure: Multithreaded server architecture.


Typically, RPC (remote procedure call) servers are multithreaded. When a server receives a
message, it services the message using a separate thread. This allows the server to service
several concurrent requests.
Finally, most operating-system kernels are now multithreaded. Several threads operate in the
kernel, and each thread performs a specific task, such as managing devices, managing memory,
or interrupt handling. For example, Solaris has a set of threads in the kernel specifically for
interrupt handling; Linux uses a kernel thread for managing the amount of free memory in the
system.

Benefits
The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness - Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy operation. This is
especially important for user interfaces.
2. Resource Sharing - Threads share the memory and the resources of the process to which
they belong by default. The benefit of sharing code and data is that it allows an application
to have several different threads of activity within the same address space.
3. Economy - Allocating memory and resources for process creation is costly. Because threads
share the resources of their process, creating a thread is cheaper than creating a process,
and thread switching has lower overhead than context switching between processes. In
Solaris, for example, creating a process is about thirty times slower than creating a thread,
and context switching is about five times slower.
4. Scalability - The benefits of multithreading can be even greater in a multiprocessor
architecture, where threads may be running in parallel on different processing cores. A
single-threaded process can run on only one processor, regardless of how many are
available.

Chapter 03 (Process Scheduling)


CPU I/O Burst Cycle
The execution of a process consists of a cycle of CPU execution
and I/O wait.
- A process execution begins with a CPU burst that is
followed by an I/O burst, which is followed by another
CPU burst and so on. The final CPU burst ends with a
system request to terminate the execution.
The CPU burst durations vary greatly from process to process
and from computer to computer.
- An I/O bound program has many short CPU bursts.
- A CPU bound program might have a few long CPU
bursts.

Fig: a CPU and I/O bursts

CPU Scheduler
Whenever the CPU becomes idle, the OS must select one of the processes in the ready queue to
be executed. The selection process is carried out by the short-term scheduler, or CPU scheduler.
The CPU scheduler selects a process from among the processes that are ready to execute and
allocates the CPU to one of them.

Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the
result of an I/O request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when
an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at
completion of I/O)
4. When a process terminates
Scheduling under 1 and 4 is nonpreemptive.
- Once a process is in the running state, it will continue until it terminates or blocks itself.
Scheduling under 2 and 3 is preemptive.
- Currently running process may be interrupted and moved to the Ready state by OS.
- Allows for better service since any one process cannot monopolize the processor for very
long

Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
- Switching context
- Switching to user mode
- Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The
time it takes for the dispatcher to stop one process and start another running is known as the
dispatch latency.

Scheduling Criteria
The scheduling criteria include the following:
CPU utilization
- We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range
from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily loaded system).
Throughput
- The number of processes that are completed per time unit is called the throughput. For long
processes, this rate may be one process per hour; for short transactions, it may be ten
processes per second.
Turnaround time
- The interval from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time
- The amount of time that a process spends waiting in the ready queue. Waiting time is
the sum of the periods spent waiting in the ready queue.
Response time
- The response time is the time it takes to start responding.

Scheduling Algorithm
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU. There are many different CPU-scheduling algorithms.
First-Come, First-Served Scheduling (FCFS)
- Advantages
Easy to implement.
Minimum overhead.
- Disadvantages
Average waiting is more.
Convoy effect occurs: even a very small process must wait for its turn to
use the CPU. A short process behind a long process results in lower
CPU utilization.
Shortest-Job-First Scheduling (SJF)
- Advantages
Minimum average waiting time.
SJF algorithm is optimal.
- Disadvantages
Indefinite postponement of some jobs.
Difficulty in knowing the length of the next CPU request.
Priority Scheduling
- Advantage
Good response for the highest priority processes.
- Disadvantage
Starvation, or indefinite blocking: low-priority processes may never execute.
The solution to starvation is aging: as time progresses, increase the
priority of the process.
Round Robin
- Advantages
Provides fair CPU allocation.
Good response time for short processes.
- Disadvantages
Requires selection of good time slice.
Throughput is low if time quantum is too small.

Load Balancing
Push Migration - A specific task periodically checks the load on each processor and, if it
finds an imbalance, pushes processes from overloaded processors to idle or less-busy ones.
Pull Migration - An idle processor pulls a waiting task from a busy processor.

Algorithm Evaluation
1. Deterministic Modeling
2. Queueing Models
3. Simulations
4. Implementation
