
BANSAL INSTITUTE OF SCIENCE

& TECHNOLOGY BHOPAL

Department of Computer Science Engineering

SESSION 2010-2011

SEMINAR/GROUP DISCUSSION

On

“OPERATING SYSTEM”

SUBMITTED TO:                          SUBMITTED BY:
PROF. SHAHEEN AYYUB                    INDRAJEET GOUR
(H.O.D., C.S.E. DEPT.)                 (0112CS071110)


OPERATING SYSTEM

What is an Operating System?

The 1960s definition of an operating system was “the software that controls the
hardware”. Today, however, because of microcode, we need a better definition: we
see an operating system as the programs that make the hardware usable. In brief, an
operating system is the set of programs that controls a computer. Some examples of
operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago,
OS/2, MacOS, VMS, MVS, and VM.

Controlling the computer involves software at several levels. We will differentiate
kernel services, library services, and application-level services, all of which are part of
the operating system. Processes run applications, which are linked together with
libraries that perform standard services. The kernel supports the processes by
providing a path to the peripheral devices. The kernel responds to service calls from
the processes and to interrupts from the devices. The core of the operating system is
the kernel, a control program that functions in privileged state (an execution context
that allows all hardware instructions to be executed), reacting to interrupts from
external devices and to service requests and traps from processes. Generally, the
kernel is a permanent resident of the computer. It creates and terminates processes
and responds to their requests for service.

Operating Systems are resource managers. The main resource is computer hardware in
the form of processors, storage, input/output devices, communication devices, and
data. Some of the operating system functions are: implementing the user interface,
sharing hardware among users, allowing users to share data among themselves,
preventing users from interfering with one another, scheduling resources among users,
facilitating input/output, recovering from errors, accounting for resource usage,
facilitating parallel operations, organizing data for secure and rapid access, and
handling network communications.

Objectives of Operating Systems

Modern operating systems generally have the following three major goals. Operating
systems generally accomplish these goals by running processes in a low-privilege state
and providing service calls that invoke the operating-system kernel in a high-privilege
state.

 To hide details of the hardware by creating abstractions


An abstraction is software that hides lower-level details and provides a set of
higher-level functions. An operating system transforms the physical world of
devices, instructions, memory, and time into a virtual world that is the result of
abstractions built by the operating system. There are several reasons for
abstraction.
First, the code needed to control peripheral devices is not standardized.
Operating systems provide subroutines called device drivers that perform
operations on behalf of programs, for example input/output operations.
Second, the operating system introduces new functions as it abstracts the
hardware. For instance, the operating system introduces the file abstraction so
that programs do not have to deal with disks.
Third, the operating system transforms the computer hardware into multiple
virtual computers, each belonging to a different program. Each program that is
running is called a process. Each process views the hardware through the lens
of abstraction.
Fourth, the operating system can enforce security through abstraction.

 To allocate resources to processes (Manage resources)


An operating system controls how processes (the active agents) may
access resources (passive entities).

 Provide a pleasant and effective user interface


The user interacts with the operating system through the user interface and is
usually interested in the “look and feel” of the operating system. The most
important components of the user interface are the command interpreter, the
file system, on-line help, and application integration. The recent trend has been
toward increasingly integrated graphical user interfaces that encompass the
activities of multiple processes on networks of computers.

One can view operating systems from two points of view: resource manager and
extended machine. From the resource-manager point of view, operating systems
manage the different parts of the system efficiently; from the extended-machine point
of view, operating systems provide users with a virtual machine that is more
convenient to use. Structurally, operating systems can be designed as a monolithic
system, a hierarchy of layers, a virtual machine system, an exokernel, or using the
client-server model. The basic concepts of operating systems are processes, memory
management, I/O management, the file system, and security.

Definition of Process

The notion of process is central to the understanding of operating systems. There are
quite a few definitions presented in the literature, but no "perfect" definition has yet
appeared.

Definition

The term "process" was first used by the designers of MULTICS in the 1960s. Since
then, the term process has been used somewhat interchangeably with 'task' or 'job'.
The process has been given many definitions, for instance:

 A program in Execution.
 An asynchronous activity.
 The 'animated spirit' of a procedure in execution.
 The entity to which processors are assigned.
 The 'dispatchable' unit.

and many more definitions have been given. As we can see from the above, there is no
universally agreed-upon definition, but "a program in execution" seems to be the one
most frequently used, and it is the concept we will use in the present study of
operating systems.

Now that we have agreed upon a definition of process, the question is: what is the
relation between a process and a program? Is it the same beast with a different name,
so that when the beast is sleeping (not executing) it is called a program and when it is
executing it becomes a process? To be precise, a process is not the same as a program.
In the following discussion we point out some of the differences.

A process is more than program code. A process is an 'active' entity, as opposed to a
program, which is considered a 'passive' entity. As we all know, a program is an
algorithm expressed in some suitable notation (e.g., a programming language). Being
passive, a program is only a part of a process. A process, on the other hand, includes:

 Current value of the program counter (PC)
 Contents of the processor's registers
 Values of the variables
 The process stack (SP), which typically contains temporary data such as
subroutine parameters, return addresses, and temporary variables
 A data section that contains global variables

A process is the unit of work in a system.

In the process model, all software on the computer is organized into a number of
sequential processes. A process includes the PC, registers, and variables.
Conceptually, each process has its own virtual CPU. In reality, the CPU switches back
and forth among processes. (This rapid switching back and forth is called
multiprogramming.)

Process State

The process state consists of everything necessary to resume the process's execution
if it is somehow put aside temporarily. The process state consists of at least the
following:

 Code for the program
 Program's static data
 Program's dynamic data
 Program's procedure call stack
 Contents of general-purpose registers
 Contents of the program counter (PC)
 Contents of the program status word (PSW)
 Operating-system resources in use

Process Control Block

A process in an operating system is represented by a data structure known as a
process control block (PCB) or process descriptor. The PCB contains important
information about the specific process, including:

 The current state of the process, i.e., whether it is ready, running, waiting, or
whatever.
 A unique identification of the process, in order to track "which is which".
 A pointer to the parent process.
 Similarly, a pointer to the child process (if it exists).
 The priority of the process (a part of the CPU-scheduling information).
 Pointers to locate the memory of the process.
 A register save area.
 The processor it is running on.

The PCB is a central store of information that allows the operating system to locate
key information about a process. Thus, the PCB is the data structure that defines a
process to the operating system.

Threads

 Threads
 Processes Vs Threads
 Why Threads?
 User-Level Threads
 Kernel-Level Threads
 Advantages of Threads over Multiple Processes
 Disadvantages of Threads over Multiprocesses
 Applications that benefit from Threads
 Applications that cannot benefit from Threads
 Resources used in Thread creation and Process Creation
 Context Switch
 Major Steps of Context Switching
 Action of Kernel to Context switch among threads
 Action of kernel to Context switch among processes

Threads

Despite the fact that a thread must execute in a process, the process and its associated
threads are different concepts. Processes are used to group resources together; threads
are the entities scheduled for execution on the CPU.
A thread is a single sequential stream of execution within a process. Because threads
have some of the properties of processes, they are sometimes called lightweight
processes. Within a process, threads allow multiple streams of execution. In many
respects, threads are a popular way to improve applications through parallelism. The
CPU switches rapidly back and forth among the threads, giving the illusion that the
threads are running in parallel. Like a traditional process (i.e., a process with one
thread), a thread can be in any of several states (running, blocked, ready, or
terminated). Each thread has its own stack, because a thread will generally call
different procedures and thus have a different execution history. In an operating
system that has a thread facility, the basic unit of CPU utilization is a thread. A thread
has, or consists of, a program counter (PC), a register set, and a stack space. Threads
are not independent of one another the way processes are; as a result, threads share
with the other threads of their process (also known as a task) the code section, the
data section, and OS resources such as open files and signals.

Processes Vs Threads

As mentioned earlier, in many respects threads operate in the same way as processes.
Some of the similarities and differences are:

Similarities

 Like processes, threads share the CPU, and only one thread is active (running)
at a time.
 Like processes, threads within a process execute sequentially.
 Like processes, threads can create children.
 And like processes, if one thread is blocked, another thread can run.

Differences

 Unlike processes, threads are not independent of one another.
 Unlike processes, all threads can access every address in the task.
 Unlike processes, threads are designed to assist one another. Note that
processes might or might not assist one another, because processes may
originate from different users.

Why Threads?

Following are some reasons why we use threads in designing operating systems.

1. A process with multiple threads makes a great server, for example a print
server.
2. Because threads can share common data, they do not need to use interprocess
communication.
3. Because of their very nature, threads can take advantage of multiprocessors.

Threads are cheap in the sense that:

1. They only need a stack and storage for registers; therefore, threads are cheap
to create.
2. Threads use very few resources of the operating system in which they are
working. That is, threads do not need a new address space, global data,
program code, or operating-system resources.
3. Context switches are fast when working with threads, because we only have to
save and/or restore the PC, SP, and registers.

But this cheapness does not come for free: the biggest drawback is that there is no
protection between threads.

User-Level Threads

User-level threads are implemented in user-level libraries, rather than via system
calls, so thread switching does not need to call the operating system or cause an
interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and
manages the process as if it were single-threaded.

Advantages:

The most obvious advantage of this technique is that a user-level threads package can
be implemented on an Operating System that does not support threads. Some other
advantages are

 User-level threads do not require modifications to the operating system.
 Simple Representation:
    Each thread is represented simply by a PC, registers, a stack, and a small
control block, all stored in the user process's address space.
 Simple Management:
    This simply means that creating a thread, switching between threads, and
synchronizing between threads can all be done without intervention of the
kernel.
 Fast and Efficient:
    Thread switching is not much more expensive than a procedure call.

Disadvantages:

 There is a lack of coordination between threads and the operating-system
kernel. Therefore, the process as a whole gets one time slice, irrespective of
whether the process has one thread or 1000 threads within it. It is up to each
thread to relinquish control to the other threads.
 User-level threads require non-blocking system calls, i.e., a multithreaded
kernel. Otherwise, the entire process will block in the kernel even if there are
runnable threads left in the process. For example, if one thread causes a page
fault, the whole process blocks.

Kernel-Level Threads

In this method, the kernel knows about and manages the threads. No runtime system
is needed in this case. Instead of a thread table in each process, the kernel has a thread
table that keeps track of all the threads in the system. In addition, the kernel also
maintains the traditional process table to keep track of processes. The operating-
system kernel provides system calls to create and manage threads.


Advantages:

 Because the kernel has full knowledge of all threads, the scheduler may decide
to give more time to a process having a large number of threads than to a
process having a small number of threads.
 Kernel-level threads are especially good for applications that frequently block.

Disadvantages:

 Kernel-level threads are slow and inefficient. For instance, thread operations
are hundreds of times slower than user-level thread operations.
 Since the kernel must manage and schedule threads as well as processes, it
requires a full thread control block (TCB) for each thread to maintain
information about it. As a result there is significant overhead and increased
kernel complexity.

CPU/Process Scheduling

The assignment of physical processors to processes allows processors to accomplish
work. The problem of determining when processors should be assigned, and to which
processes, is called processor scheduling or CPU scheduling.
When more than one process is runnable, the operating system must decide which
one to run first. The part of the operating system concerned with this decision is
called the scheduler, and the algorithm it uses is called the scheduling algorithm.

Goals of Scheduling (objectives)

In this section we try to answer the following question: what does the scheduler try to achieve?

Many objectives must be considered in the design of a scheduling discipline. In
particular, a scheduler should consider fairness, efficiency, response time, turnaround
time, throughput, etc. Some of these goals depend on the system one is using, for
example a batch system, an interactive system, or a real-time system, but there are
also some goals that are desirable in all systems.


General Goals

Fairness
        Fairness is important under all circumstances. A scheduler makes sure that each
process gets its fair share of the CPU and that no process suffers indefinite
postponement. Note that giving equivalent or equal time is not necessarily fair; think
of safety control and payroll at a nuclear plant.

Policy Enforcement
        The scheduler has to make sure that the system's policy is enforced. For
example, if the local policy is safety, then the safety-control processes must be able to
run whenever they want to, even if it means a delay in the payroll processes.

Efficiency
        The scheduler should keep the system (in particular, the CPU) busy one
hundred percent of the time when possible. If the CPU and all the input/output
devices can be kept running all the time, more work gets done per second than if
some components are idle.

Response Time 
        A scheduler should minimize the response time for interactive users.

Turnaround
        A scheduler should minimize the time batch users must wait for an output.
Throughput
        A scheduler should maximize the number of jobs processed per unit time.

A little thought will show that some of these goals are contradictory. It can be shown
that any scheduling algorithm that favors some class of jobs hurts another class of
jobs. The amount of CPU time available is finite, after all.
 

Preemptive Vs Nonpreemptive Scheduling

The Scheduling algorithms can be divided into two categories with respect to how
they deal with clock interrupts.

Nonpreemptive Scheduling

A scheduling discipline is nonpreemptive if, once a process has been given the CPU,
the CPU cannot be taken away from that process.

Following are some characteristics of nonpreemptive scheduling

1. In a nonpreemptive system, short jobs are made to wait by longer jobs, but the
overall treatment of all processes is fair.
2. In a nonpreemptive system, response times are more predictable because
incoming high-priority jobs cannot displace waiting jobs.
3. In nonpreemptive scheduling, a scheduler executes jobs in the following two
situations:
a. when a process switches from the running state to the waiting state, or
b. when a process terminates.

Preemptive Scheduling

A scheduling discipline is preemptive if the CPU can be taken away from a process
after it has been given to it.

The strategy of allowing processes that are logically runnable to be temporarily
suspended is called preemptive scheduling, and it is in contrast to the "run to
completion" method.

 
Deadlock

A set of processes is in a deadlock state if each process in the set is waiting for an
event that can be caused only by another process in the set. In other words, each
member of the set of deadlocked processes is waiting for a resource that can be
released only by a deadlocked process. None of the processes can run, none of them
can release any resources, and none of them can be awakened. It is important to note
that the number of processes and the number and kind of resources possessed and
requested are unimportant.

The resources may be either physical or logical. Examples of physical resources are
printers, tape drives, memory space, and CPU cycles. Examples of logical resources
are files, semaphores, and monitors.

The simplest example of deadlock is where process 1 has been allocated non-
shareable resource A, say a tape drive, and process 2 has been allocated non-shareable
resource B, say a printer. Now, if process 1 needs resource B (the printer) to proceed
and process 2 needs resource A (the tape drive) to proceed, and these are the only two
processes in the system, each blocks the other and all useful work in the system stops.
This situation is termed deadlock. The system is in a deadlock state because each
process holds a resource being requested by the other process, and neither process is
willing to release the resource it holds.

Preemptable and Nonpreemptable Resources

Resources come in two flavors: preemptable and nonpreemptable. A preemptable
resource is one that can be taken away from the process with no ill effects. Memory is
an example of a preemptable resource. On the other hand, a nonpreemptable resource
is one that cannot be taken away from a process without causing ill effects. For
example, CD recorders are not preemptable at an arbitrary moment.
Reallocating resources can resolve deadlocks that involve preemptable resources.
Deadlocks that involve nonpreemptable resources are difficult to deal with.

Necessary and Sufficient Deadlock Conditions


Coffman (1971) identified four (4) conditions that must hold simultaneously for there
to be a deadlock.

1. Mutual Exclusion Condition


The resources involved are non-shareable.
Explanation: At least one resource must be held in a non-shareable mode, that is,
only one process at a time can claim exclusive control of the resource. If another
process requests that resource, the requesting process must be delayed until the
resource has been released.

2. Hold and Wait Condition


Requesting processes hold resources already allocated to them while waiting for the
requested resources.
Explanation: There must exist a process that is holding a resource already allocated
to it while waiting for additional resources that are currently being held by other
processes.

3. No-Preemption Condition
Resources already allocated to a process cannot be preempted.
Explanation: Resources cannot be taken away from a process; they are released only
when the process holding them uses them to completion or releases them voluntarily.
 
4. Circular Wait Condition
The processes in the system form a circular list or chain where each process in
the list is waiting for a resource held by the next process in the list.

As an example, consider a traffic deadlock at a road intersection.

Consider each section of the street as a resource.


1. The mutual-exclusion condition applies, since only one vehicle can be on a
section of the street at a time.
2. The hold-and-wait condition applies, since each vehicle is occupying a section
of the street and waiting to move on to the next section.
3. The no-preemption condition applies, since a section of the street occupied by
a vehicle cannot be taken away from it.
4. The circular-wait condition applies, since each vehicle is waiting on the next
vehicle to move. That is, each vehicle in the traffic is waiting for a section of
street held by the next vehicle.

The simple rule to avoid traffic deadlock is that a vehicle should only enter an
intersection if it is assured that it will not have to stop inside the intersection.

It is not possible to have a deadlock involving only a single process. Deadlock
involves a circular hold-and-wait condition among two or more processes, and a
single process cannot be waiting for a resource that it itself holds. In addition,
deadlock is not possible between two threads in a process, because it is the process
that holds resources, not the thread; that is, each thread has access to the resources
held by the process.
