
Operating Systems

Reentrant code is non-self-modifying code; it never changes during execution. Thus, two or more processes can
execute the same code at the same time. Each process has its own copy of registers and data storage to hold the
data for the process's execution. The data for two different processes will, of course, be different.

Belady's Anomaly: (Virtual Memory Concept)


The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm. A FIFO replacement algorithm
associates with each page the time when that page was brought into memory. When a page must be replaced, the
oldest page is chosen.
Notice that, even if we select for replacement a page that is in active use, everything still works correctly. After
we replace an active page with a new one, a fault occurs almost immediately to retrieve the active page. Some
other page will need to be replaced to bring the active page back into memory. Thus, a bad replacement choice
increases the page-fault rate and slows process execution. It does not, however, cause incorrect execution.
This most unexpected result is known as Belady's anomaly: for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases. We would expect that giving more memory
to a process would improve its performance. In some early research, investigators noticed that this assumption
was not always true, and Belady's anomaly was discovered as a result.
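The anomaly is easy to reproduce by simulation. The sketch below (plain Python; function and variable names are my own) counts FIFO page faults for the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: with 3 frames it incurs 9 faults, but with 4 frames it incurs 10.

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO page replacement."""
    frames = deque()          # oldest page sits at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the page that has been resident longest
            frames.append(page)
    return faults

# The classic reference string used to demonstrate Belady's anomaly:
ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))  # 9 faults with 3 frames
print(fifo_faults(ref, 4))  # 10 faults with 4 frames: more memory, more faults
```

Note that stack algorithms such as LRU cannot exhibit this behavior; the anomaly is specific to algorithms like FIFO.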

Binary semaphore: (Process Synchronization Concept)


A cooperating process is one that can affect or be affected by other processes executing in the system.
Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed
to share data only through files or messages. The former case is achieved through the use of lightweight processes
or threads.

The Critical-Section Problem


Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a
critical section, in which the process may be changing common variables, updating a table, writing a file, and so
on. The important feature of the system is that, when one process is executing in its critical section, no other
process is to be allowed to execute in its critical section. That is, no two processes are executing in their critical
sections at the same time. The critical-section problem is to design a protocol that the processes can use to
cooperate. Each process must request permission to enter its critical section. The section of code implementing
this request is the entry section. The critical section may be followed by an exit section. The remaining code is
the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their
critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then
only those processes that are not executing in their remainder sections can participate in the decision on which will enter
its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their
critical sections after a process has made a request to enter its critical section and before that request is granted.
Two general approaches are used to handle critical sections in operating systems:
(1) Preemptive kernels and (2) Non-Preemptive kernels.
A preemptive kernel allows a process to be preempted while it is running in kernel mode.
A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will
run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.

Peterson's Solution for the Critical-Section Problem
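Peterson's solution restricts attention to two processes that share two variables: a boolean array flag (process i sets flag[i] when it wants to enter) and an integer turn (whose turn it is to defer). A process enters its critical section only when the other process is not interested or it is its own turn. The Python demo below is an illustrative sketch only: Peterson's algorithm assumes sequentially consistent loads and stores, which CPython's interpreter lock happens to provide here; it is not how one would synchronize real Python threads.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-waits resolve quickly

flag = [False, False]  # flag[i] is True when process i wants to enter
turn = 0               # whose turn it is to defer to
counter = 0            # shared data protected by the protocol

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        # entry section
        flag[i] = True
        turn = other                      # politely let the other go first
        while flag[other] and turn == other:
            pass                          # busy-wait
        # critical section
        counter += 1
        # exit section
        flag[i] = False

threads = [threading.Thread(target=worker, args=(i, 100)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 200: no increment was lost
```

The entry section satisfies mutual exclusion (both cannot pass the busy-wait simultaneously), progress (a process waits only if the other is interested), and bounded waiting (after signaling intent, a process waits at most one turn).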

Semaphores
A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard
atomic operations: wait () and signal ().
wait(S) {
    while (S <= 0)
        ;   // no-op: busy-wait until S becomes positive
    S--;
}

signal(S) {
    S++;
}
Operating systems often distinguish between counting and binary semaphores. The value of a counting semaphore can
range over an unrestricted domain. The value of a binary semaphore can range only between 0 and 1. On some systems,
binary semaphores are known as mutex locks, as they are locks that provide mutual exclusion.
We can use binary semaphores to deal with the critical-section problem for multiple processes. The n processes share a
semaphore, mutex, initialized to 1. Each process Pi executes wait(mutex) before entering its critical section and signal(mutex) on leaving it.
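A minimal sketch of this arrangement, using Python's threading.Semaphore(1) to play the role of the binary semaphore mutex (the thread count and shared variable are illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore, initialized to 1
balance = 0                      # shared data

def process(deposits):
    global balance
    for _ in range(deposits):
        mutex.acquire()          # wait(mutex)  -- entry section
        balance += 1             # critical section
        mutex.release()         # signal(mutex) -- exit section

threads = [threading.Thread(target=process, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 4000: every update survived
```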

Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The
semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait()
operation on the semaphore (thereby decrementing the count). When a process releases a resource, it performs a signal()
operation (incrementing the count). When the count for the semaphore goes to 0, all resources are being used. After that,
processes that wish to use a resource will block until the count becomes greater than 0.
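This pattern maps directly onto Python's counting threading.Semaphore; the pool size and bookkeeping counters in the sketch below are my own illustration:

```python
import threading
import time

POOL_SIZE = 3                            # finite number of resource instances
pool = threading.Semaphore(POOL_SIZE)    # counting semaphore, initialized to the pool size
in_use = 0
peak = 0
state_lock = threading.Lock()            # protects the bookkeeping counters

def use_resource():
    global in_use, peak
    pool.acquire()                       # wait(): blocks once all instances are taken
    with state_lock:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)                     # pretend to use the resource
    with state_lock:
        in_use -= 1
    pool.release()                       # signal(): return the instance

workers = [threading.Thread(target=use_resource) for _ in range(10)]
for w in workers: w.start()
for w in workers: w.join()
print(peak <= POOL_SIZE)  # True: never more than POOL_SIZE concurrent holders
```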

Thrashing: (virtual memory concept)


A process is thrashing if it is spending more time paging than executing.
In fact, look at any process that does not have "enough" frames. If the process does not have the number of frames
it needs to support the pages in active use, it will page-fault. At this point, it must replace some page. However,
since all its pages are in active use, it must replace a page that will be needed again right away. Consequently, it
quickly faults again, and again, and again, replacing pages that it must bring back in immediately.

Deadlock and Coffman's conditions that lead to a deadlock.


In a multiprogramming environment, several processes may compete for a finite number of resources. A process
requests resources; and if the resources are not available at that time, the process enters a waiting state.
Sometimes, a waiting process is never again able to change state, because the resources it has requested are held
by other waiting processes. This situation is called a deadlock.
Coffman's conditions that lead to a deadlock
Mutual exclusion: At least one resource must be held in a nonsharable mode; that is, only one process at a time
can use the resource. If another process requests that resource, the requesting process must be delayed until the
resource has been released.
Hold and wait. A process must be holding at least one resource and waiting to acquire additional resources that
are currently being held by other processes.
No preemption. Resources cannot be preempted; that is, a resource can be released only voluntarily by the
process holding it, after that process has completed its task.
Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held
by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting
for a resource held by P0.
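All four conditions must hold simultaneously for deadlock, so breaking any one prevents it. A standard way to break circular wait is to impose a global ordering on resource acquisition: every process requests resources in the same order, so no cycle of waiting processes can form. A small illustrative sketch:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def transfer(name):
    # Every thread acquires lock_a before lock_b, so no circular wait
    # (and hence no deadlock) is possible.
    with lock_a:
        with lock_b:
            results.append(name)

threads = [threading.Thread(target=transfer, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(results))  # [0, 1, 2, 3, 4]: all threads completed
```

Had one thread taken lock_b before lock_a while another did the reverse, the circular-wait condition could be satisfied and both could block forever.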

Processes
A process is a program in execution.
It also includes the current activity, as represented by the value of the program counter and the contents
of the processor's registers. A process generally also includes the process stack, which contains
temporary data (such as function parameters, return addresses, and local variables), and a data section,
which contains global variables. A process may also include a heap, which is memory that is dynamically
allocated during process run time.
A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an
executable file), whereas a process is an active entity, with a program counter specifying the next
instruction to execute and a set of associated resources. A program becomes a process when an executable
file is loaded into memory.

Process State: The state of a process is defined in part by the current activity of that process.
New. The process is being created.
Running. Instructions are being executed.
Waiting. The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).
Ready. The process is waiting to be assigned to a processor.
Terminated. The process has finished execution.
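The legal transitions between these states can be sketched as a small table. The dictionary below is an illustration of the state diagram usually drawn here, not any operating system's actual API:

```python
# Hypothetical sketch of the legal process-state transitions described above.
TRANSITIONS = {
    "new":        {"ready"},                           # admitted by the scheduler
    "ready":      {"running"},                         # dispatched to the CPU
    "running":    {"ready", "waiting", "terminated"},  # interrupt, I/O wait, or exit
    "waiting":    {"ready"},                           # I/O or event completion
    "terminated": set(),
}

def is_valid_path(states):
    """Check that each consecutive pair of states is a legal transition."""
    return all(b in TRANSITIONS[a] for a, b in zip(states, states[1:]))

print(is_valid_path(["new", "ready", "running", "waiting", "ready",
                     "running", "terminated"]))   # True
print(is_valid_path(["new", "running"]))          # False: must pass through ready
```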

Each process is represented in the operating system by a process control block (PCB), also called a task control block.
It contains many pieces of information associated with a specific process, including these:
Process state. The state may be new, ready, running, waiting, halted, and so on.
Program counter. The counter indicates the address of the next instruction to be executed for
this process.
CPU registers. The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers, plus
any condition-code information. Along with the program counter, this state information must be
saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information. This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
Memory-management information. This information may include such information as the
value of the base and limit registers, the page tables, or the segment tables, depending on the
memory system used by the operating system.
Accounting information. This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
I/O status information. This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
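As an illustrative sketch only, the fields listed above can be mirrored in a small record type; the names below are my own, chosen to follow the list, and do not correspond to any real kernel structure:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"               # process state
    program_counter: int = 0         # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                # CPU-scheduling information
    page_table: dict = field(default_factory=dict)  # memory-management info
    cpu_time_used: float = 0.0       # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"
print(pcb.pid, pcb.state)  # 42 ready
```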

Process Scheduling
The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
The objective of time sharing is to switch the CPU among processes so frequently that users can interact with
each program while it is running.
To meet these objectives, the process scheduler selects an available process (possibly from a set of several
available processes) for program execution on the CPU.
Schedulers

A process migrates among the various scheduling queues throughout its lifetime. The operating system must
select, for scheduling purposes, processes from these queues (the ready queue, job queue, etc.) in some fashion.
The selection process is carried out by the appropriate scheduler.
The long-term scheduler:
The long-term scheduler, or job scheduler, selects processes from the job pool and loads them into
memory for execution.
It determines which programs are admitted to the system for processing. It controls the degree of
multiprogramming (the number of processes in memory). Once admitted, a job becomes a process.
The long-term scheduler may need to be invoked only when a process leaves the system. Because of the
longer interval between executions, the long-term scheduler can afford to take more time to decide which
process should be selected for execution.
It is important that the long-term scheduler select a good process mix of I/O-bound (I/O-bound process is
one that spends more of its time doing I/O than it spends doing computations) and CPU-bound (using
more of its time doing computations) processes.

The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to
execute and allocates the CPU to one of them.
Limitation:
If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler
will have little to do. If all processes are CPU bound, the I/O waiting queue will almost always be empty,
devices will go unused, and again the system will be unbalanced. The system with the best performance
will thus have a combination of CPU-bound and I/O-bound processes.
The primary distinction between these two schedulers lies in frequency of execution.
The long-term scheduler executes much less frequently;
The short-term scheduler must select a new process for the CPU frequently; hence, the short-term scheduler is
also known as the dispatcher.
The long-term scheduler may be absent or minimal in time-sharing systems.
The short-term scheduler, in contrast, is essential in time-sharing systems.
The medium-term scheduler
The key idea behind a medium-term scheduler is that sometimes it can be advantageous to remove
processes from memory and thus reduce the degree of multiprogramming. Later, the process can be
reintroduced into memory, and its execution can be continued where it left off. This scheme is called
swapping.
The process is swapped out, and is later swapped in, by the medium-term scheduler.
Swapping may be necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up.
Context Switch

Switching the CPU to another process requires performing a state save of the current process and a state
restore of a different process. This task is known as a context switch.
When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the
saved context of the new process scheduled to run.
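A toy model of this save/restore step, with the CPU state and PCBs as dictionaries (all names and register values are made up for illustration; real context switches save hardware registers in kernel code):

```python
def context_switch(cpu, old_pcb, new_pcb):
    # Save the context of the old process in its PCB...
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["program_counter"] = cpu["program_counter"]
    old_pcb["state"] = "ready"
    # ...then load the saved context of the new process.
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["program_counter"] = new_pcb["program_counter"]
    new_pcb["state"] = "running"

cpu = {"registers": {"r0": 7}, "program_counter": 100}
p1 = {"registers": {"r0": 7}, "program_counter": 100, "state": "running"}
p2 = {"registers": {"r0": 99}, "program_counter": 500, "state": "ready"}

context_switch(cpu, p1, p2)
print(cpu["program_counter"], p1["state"], p2["state"])  # 500 ready running
```

Context-switch time is pure overhead: the system does no useful work while switching, which is why its cost matters for short-term scheduling frequency.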

Scheduling Criteria
CPU utilization. We want to keep the CPU as busy as possible.
Throughput. The number of processes that are completed per unit time is called throughput.
Turnaround time. The important criterion is how long it takes to execute that process. The interval from the time of
submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time. Waiting time is the sum of the periods spent waiting in the ready queue.
Response time. In an interactive system, turnaround time may not be the best criterion. The interval from the submission of
a request until the first response is produced is called response time; it is the time it takes to start responding.
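These criteria can be computed for a concrete schedule. The sketch below assumes first-come, first-served order with all processes arriving at time 0 and no I/O, so turnaround time is simply waiting time plus CPU burst time (the burst values are illustrative):

```python
def fcfs_metrics(burst_times):
    """Turnaround and waiting times for processes served in FCFS order."""
    completion = 0
    turnaround, waiting = [], []
    for burst in burst_times:
        waiting.append(completion)        # time spent in the ready queue
        completion += burst
        turnaround.append(completion)     # from submission (t=0) to completion
    return turnaround, waiting

tat, wt = fcfs_metrics([24, 3, 3])
print(tat)                 # [24, 27, 30]
print(wt)                  # [0, 24, 27]
print(sum(wt) / len(wt))   # 17.0 -- average waiting time
```

Note how the long first burst inflates the average waiting time: serving the short bursts first would reduce it, which motivates shortest-job-first scheduling.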

The process image


The process image represents the current status of the process.
When a program is loaded as a process it is allocated a section of virtual memory which forms its useable address space.
Within this process image there are typically at least four elements:
Program code: The program instructions to be executed.
Program data: May be distinguished as initialized variables (including external, global, and static variables)
and uninitialized variables. Data blocks are not shared between processes by default.
Modifiable part of user space. May include program data, user stack area, and programs that may be
modified.
Stack: A process will commonly have at least two last-in, first-out (LIFO) stacks, including a user stack
for user mode and a kernel stack for kernel mode.
Process Control Block: Info needed by the OS to control processes.
