CS1603 OPERATING SYSTEMS 3 0 0 100

UNIT I INTRODUCTION 7
Mainframe Systems, Desktop Systems – Multiprocessor Systems – Distributed
Systems – Clustered Systems – Real Time systems – Hand held Systems,
Operating Systems Structures: System Components – Operating System Services
- System calls - System Programs – System Design and Implementation - CPU
scheduling: Basic Concepts – Scheduling Algorithms.
UNIT II PROCESS MANAGEMENT 11
Process Concepts - Process Scheduling - Operations on Processes - Cooperating
Processes - Inter Process Communication - Threads: Multithreading Models -
Process Synchronization: The Critical Section Problem – Synchronization
Hardware - Semaphores – classical problem of Synchronization – Monitors -
Deadlock: Deadlock Characterization - Methods for handling Deadlocks - Deadlock
Prevention – Deadlock Avoidance - Deadlock Detection – Recovery from Deadlock.
UNIT III MEMORY MANAGEMENT 9
Background – Swapping - Contiguous Memory Allocation - Paging - Segmentation
– Segmentation with paging - Virtual Memory: Demand paging - Page
Replacement - Thrashing.

UNIT IV FILE SYSTEMS 9


File Concepts - Access methods - Directory Structure - File Protection - File
System Implementation: File System Structure and Implementation – Directory
Implementation – Allocation Methods – Free Space Management – Recovery - Disk
Structure – Disk Scheduling.
UNIT V DISTRIBUTED OPERATING SYSTEM 9
Design issues in distributed operating system-Distributed file systems - Naming
and Transparency-Remote File Access-Stateful versus Stateless service –
Distributed Coordination- Event Ordering-Mutual Exclusion- Atomicity-
Concurrency Control- Deadlock Handling-Election Algorithms-Case Study-Linux.
Total No. of Periods: 45

TEXTBOOKS

1. Silberschatz, Galvin, Gagne, "Operating System Concepts", Sixth Edition,
2003.
2. Pradeep K. Sinha, "Distributed OS Concepts and Design", IEEE Computer
Society Press, PHI, 1998.

REFERENCES

1. Andrew S. Tanenbaum, "Modern Operating Systems", PHI, 2nd Edition,
2001.
2. Achut S. Godbole and Atul Kahate, "Operating Systems & Systems
Programming", Tata McGraw-Hill, 2003.
3. Charles Crowley, "Operating Systems: A Design-Oriented Approach", Tata
McGraw-Hill, 1999.

GOD IS LOVE

UNIT - I

1. What is an operating system?


An operating system is a program that manages the computer hardware.
It also provides a basis for application program and acts as an intermediary
between a user of a computer and the computer hardware.

2. What are the components of a computer system?


A computer system can be divided into four components namely:
a. the hardware
b. the operating system
c. the application programs
d. the users

3. What is meant by timesharing?


Timesharing (or multitasking) is a logical extension of multiprogramming.
The CPU executes multiple jobs by switching among them, but the switches
occur so frequently that the users can interact with each program while it is
running.

4. What are the advantages of a time-sharing operating system?


The advantages of a time-sharing operating system are:
a. It allows many users to share the computer simultaneously.
b. The use of virtual memory frees programmers from concern over
memory-storage limitations.

5. When are real-time systems used?


A real-time system is used when rigid time requirements have been placed
on the operation of a processor or the flow of data. It is often used as a
control device in a dedicated application.

6. What are the two types of real-time system?


The two types of real-time systems are:
a. Hard real-time system, which guarantees that critical tasks be
completed on time.
b. Soft real-time system, in which a critical real-time task gets priority
over other tasks, and retains that priority until it completes.

7. Write short notes on command-line interpreter.


Many commands are given to the operating system by control statements.
When a new job is started in a batch system, or when a user logs on to a
time-shared system, a program that reads and interprets control statements
is executed automatically. This program is sometimes called the control-card
interpreter or the command-line interpreter, and is often known as the shell.

8. What are system calls?


System calls provide the interface between a process and the operating
system. These calls are generally available as assembly-language instructions,
and they are usually listed in various manuals used by the assembly-language
programmers.
Certain systems allow system calls to be made directly from a higher-level
language program, in which case the calls normally resemble predefined
function or subroutine calls.

9. List the different categories of system calls.
System calls can be grouped into five major categories namely:
a. Process control
b. File management
c. Device management
d. Information maintenance
e. Communications

10.What are the two models of communication?


The two models of communication are:
a. Message-passing model: information is exchanged through an
interprocess-communication facility provided by the operating system.
b. Shared-memory model: processes use map-memory system calls to
gain access to regions of memory owned by other processes.

11.What are daemon processes?


Processes that stay in the background to handle some activity are called
daemon processes, e.g., background processes that handle email, web
pages, news, printing, etc.

12.What is meant by CPU scheduler?


Whenever the CPU becomes idle, the operating system must select one of
the processes in the ready queue to be executed. The selection process is
carried out by the short-term-scheduler (or CPU scheduler).

13.What are the two types of scheduling? Explain.


The two types of scheduling are:
a. Preemptive scheduling
b. Nonpreemptive scheduling
Preemptive scheduling is the process of scheduling in which a process can
run for at most a fixed time interval. Once the end of the interval is reached,
the process is suspended and the next process runs.
Nonpreemptive scheduling is the process of scheduling in which a process
runs until it completes its operation or voluntarily releases the CPU.

14.What is a dispatcher?
The dispatcher is the module that gives control of the CPU to the process
selected by the short-term-scheduler. This function involves:
a. Switching context
b. Switching to user mode
c. Jumping to proper location in the user program to restart that
program.

15.What is meant by dispatch latency?


The time taken for the dispatcher to stop one process and start another
running is known as dispatch latency.

16.List the various scheduling criterions.


The scheduling criterions include the following:
a. CPU utilization
b. Throughput
c. Turnaround time
d. Waiting time
e. Response time

17.Define throughput.
The number of processes completed per time unit is called throughput.

18.Define turnaround time.


The interval from the time of submission of a process to the time of
completion is the turnaround time.

19.Define waiting time.


Waiting time is the sum of the periods spent waiting in the ready queue.

20.Define response time.


Response time is the amount of time it takes to start responding.

21.Name some CPU scheduling algorithms.


Some of the CPU scheduling algorithms are as follows:
a. First-Come, First-Served scheduling
b. Shortest-Job-First scheduling
c. Priority scheduling
d. Round-Robin scheduling
e. Multilevel queue scheduling
f. Multilevel feedback queue scheduling

22.Explain First-Come, First-Served scheduling.


In the First-Come, First-Served scheduling, the process that requests the
CPU first is allocated the CPU first. The implementation of the FCFS policy is
easily managed with a FIFO queue. When a process enters the ready queue,
its PCB is linked onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue. The running process is then
removed from the queue.
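The FIFO-queue behaviour above can be sketched in a few lines of Python; the burst lengths are illustrative (the classic 24, 3, 3 example):

```python
from collections import deque

def fcfs_waiting_times(bursts):
    """Return the waiting time of each process under FCFS.

    bursts: CPU burst lengths, listed in arrival order.
    """
    queue = deque(bursts)          # ready queue (FIFO)
    clock, waits = 0, []
    while queue:
        burst = queue.popleft()    # process at the head gets the CPU
        waits.append(clock)        # time spent waiting in the ready queue
        clock += burst             # runs to completion (nonpreemptive)
    return waits

print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27]
```

Note the convoy effect: a long first burst (24) makes the short jobs wait, giving an average waiting time of 17; arriving in the order 3, 3, 24 would give an average of only 3.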

23.How processes are scheduled using Shortest-Job-First scheduling?


Shortest-Job-First scheduling algorithm associates with each process the
length of the latter’s next CPU burst. When the CPU is available, it is assigned
to the process that has the smallest next CPU burst. A more appropriate term
would be the shortest next CPU burst, because the scheduling is done by
examining the length of the next CPU burst of a process, rather than its total
length.
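A sketch of nonpreemptive SJF, assuming all processes arrive at time 0 (the process names and burst lengths are illustrative):

```python
def sjf_waiting_times(bursts):
    """Waiting time per process under nonpreemptive SJF.

    bursts: dict of process name -> next CPU burst length,
    all assumed to arrive at time 0.
    """
    clock, waits = 0, {}
    remaining = dict(bursts)
    while remaining:
        # pick the process with the smallest next CPU burst
        name = min(remaining, key=remaining.get)
        waits[name] = clock
        clock += remaining.pop(name)
    return waits

# Bursts 6, 8, 7, 3 are served in the order P4, P1, P3, P2
print(sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3}))
# {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
```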

24.Write short notes on priority scheduling.


In priority scheduling, a priority is associated with each process, and the
CPU is allocated to the process with the highest priority. Equal-priority
processes are scheduled in FCFS order.

25.What is aging?
Aging is a technique of gradually increasing the priority of processes that
wait in the system for a long time.

26.Which scheduling does time-sharing systems use? Explain.


The Round-Robin (RR) scheduling algorithm is designed especially for
time-sharing systems. It is similar to FCFS scheduling, but preemption is
added to switch between processes. A small unit of time, called a time
quantum (or time slice), is defined. A time quantum is generally from 10 to
100 milliseconds. The ready queue is treated as a circular queue. The CPU
scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum.
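The circular-queue behaviour can be sketched as follows (the burst lengths and the 4-unit quantum are illustrative):

```python
from collections import deque

def round_robin_completion(bursts, quantum):
    """Completion time of each process under RR (all arrive at time 0)."""
    queue = deque(bursts.items())   # ready queue treated as circular
    clock, done = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(left, quantum)    # run for up to one time quantum
        clock += run
        if left > run:
            queue.append((name, left - run))  # preempted: back to the tail
        else:
            done[name] = clock                # finished within the quantum
    return done

print(round_robin_completion({"P1": 24, "P2": 3, "P3": 3}, 4))
# {'P2': 7, 'P3': 10, 'P1': 30}
```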

27.Explain multilevel queue scheduling.
A multilevel queue scheduling algorithm partitions the ready queue into
several separate queues. The processes are permanently assigned to one
queue, generally based on some property of the process, such as memory
size, process priority, or process type. Each queue has its own scheduling
algorithm. For example, separate queues might be used for foreground and
background processes. The foreground queue might be scheduled by an RR
algorithm, while the background queue is scheduled by an FCFS algorithm.

28.How are processes scheduled in multilevel feedback queue


scheduling?
Multilevel feedback queue scheduling allows a process to move between
queues. The idea is to separate processes with different CPU-burst
characteristics. If a process uses too much CPU time, it will be moved to a
lower-priority queue. This scheme leaves I/O-bound and interactive processes
in the higher-priority queues. Similarly, a process that waits too long in a
lower-priority queue may be moved to a higher-priority queue. This form of
aging prevents starvation.


UNIT - II

1. What is a process?
A process is a program in execution. A process is more than the program
code, which is sometimes known as the text section. It also includes the
current activity, as represented by the value of the program counter and the
contents of the processor’s registers.

2. What is a child process?


A process may create several new processes, via a create-process system
call, during the course of execution. The creating process is called a parent
process, whereas the new processes are called the children of that process.

3. What are the various states of process?


As a process executes, it changes state. The state of a process is defined
in part by the current activity of that process. Each process may be in one of
the following states:
a. New: The process is being created.
b. Running: Instructions are being executed.
c. Waiting: The process is waiting for some event to occur (such as an
I/O completion or reception of a signal).
d. Ready: The process is waiting to be assigned to a processor.
e. Terminated: The process has finished execution.

4. What is a process control block?


Each process is represented in the operating system by a process control
block (PCB) – also called a task control block. It contains many pieces of
information associated with a specific process, including these:
a. Process state
b. Program counter
c. CPU registers
d. CPU-scheduling information
e. Memory-management information
f. Accounting information
g. I/O status information

5. Explain CPU-bound and I/O-bound processes.


CPU-bound processes are those that spend most of their time doing
computation. They have long CPU bursts.
I/O-bound processes are those that spend most of their time waiting for
I/O. They have short CPU bursts.

6. What is a ready queue?


The processes that are residing in main memory and are ready and
waiting to execute are kept on a list called the ready queue. This queue is
generally stored as a linked list.

7. What is meant by degree of multiprogramming?


The long-term scheduler controls the degree of multiprogramming – the
number of processes in memory.

8. What is context switch?


Switching the CPU to another process requires saving the state of the old
process and loading the saved state for the new process. This task is known
as a context switch.

9. What is meant by inter process communication?
IPC provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space. IPC is
particularly useful in a distributed environment where the communicating
processes may reside on different computers connected with a network.

10.Write notes on message-passing system.


The function of a message system is to allow processes to communicate
with one another without the need to resort to shared data. In this scheme,
services are provided as ordinary user processes. That is, the services operate
outside of the kernel. Communication among the user processes is
accomplished through the passing of messages. An IPC facility provides at
least two operations: send(message) and receive(message).
11.Mention the methods for implementing a link and the send/receive
operations.
The methods for logically implementing a link and send/receive operations
are as follows:
a. Direct or indirect communication
b. Symmetric or asymmetric communication
c. Automatic or explicit buffering
d. Send by copy or send by reference
e. Fixed-size or variable-sized messages

12.Explain direct communication.


With direct communication, each process that wants to communicate must
explicitly name the recipient or sender of the communication. In this scheme,
the send and receive primitives are defined as follows:
a. send(P, message) – Send a message to process P.
b. receive(Q, message) – Receive a message from process Q.

13.Mention the properties of the communication link in direct


communication.
A communication link in direct communication scheme has the following
properties:
a. A link is established automatically between every pair of processes that
want to communicate. The processes need to know only each other's
identity to communicate.
b. A link is associated with exactly two processes.
c. Exactly one link exists between each pair of processes.

14.Write notes on indirect communication.


With indirect communication, the messages are sent to and received from
mailboxes, or ports. A mailbox can be viewed abstractly as an object into
which messages can be placed by processes and from which messages can be
removed. Each mailbox has a unique identification. In this scheme, a process
can communicate with some other process via a number of different
mailboxes. Two processes can communicate only if they share a mailbox. The
send and receive primitives are defined as follows:
a. send(A, message) – Send a message to mailbox A.
b. receive(A, message) – Receive a message from mailbox A.

15.List out the properties of a communication link in indirect
communication scheme.
In indirect communication scheme, a communication link has the following
properties:
a. A link is established between a pair of processes only if both members
of the pair have a shared mailbox.
b. A link may be associated with more than two processes.
c. A number of different links may exist between each pair of
communicating processes, with each link corresponding to one
mailbox.

16.Name the different options for implementing send and receive


primitives.
Communication between processes takes place by calls to send and
receive primitives. There are different design options for implementing each
primitive. Message passing may be either blocking or nonblocking – also
known as synchronous and asynchronous.
a. Blocking send: The sending process is blocked until the message is
received by the receiving process or by the mailbox.
b. Nonblocking send: The sending process sends the message and
resumes operation.
c. Blocking receive: The receiver blocks until a message is available.
d. Nonblocking receive: The receiver retrieves either a valid message or a
null.

17.Write notes on buffering in IPC.


Whether the communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Basically, such a
queue can be implemented in three ways:
a. Zero capacity: The queue has maximum length 0; thus the link cannot
have any messages waiting in it. In this case, the sender must block
until the recipient receives the message.
b. Bounded capacity: The queue has finite length n; thus, at most n
messages can reside in it. If the queue is not full when a new message
is sent, the latter is placed in the queue (either the message is copied
or a pointer to the message is kept), and the sender can continue
execution without waiting. The link has a finite capacity, however. If
the link is full, the sender must block until space is available in the
queue.
c. Unbounded capacity: The queue has potentially infinite length; thus,
any number of messages can wait in it. The sender never blocks.
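A bounded-capacity mailbox with blocking send and receive can be sketched with Python's standard queue module (the message names are illustrative):

```python
import queue
import threading

# A bounded-capacity mailbox: put() blocks when the queue is full,
# get() blocks until a message is available.
mailbox = queue.Queue(maxsize=2)

def producer():
    for msg in ("m1", "m2", "m3"):
        mailbox.put(msg)           # blocking send

received = []

def consumer():
    for _ in range(3):
        received.append(mailbox.get())  # blocking receive

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)   # ['m1', 'm2', 'm3']
```

With maxsize=0 the queue would be unbounded (the sender never blocks); Python has no direct zero-capacity queue, but a rendezvous can be approximated with a second acknowledgement queue.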

18.What is meant by thread?


A thread, sometimes called a lightweight process (LWP), is a basic unit of
CPU utilization; it comprises a thread ID, a program counter, a register set,
and a stack. It shares with other threads belonging to the same process its
code section, data section, and other operating-system resources, such as
open files and signals.

19.List the types of threading implementation.


The three common types of threading implementation are:
a. Many-to-one model: maps many user-level threads to one kernel
thread.
b. One-to-one model: maps each user thread to a kernel thread.
c. Many-to-many model: multiplexes many user-level threads to a
smaller or equal number of kernel threads.

20.What is a race condition?
A situation, where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular
order in which the access takes place, is called a race condition.

21.Define a critical section.


The part of the program, where the shared memory is accessed is called
as the critical region or critical section.

22.What is meant by mutual exclusion?


Mutual exclusion is a way of making sure that, if one process is using a
shared variable or file, the other processes will be excluded from using the
same variable or file.

23.What is a semaphore?
A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait and signal. These
operations were originally termed P (for wait; from the Dutch proberen, to
test) and V (for signal; from verhogen, to increment).
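Initialized to 1, a semaphore acts as a mutex guarding a critical section. A sketch using Python's threading.Semaphore (the counter and iteration counts are illustrative):

```python
import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore, initialized to 1

def worker():
    global counter
    for _ in range(10000):
        mutex.acquire()          # wait (P): blocks while the semaphore is 0
        counter += 1             # critical section: update shared data
        mutex.release()          # signal (V): increments the semaphore

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)    # 40000 — no increments are lost
```

Without the acquire/release pair, concurrent read-modify-write of counter would be a race condition and some increments could be lost.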

24.What is a monitor?
A monitor is a concurrency construct that contains both the data and the
procedures needed to perform allocation of a particular serially reusable
shared resource, or of a group of serially reusable shared resources.

25.What is meant by deadlock?


A process requests resources; if the resources are not available at that
time, the process enters a wait state. Waiting processes may never again
change state, because the resources they have requested are held by other
waiting processes. This situation is called a deadlock.

26.Explain the necessary conditions for deadlock.


A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
a. Mutual exclusion: At least one resource must be held in a non-sharable
mode; that is, only one process at a time can use the resource. If
another process requests that resource, the requesting process must
be delayed until the resource has been released.
b. Hold and wait: A process must be holding at least one resource and
waiting to acquire additional resources that are currently being held by
other processes.
c. No preemption: Resources cannot be preempted; that is, a resource
can be released only voluntarily by the process holding it, after that
process has completed its task.
d. Circular wait: A set {P0, P1, . . . , Pn} of waiting processes must exist
such that P0 is waiting for resource that is held by P1, P1 is waiting for
resource that is held by P2, . . . , Pn-1 is waiting for a resource that is
held by Pn, and Pn is waiting for a resource that is held by P0.

27.What is a resource-allocation graph?
Deadlock can be described more precisely in terms of a directed graph
called a system resource-allocation graph. This graph consists of a set of
vertices V and a set of edges E. The set of vertices V is partitioned into two
different types of nodes P={P1, P2, . . . , Pn}, the set consisting of all the
active processes in the system, and R={R1, R2, . . . , Rm}, the set of all
resource types in the system.
A directed edge PiRj is called a request edge; a directed edge R iPi is
called an assignment edge.

28.How can we handle deadlock?


Deadlock can be handled in one of the following ways:
a. Protocols can be used to prevent or avoid deadlocks, ensuring that the
system will never enter a deadlock state.
b. The system can be allowed to enter a deadlock state, which can then
be detected and recovered from.
c. The problem can be ignored altogether, pretending that deadlocks
never occur in the system.

29.What is meant by deadlock prevention?


Deadlock prevention is a set of methods for ensuring that at least one of
the necessary conditions cannot hold.

30.Write notes on deadlock avoidance.


Deadlock avoidance requires that the operating system be given in
advance additional information concerning which resources a process will
request and use during its lifetime.

31.When is a system in a safe state?


A state is safe if the system can allocate resources to each process (up to
its maximum) in some order and still avoid a deadlock. More formally, a
system is in a safe state if there exists a safe sequence. A sequence of
processes <P1, P2, . . . , Pn> is a safe sequence for the current allocation
state if, for each Pi, the resources that Pi can still request can be satisfied by
the currently available resources plus the resources held by all the Pj, with j<i.

32.List the data structures used in banker’s algorithm.


Banker’s algorithm uses the following data structures:
a. Available: A vector of length m indicates the number of available
resources of each type. If Available[j] = k, there are k instances of
resource type Rj available.
b. Max: An n x m matrix defines the maximum demand of each process.
If Max[i,j] = k, then the process Pi may request at most k instances of
resource type Rj.
c. Allocation: An n x m matrix defines the number of resources of each
type currently allocated to each process. If Allocation[i,j] = k, then
process Pi is currently allocated k instances of resource type Rj.
d. Need: An n x m matrix indicates the remaining resource need of each
process. If Need[i,j] = k, then process Pi may need k more instances of
resource type Rj to complete its task.
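The safety algorithm that operates on these data structures can be sketched as follows; the example matrices are the usual textbook figures, and the variable names are illustrative:

```python
def is_safe(available, allocation, need):
    """Safety algorithm of the banker's algorithm.

    available: free instances of each resource type (Work starts here).
    allocation, need: one row per process (the Allocation and Need matrices).
    Returns a safe sequence of process indices, or None if the state is unsafe.
    """
    work = list(available)
    finish = [False] * len(allocation)
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i, row in enumerate(need):
            # Pi can finish if Need_i <= Work, componentwise
            if not finish[i] and all(n <= w for n, w in zip(row, work)):
                # Pi runs to completion and releases its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
# Need = Max - Allocation
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(max_demand, allocation)]
print(is_safe([3, 3, 2], allocation, need))   # [1, 3, 4, 0, 2]
```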

33.What should the system do when a deadlock occurs?


When a deadlock situation occurs, the system must provide the following:
a. An algorithm that examines the state of the system to determine
whether a deadlock has occurred.
b. An algorithm to recover from the deadlock.

34.How can the system recover from deadlock?
There are two options for breaking a deadlock. They are:
a. Abort one or more processes to break the circular wait.
b. Preempt some resources from one or more of the deadlocked
processes.
35.What are the ways of terminating a process during deadlocks?
To eliminate deadlocks by aborting a process, the following two methods
can be used:
a. Abort all deadlocked processes.
b. Abort one process at a time until the deadlock cycle is eliminated.

36.List the issues that need to be addressed, when preemption is used


to deal with deadlocks.
If preemption is required to deal with deadlock, then three issues need to
be addressed:
a. Selecting a victim
b. Rollback
c. Starvation


UNIT - III

1. Differentiate between logical address space and physical address


space.
An address generated by the CPU is commonly referred to as a logical
address whereas an address seen by the memory unit – that is, the one
loaded into the memory-address register of the memory – is commonly
referred to as a physical address.
The set of all logical addresses generated by a program is a logical-
address space; the set of all physical addresses corresponding to these logical
addresses is a physical address space.

2. What is a memory-management unit?


The run-time mapping from virtual to physical address is done by a
hardware device called the memory-management unit.

3. Write short notes on dynamic loading.


To obtain better memory-space utilization, dynamic loading can be used. A
routine is not loaded until it is called. The advantage of dynamic loading is
that an unused routine is never loaded. This method is particularly useful
when large amounts of code are needed to handle infrequently occurring
cases, such as error routines.

4. What is the use of overlays?


To enable a process to be larger than the amount of memory allocated to
it, overlays can be used. The idea of overlays is to keep in memory only those
instructions and data that are needed at any given time.

5. Define swapping.
A process needs to be in memory to be executed. A process can be
swapped temporarily out of memory to a backing store, and then brought
back into memory for continued execution. This technique is known as
swapping.

6. Explain roll out, roll in.


If a higher-priority process arrives and wants service, the memory
manager can swap out a lower-priority process so that it can load and
execute the higher-priority process. When the higher-priority process finishes,
the lower-priority process can be swapped back in and continued. This variant
of swapping is sometimes called roll out, roll in.

7. Write notes on contiguous storage allocation.


The memory is usually divided into two partitions: one for resident
operating system, and one for the user processes. The operating system may
be placed in either low or high memory. The major factor affecting this
decision is the location of interrupt vector. Since the interrupt vector is often
in low memory, programmers usually place the operating system in low
memory as well. In this contiguous memory allocation, each process is
contained in a single contiguous section of memory.

8. What is the purpose of relocation register and limit register?
The relocation register contains the value of the smallest physical address;
the limit register contains the range of logical addresses (for example,
relocation = 100040 and limit = 74600). With relocation and limit registers,
each logical address must be less than the limit register; the MMU maps the
logical address dynamically by adding the value in the relocation register. This
mapped address is sent to memory.
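A sketch of this mapping, using the example register values above:

```python
def map_address(logical, relocation=100040, limit=74600):
    """Dynamic relocation with a relocation register and a limit register.

    The default register values are the example figures above; the MMU
    traps to the operating system if the logical address is out of range.
    """
    if logical >= limit:
        raise MemoryError("trap: addressing error beyond limit register")
    return logical + relocation   # MMU adds the relocation register

print(map_address(346))    # 100386
```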

9. Explain multiple-partition method.


In multiple-partition method, when a partition is free, a process is selected
from the input queue and is loaded into the free partition. When the process
terminates, the partition becomes available for another process.

10.What is dynamic storage allocation problem? How can it be solved?


Dynamic storage allocation problem is how to satisfy a request of size n
from a list of free holes.
There are many solutions to this problem. The set of holes is searched to
determine which hole is best to allocate. The first-fit, best-fit, and worst-fit
strategies are the most common ones used to select free hole from the set of
available holes.

11.Write notes on first-fit, best-fit and worst-fit methods.


a. First-fit: Allocate the first hole that is big enough. Searching can start
either at the beginning of the set of holes or where the previous first-fit
search ended.
b. Best-fit: Allocate the smallest hole that is big enough. Search the
entire list, unless the list is kept ordered by size. This strategy
produces the smallest leftover hole.
c. Worst-fit: Allocate the largest hole. Search the entire list, unless it is
sorted by size. This strategy produces the largest leftover hole, which
may be more useful than the smaller leftover hole from a best-fit
approach.
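The three strategies can be sketched as selections over a list of hole sizes; the hole sizes and the 212-unit request below are illustrative:

```python
def first_fit(holes, n):
    """Index of the first hole of size >= n, or None."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest hole that is big enough, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Index of the largest hole, provided it is big enough, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# 1 3 4 — first-fit takes the 500 hole, best-fit the 300, worst-fit the 600
```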

12.What is internal fragmentation?


Memory that is internal to a partition but is not being used results in
internal fragmentation.

13.Explain paging technique.


Paging is a memory-management scheme that permits the physical-
address space of a process to be noncontiguous. Paging avoids the
considerable problem of fitting the varying-sized memory chunks onto the
backing store, from which most of the previous memory-management
schemes suffered.

14.Define frames.
Physical memory is broken into fixed-sized blocks called frames.

15.Define pages.
Logical memory is broken into blocks of the same size called pages.

16.Define frame table.


Since the operating system manages physical memory, it must be aware
of the allocation details of physical memory: which frames are allocated,
which frames are available, how many total frames there are, and so on. This
information is generally kept in a data structure called a frame table.

17.Explain the term page-table base register.
The page table is kept in main memory, and a page-table base register
(PTBR) points to the page table.

18.Explain the term Translation look–aside buffer.


The TLB is associative, high-speed memory. Each entry in the TLB consists
of two parts: a key (or tag) and a value. When the associative memory is
presented with an item, the item is compared with all keys simultaneously. If
the item is found, the corresponding value field is returned. The search is
fast; the hardware is expensive. Typically, the number of entries in a TLB is
small, often numbering between 64 and 1,024.

19.What is a hit ratio?


The percentage of times that a particular page number is found in the TLB
is called the hit ratio.
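The hit ratio determines the effective memory-access time. A sketch with illustrative timings (a 20 ns TLB lookup and a 100 ns memory access):

```python
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    """Effective memory-access time with a TLB (timings are illustrative).

    A TLB hit costs one TLB lookup plus one memory access; a miss costs
    the lookup plus two accesses (one for the page table, one for the byte).
    """
    hit_cost = tlb_ns + mem_ns
    miss_cost = tlb_ns + 2 * mem_ns
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

print(effective_access_time(0.80))   # 140.0 ns
print(effective_access_time(0.98))   # 122.0 ns — a higher hit ratio helps
```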

20.Explain segmentation.
The logical-address space is a collection of segments. The process of
mapping the logical address space to the physical address space using a
segment table is known as segmentation.

21.Write notes on virtual memory.


Virtual memory is a technique that allows the execution of processes that
may not be completely in memory. One major advantage of this scheme is
that programs can be larger than physical memory.

22.What is a lazy swapper?


A lazy swapper never swaps a page into memory unless that page will be
needed.

23.Explain page replacement approach.


Page replacement uses the following approach:
a. Find the location of the desired page on the disk.
b. Find a free frame:
i. If there is a free frame, use it.
ii. If there is no free frame, use a page replacement algorithm to
select a victim frame.
iii. Write the victim page to the disk; change the page and frame
tables accordingly.
c. Read the desired page into the (newly) free frame; change the page
and frame tables.
d. Restart the user process.

24.What is a reference string?


An algorithm is evaluated by running it on a particular string of memory
references and computing the number of page faults. The string of memory
references is called a reference string.

25.List the various page replacement algorithms.


Page replacement algorithms include:
a. FIFO page replacement
b. Optimal page replacement
c. LRU page replacement
d. LRU approximation page replacement
e. Counting-based page replacement

27.How does FIFO page replacement work?


A FIFO page replacement algorithm associates with each page the time
when that page was brought into memory. When a page must be replaced,
the oldest page is chosen.
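A sketch of FIFO replacement, run here on a commonly used 20-entry reference string:

```python
from collections import deque

def fifo_page_faults(reference_string, frames):
    """Count page faults under FIFO replacement with a fixed frame count."""
    memory = deque()              # oldest page sits at the left end
    faults = 0
    for page in reference_string:
        if page not in memory:    # page fault
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # replace the oldest page
            memory.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_page_faults(ref, 3))   # 15 faults with three frames
```

On the string [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] this gives 9 faults with three frames but 10 with four, an instance of Belady's anomaly.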

28.State Belady’s anomaly.


Belady’s anomaly states that for some page replacement algorithms, the
page-fault rate may increase as the number of allocated frames increases.

29.Explain least-recently-used algorithm.


LRU replacement associates with each page the time of that page's last
use. When a page must be replaced, LRU chooses the page that has not been
used for the longest period of time.
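A sketch of LRU replacement using a recency-ordered list, run on a commonly used 20-entry reference string:

```python
def lru_page_faults(reference_string, frames):
    """Count page faults under LRU replacement with a fixed frame count."""
    memory = []                   # least recently used page at index 0
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.remove(page)   # refresh: will move to most-recent slot
        else:                     # page fault
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)       # mark page as most recently used
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_page_faults(ref, 3))    # 12 faults with three frames
```

On this string LRU faults less often than FIFO (12 versus 15 with three frames), since it keeps recently used pages resident.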

30.Describe least frequently used page-replacement.


The least frequently used (LFU) page-replacement algorithm requires that
the page with the smallest count be replaced.

31.How does most frequently used page-replacement work?


The most frequently used (MFU) page-replacement algorithm is based on
the argument that the page with the smallest count was probably just brought
in and has yet to be used.

32.What is meant by thrashing?


If a process spends more time paging (having its pages brought into and
taken out of memory) than executing, it is said to be thrashing.
