
King's College London

OSC

Operating Systems & Concurrency


solutions by Ainur Makhmet, Fares Alaboud, George Raduta, Mark Ormesher, Mustafa Al-Bassam & Navi Parmar

Name three requirements of solutions to the critical section problem. (2013, Q1)
1. Mutual Exclusion
2. Freedom from Deadlock
3. Freedom from Starvation
What are the essential data structures and operations of a semaphore? (2013, Q1)
Data Structures: a counter, which holds the number of permits, and a set or queue of blocked processes/threads.
Operations: wait() (or acquire()) and signal() (or release()).
What is starvation? (2013, Q1)
When a process does not eventually run due to an interleaving that causes it to wait forever while
other processes are run, that process is starved. In a starvation-free solution, every process must
eventually be run.
What is a thread? (2013, Q2)
A thread is a lightweight process, and a basic unit of CPU utilisation.
What is the effect of the keyword synchronized? (2013, Q2)
For each instance of an object, it allows only one thread at a time to execute any of that instance's synchronised methods.
What does the keyword synchronized aim to prevent? (2013, Q2)
It aims to prevent multiple threads from accessing the same data at a given time, which could
prevent data loss among other problems.
Does using synchronized methods instead of semaphores prevent deadlock from
occurring? (2013, Q2)
Not in general. If a single shared object's synchronised methods contain the critical section, the one thread holding the monitor will always finish and release it, so deadlock cannot occur. However, if threads must acquire the monitors of several objects and do so in different orders, synchronised methods can still deadlock.
State four distinct conditions that must hold simultaneously for deadlock to be possible to
occur in a system. (2012Resit, Q2)
In order for deadlock to be possible, the four Coffman conditions must hold:
Mutual Exclusion: only one process at a time can use a resource.
Hold & Wait: a process holding at least one resource is waiting to acquire resources held by other processes.
No Preemption: a process can only release a resource voluntarily, after it has finished using it.
Circular Wait: there exists a set {P0, P1, …, Pn} of waiting processes such that:
P0 is waiting for a resource that is held by P1,
P1 is waiting for a resource that is held by P2,
…,
Pn−1 is waiting for a resource that is held by Pn,
Pn is waiting for a resource that is held by P0.
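The circular-wait condition (and with it all four) can be demonstrated in a few lines of Java; a minimal sketch with two hypothetical lock objects, where each thread holds one lock and waits for the other:

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {            // hold A (mutual exclusion)...
                sleepQuietly(100);
                synchronized (lockB) { }      // ...and wait for B (hold & wait)
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {            // hold B...
                sleepQuietly(100);
                synchronized (lockA) { }      // ...and wait for A (circular wait)
            }
        });
        t1.setDaemon(true);                   // let the JVM exit despite the deadlock
        t2.setDaemon(true);
        t1.start(); t2.start();
        t1.join(1000); t2.join(1000);         // give both threads time to block
        // Monitors cannot be forcibly taken away (no preemption), so both
        // threads are now permanently blocked on each other.
        System.out.println("deadlocked = " + (t1.isAlive() && t2.isAlive()));
    }

    private static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }
}
```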


A program makes three system calls: one to open a file, one to read a byte from it, and one
to close a file. (2013, Q3)
i) How many times will the mode bit change from 0 to 1 during this process?
It will change from 0 to 1 three times.
ii) How can the CPUs registers be used to pass parameters to system calls and why might
this cause problems if the system calls take many parameters? Propose an alternative
method of passing parameters that avoids this problem.
Registers are fast memory locations on the CPU. Parameters are stored in the registers for the system call to read. However, there is usually only a small, finite number of registers, and one is also needed for the system call number itself, so there may be more parameters than available registers. An alternative is to push the parameters onto the stack, or to store them in a block in memory and pass the block's address in a single register.
iii) Code using system calls is not portable between different operating systems. How does
an API instead of system calls address this issue?
An API allows the programmer to make the same call across multiple operating systems: the API function translates into the appropriate system call for whichever operating system is being used, so the program's code does not need to change.
What is a virtual machine, and how does it work? (2013, Q3 & 2012Resit, Q4)
A virtual machine is software that runs a guest operating system by presenting it with simulated hardware resources, such as memory and CPU, drawn from a host OS.
Give two advantages of using virtual machines for consolidation. Justify your answers.
(2012Resit, Q4)
1) Security: if one virtual machine is infiltrated, the others remain difficult to infiltrate, because each VM is isolated from the rest.
2) Scalability: it is much easier to manipulate and limit the resources allocated to different VMs.
Which would be more robust to security attacks: a system where each website is hosted by its
own web server, running on its own VM; or a system where one web server, running on a
non-VM, hosts all the websites? (2013, Q3)
The first system: if a website is infiltrated, the only data affected is the data on that VM. A fault or security exploit on one website does not compromise the whole set of websites.
What is the difference between logical and physical memory addresses? How can the use
of logical memory addresses allow code to be position independent? (2013, Q4 &
2014Mock, Q3)
A physical memory address points to an actual location in physical memory; it is what the memory hardware sees. A logical address, as generated by a program, points to a location in the program's address space and must be mapped to a physical address, for example by adding a base register and checking against a limit register. Because every address the program uses is relative to the base, the code is position independent: relocating it only requires changing the base.
What is external fragmentation? (2013, Q4)
External fragmentation occurs when there is enough free memory in total to meet a memory
allocation, but there does not exist a free memory block large enough to meet the said allocation.
Considering a paging system with page tables stored in memory, what is internal
fragmentation and why does it arise when using fixed-size memory pages? (2012Resit, Q3)
Internal fragmentation occurs when a memory allocation request receives more memory than it
needs, for example because the remaining memory would be too small to hold a new memory control block. With fixed-size memory pages, a request for less than the size of a page is still given the whole page; the unused remainder of that page is internal fragmentation.
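A quick worked example (illustrative numbers, not from the exam): with 4096-byte pages, a request for 10000 bytes is rounded up to three whole pages, wasting the tail of the last one:

```java
public class InternalFragmentation {
    public static void main(String[] args) {
        int pageSize = 4096;
        int request = 10000;
        // Round the request up to a whole number of pages.
        int pages = (request + pageSize - 1) / pageSize;   // 3 pages
        int allocated = pages * pageSize;                  // 12288 bytes handed out
        int wasted = allocated - request;                  // 2288 bytes of internal fragmentation
        System.out.println(pages + " " + allocated + " " + wasted);
    }
}
```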

Considering the following memory layout, within one segment:


| main program code | data | library |
In what way is this segment order vulnerable to a buffer overflow attack? How would
segmentation with execute only and read only bits provide some protection? (2013, Q4)
A buffer overflow attack could fill the data section and overflow into the library section, overwriting library code with attacker-supplied content. With segmentation, marking the code and library segments execute-only (and therefore not writable) means the overflowed data cannot corrupt them, while a read-only/no-execute data segment stops injected data being run as code.

Name five types of information maintained by a process control block. (2012Resit, Q1 &
2014Mock, Q1)
1) Registers
2) Process Number
3) Program Counter
4) Process State
5) List of Open Files
More: memory limits, CPU/real time used, scheduling information, time limits.
How are process control blocks (PCBs) used in a CPU switch? (2012Resit, Q1)
During a CPU switch, the state of the outgoing process (registers, program counter, and so on) is saved into its PCB, and the state of the incoming process is restored from its PCB, so that it can resume exactly where it left off.
What is a Translation Look-Aside Buffer (TLB) and how can it be used to reduce the access-time overheads of using paged memory? How does the specialised memory used for TLBs
differ to conventional memory? (2012Resit, Q3)
A translation look-aside buffer is a cache of the page table that contains the most recently
accessed page translations. It reduces overheads because, on a hit, there is no need for the extra memory access into the page table. TLBs use associative memory implemented in hardware, which is much faster than conventional memory.

Considering the page table below, what are the physical memory addresses for each of the
following logical addresses? Note any that are invalid. (2012Resit, Q3)

Page | Base Address | Length
0    | 16384        | 400
1    | 4096         | 4096
2    | 32768        | 810
3    | 20480        | 1024

(a) 3,15
(b) 0,51
(c) 1,4096
(d) 0,1
(e) 2,101

(a) 20480 + 15 = 20495
(b) 16384 + 51 = 16435
(c) invalid (offset 4096 exceeds the last valid offset, 4095)
(d) 16384 + 1 = 16385
(e) 32768 + 101 = 32869

Pseudocode for solving this question, using (x, y) as an example:

Look at the first value in the pair. Here, it is x. This is the page number.
Look at the second value in the pair. Here, it is y. This is the offset within the page.
IF y > ([the length of page x] − 1), then the address is invalid.
ELSE add y to the Base Address of page x, and that is the physical address.
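The pseudocode can be sketched in Java, using the base addresses and lengths from the table (held here in parallel arrays for illustration, assuming pages are numbered 0–3 in row order):

```java
public class PagedLookup {
    // Base addresses and lengths for pages 0-3, taken from the table above.
    static final int[] BASE   = {16384, 4096, 32768, 20480};
    static final int[] LENGTH = {400, 4096, 810, 1024};

    // Returns the physical address for logical address (page, offset),
    // or -1 if the address is invalid.
    static int translate(int page, int offset) {
        if (page < 0 || page >= BASE.length) return -1;
        if (offset < 0 || offset > LENGTH[page] - 1) return -1;  // offset must fit in the page
        return BASE[page] + offset;
    }

    public static void main(String[] args) {
        System.out.println(translate(3, 15));    // 20495
        System.out.println(translate(1, 4096));  // -1 (offset equals the length)
        System.out.println(translate(0, 1));     // 16385
    }
}
```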
What is the role of the mode bit and how does it change throughout the process of
handling a software interrupt? (2012Resit, Q4)
The mode bit signifies the privilege level of the code being run, i.e. whether the CPU is in kernel mode (0) or user mode (1). When a software interrupt occurs, the CPU traps to the kernel: the mode bit is switched to 0 so that the kernel can handle the interrupt, and once the handling ends the mode bit is returned to 1 and the user process resumes.
What is a process's working set? (2012Resit, Q4)
The working set represents the areas of memory that the process is currently using.

What is thrashing? What is its effect upon the amount of useful work performed on the
CPU, and why? (2012Resit, Q4)
If the sum of the processes' working sets exceeds the available memory, page faults become frequent. While processes wait for pages to be loaded, the OS switches to other processes, which fault in turn, triggering even more page faults. The CPU ends up spending most of its time paging rather than doing useful work; this is thrashing.
What is context switching? What steps does it involve? State one advantage and one
disadvantage of context switching. (2014Mock, Q1)
When the OS changes which process is running, it's called a context switch. The old (leaving) process has its register values and other details saved into its PCB, which is then stored. The new (arriving) process has its register values copied from its PCB into the registers, and is then started. This process is an overhead, taking time in which the CPU does nothing useful; however, it allows multiple processes to share the CPU.
What are the two phases of scheduling? (~)
Long term scheduling, which is infrequent, and takes a process from the new state to the ready
state; and short term scheduling, which is more frequent, and takes a process from the ready state
and starts it running.
Briefly define the following, stating what information each of them contains: a program, a
process, and a process control block. (2014Mock, Q1)
A program is an unchanging executable set of instructions.
A process is a program at some point of its execution.
A process control block is maintained by the operating system, and contains a process state, a
process number, a program counter, memory limits, registers etc.
Describe the functions wait() and signal() in a busy-wait semaphore, and explain how they
can be used to solve the critical section problem. (2014Mock, Q2)
The wait() function allows a thread to request a permit from the semaphore. If the semaphore's integer value is 0, which means it has no permits, a spin-lock is triggered so that the thread waits until the semaphore has permits. Once a permit is available, the thread takes it, the semaphore's value is decremented, and the thread enters the critical section. Once it exits the critical section, the thread calls signal(), which increments the value, returning the permit. This solves the critical section problem because, if the semaphore's integer value is initialised to 1, only one thread can be in the critical section at a time.
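A minimal busy-wait semaphore sketch in Java (an illustration built on AtomicInteger, not the java.util.concurrent.Semaphore API), where acquire() plays the role of wait() and release() of signal():

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SpinSemaphore {
    private final AtomicInteger permits;

    public SpinSemaphore(int initial) {
        permits = new AtomicInteger(initial);
    }

    // wait(): spin until a permit can be taken atomically.
    public void acquire() {
        while (true) {
            int p = permits.get();
            if (p > 0 && permits.compareAndSet(p, p - 1)) return;  // took a permit
            // otherwise keep looping: this loop is the spin-lock
        }
    }

    // signal(): return a permit.
    public void release() {
        permits.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        SpinSemaphore sem = new SpinSemaphore(1);  // value 1: binary semaphore
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 100000; i++) {
                sem.acquire();
                counter[0]++;          // critical section
                sem.release();
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter[0]);  // 200000: no lost updates
    }
}
```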
Why might a busy-wait semaphore be an inefficient way (in terms of CPU utilisation) to
solve the critical section problem? (2014Mock, Q2)
While the spin-lock is active, the waiting thread keeps the CPU busy running the while loop, doing no useful work until a permit becomes available.
Explain how a blocked-queue semaphore differs from a busy-wait semaphore (including
changes to the semaphore functions) and why these difference avoid the above efficiency
problems that busy-wait semaphores suffer from. (2014Mock, Q2)
In a blocked-queue semaphore, a thread that calls wait() when no permits are available is added to the semaphore's queue of blocked threads and suspended, instead of spinning. Whenever a thread returns a permit with signal(), the thread at the head of the queue is woken and given the permit. Because blocked threads sleep rather than spin, they consume no CPU time while waiting, which avoids the inefficiency above. We also achieve the bonus of eliminating starvation: since the waiting threads form a queue, each of them is eventually woken and run.
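For contrast with the spin version, a sketch of a blocking semaphore built on Java's monitor wait()/notify(): a thread that finds no permits is suspended in the monitor's wait set instead of burning CPU. (Note that notify() does not guarantee FIFO order, so unlike a true blocked-queue semaphore this simplified sketch does not by itself rule out starvation.)

```java
public class BlockingSemaphore {
    private int permits;

    public BlockingSemaphore(int initial) {
        permits = initial;
    }

    // wait(): block (not spin) while no permits are available.
    public synchronized void acquire() throws InterruptedException {
        while (permits == 0) {
            wait();               // the thread sleeps, using no CPU while blocked
        }
        permits--;
    }

    // signal(): return a permit and wake one blocked thread.
    public synchronized void release() {
        permits++;
        notify();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingSemaphore sem = new BlockingSemaphore(1);
        int[] counter = {0};
        Runnable task = () -> {
            try {
                for (int i = 0; i < 100000; i++) {
                    sem.acquire();
                    counter[0]++;     // critical section
                    sem.release();
                }
            } catch (InterruptedException ignored) {}
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter[0]);  // 200000: no lost updates
    }
}
```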
In the context of memory management, what is address binding? (2014Mock, Q3)
Address binding is the process of mapping the program's logical (virtual) addresses to the corresponding physical (main memory) addresses.


A process running on a modern desktop needs to load a file from disk into memory. To do
this, it uses system calls. (2014Mock, Q3)
1) What will the mode bit on the CPU be set to when executing the code of the process
itself? How about the code that accesses the disk?
2) How can the use of Direct Memory Access by the OS allow the file to be loaded more
quickly?
3) To try to speed up disk access even further, a programmer makes a new OS in which all
processes can access the disk directly without needing to use system calls. Why is this
a bad idea on a multi-user system?
1) The mode bit will be 1 (user mode) when the CPU is executing the code of the process itself, but 0 (kernel mode) when the code accesses the disk.
2) With DMA, the transfer is handled by a DMA controller, which copies the data between the disk and memory directly. This also leaves the CPU free to do something else while the file loads.
3) Although this is efficient in terms of speed, as it reduces mode switching, it is a bad idea on a multi-user system because the OS could no longer enforce a permission system: any user's process could read or overwrite any other user's directories and files.
What is a memory control block? Give an example of two pieces of data that need to be
stored in each MCB. (2014Mock, Q3)
A memory control block stores information about a region (hole) of memory; for example, it stores the size of the hole and whether it is free or in use.
When using memory paging, fetching data from memory requires 2 memory accesses: one
into the page table to find the frame of memory containing the page, and one into that frame
to fetch the actual data. If a memory access takes 100ns, how long does it take to fetch data
from memory? (2014Mock, Q4)
Two accesses: 100ns + 100ns = 200ns
What's a TLB hit and a TLB miss? (2014Mock, Q4)
A TLB hit occurs if we try accessing a page and find its translation in the TLB. A TLB miss occurs if we don't.
If a memory access takes 100ns, the TLB lookup time is 1ns, and 90% of memory accesses
lead to a TLB hit, what is the effective access time when using a TLB? (2014Mock, Q4)
1ns + [(90/100) × 100ns] + [(10/100) × 200ns] = 111ns
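The same calculation, grouped per access (hit: TLB lookup + one memory access; miss: TLB lookup + page-table access + data access), as a small sketch:

```java
public class EffectiveAccessTime {
    public static void main(String[] args) {
        int mem = 100;       // one memory access (ns)
        int tlb = 1;         // TLB lookup (ns)
        int hitPct = 90;     // TLB hit ratio, in percent
        // Hit cost: tlb + mem = 101ns. Miss cost: tlb + mem + mem = 201ns.
        // Work in hundredths to keep the arithmetic exact.
        int eatHundredths = hitPct * (tlb + mem) + (100 - hitPct) * (tlb + 2 * mem);
        System.out.println(eatHundredths / 100.0 + "ns");  // 111.0ns
    }
}
```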
What is the optimal page replacement algorithm? Why can't we use it? (2014Mock, Q4)
The optimal page replacement algorithm (OPT) chooses as victim the page that will not be used for the longest time, i.e. the resident page whose next reference lies farthest in the future. We can't use it because we cannot predict the future reference string.
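A sketch of the victim choice OPT would make, given a hypothetically known future reference string: evict the resident page whose next use is farthest away, or that is never used again.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class OptVictim {
    // Returns the page in 'resident' whose next reference in 'future'
    // (searching from index 'now') is farthest away; pages that are
    // never referenced again win outright.
    static int chooseVictim(Set<Integer> resident, int[] future, int now) {
        int victim = -1, farthest = -1;
        for (int page : resident) {
            int next = Integer.MAX_VALUE;              // "never used again"
            for (int i = now; i < future.length; i++) {
                if (future[i] == page) { next = i; break; }
            }
            if (next > farthest) { farthest = next; victim = page; }
        }
        return victim;
    }

    public static void main(String[] args) {
        Set<Integer> resident = new LinkedHashSet<>(Arrays.asList(1, 2, 3));
        int[] future = {4, 2, 1, 3, 2};  // reference string from the fault onwards
        // Page 2 is next used at index 1, page 1 at index 2, page 3 at index 3,
        // so page 3 is the one used farthest in the future: evict it.
        System.out.println(chooseVictim(resident, future, 0));  // 3
    }
}
```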


Other Notes, Topics & Answers


Page fault replacement algorithms come up in 2013, 2012Resit.
Memory partition allocation algorithms come up in 2014Mock, 2012Resit.
2012Resit Q2b(iv) answer:
boolean transfer(Account from, Account to, int amount) {
    // Transferring to the same account is a no-op; report success.
    if (from.getIBAN().equals(to.getIBAN())) {
        return true;
    }
    // Impose a global lock order (by IBAN) so that two concurrent transfers
    // between the same pair of accounts cannot deadlock.
    boolean fromFirst;
    try {
        int c = from.getIBAN().compareTo(to.getIBAN());
        if (c > 0) {
            fromFirst = true;
            from.getMutex().acquire();
            to.getMutex().acquire();
        } else {
            fromFirst = false;
            to.getMutex().acquire();
            from.getMutex().acquire();
        }
    } catch (InterruptedException e) {
        return false;
    }
    from.withdraw(amount);
    to.deposit(amount);
    if (fromFirst) {
        from.getMutex().release();
        to.getMutex().release();
    } else {
        to.getMutex().release();
        from.getMutex().release();
    }
    return true;
}
