
Section 1: The operating system runs in kernel mode, and the rest of the system runs in user mode.

Multiplexing (sharing resources): done in two different ways, in time and in space. When a resource is time-multiplexed, different programs or users take turns using it.

Two jobs that the OS has:
o Resource manager: managing hardware resources. Modern operating systems allow several programs to run in parallel, so the OS has to provide an orderly and controlled allocation of the processors and memory.
o Extended machine: providing application programmers with a clean set of resources instead of the messy hardware.

Multithreading: allows the CPU to hold the state of two different threads and then switch back and forth on a nanosecond time scale. For example, if one of the processes needs to read a word from memory, which takes many clock cycles, a multithreaded CPU can just switch to another thread.
Note: multithreading does not offer true parallelism; only one process is running at a time, but the thread-switching time is reduced to the order of nanoseconds. A thread is a kind of lightweight process, which in turn is a running program.

Processes: a process is basically a program in execution. Associated with each process is its address space: a list of memory locations from 0 to some maximum which the process can read and write.


Sometimes the OS will suspend a process temporarily. All the information about each process, other than the contents of its own address space, is stored in an operating system table called the process table.

Interprocess communication: related processes that are cooperating to get some job done often need to communicate with one another and synchronize their activities; this communication is called interprocess communication.

Address space: every computer has some main memory that it uses to hold executing programs. Usually multiple programs are allowed to be in memory at the same time, and the OS has to keep them from interfering with each other. Every process has some set of addresses that it can use. If the amount is small, it fits into main memory; if it is large, virtual memory has to be used, where the address space is decoupled from the machine's physical memory.

Files: the file hierarchy is organized as a tree, with directories at the nodes and files as the leaves. The path of a file specifies the route from the root directory to the file; a path is a list of directory names.

Layered structure: here the OS is organized as a hierarchy of layers, each one constructed upon the one below it.
o Advantage: layer i+1 can use the services of layer i.
o For example, layer 0 schedules the processes; above layer 0 the system consists of sequential processes.

Microkernel:
The kernel is kept as small as possible. This is used for systems that have to be highly reliable; it is often used in real-time systems. The work of the OS is split into well-defined modules: the microkernel, which runs in kernel mode, and the rest, the service processes, which run as normal processes in user mode.

Client and server mode: communication between clients and servers is often done via message passing. To obtain a service, the client sends a message to the server saying what it wants; the server does the work and sends back the answer.

The CPU has a set of registers:
o Program counter: contains the memory address of the next instruction.
o Stack pointer: points to the top of the current stack in memory.
o PSW (program status word): contains control bits for the CPU; it is used in system calls and shows whether kernel mode is active or not.

In kernel mode the CPU can execute any instruction and use every hardware feature; user mode permits only a subset of the instructions. For example, setting the PSW bit to kernel mode is only permitted in kernel mode.

Trap: the trap instruction switches from user mode into kernel mode.

Kernel: the bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources, mediating between hardware and software.

System calls (use of the trap in the operating system): a trap instruction switches the execution mode of a CPU from user mode to kernel mode. This instruction allows a user program to invoke functions in the operating system kernel.

Key differences between trap and interrupt: a trap is caused by the program and is synchronous with it; if the program is run again and again, the trap will always occur at exactly the same position in the instruction stream. An interrupt, however, is caused by an external event, and its timing is not reproducible.

Why is a process table needed in a timesharing system? The process table is needed to store the state of any process that is currently suspended; such a process could be ready or blocked. No process table is needed in a single-process system, because the single process is never suspended.

Purpose of system calls in an operating system: a system call allows a user process to access and execute operating system functions inside the kernel. User programs use system calls to invoke operating system services.

Section 2: Processes and threads. The most central concept in any operating system is the process.

Multiprogramming: rapid switching of the CPU back and forth from one process to another is multiprogramming.

Four principal events that cause processes to be created:
o System initialization: when the system is booted, usually several processes are started; some interact with humans, some run in the background.
o Execution of a process-creation system call: a running process can issue system calls to create processes to help it; for example, fetching data from a disk and processing that data can be done by two processes.
o A user request to create a new process, like starting a program.
o Initiation of a batch job.

Four types of exit conditions:
o Normal exit:

When the work is finished; this is done by a system call.
o Error exit: for example, an error pop-up for a nonexistent program.
o Fatal error: for example, dividing a number by 0.
o Killed by another process: for example, the kill function in Linux.

There are three process states:
o Running: the process is using the CPU.
o Ready: the job is waiting for the CPU.
o Blocked: the job is blocked and waits for something, like an input.

Interrupt vector: the memory address of an interrupt handler, or an index into an array called the interrupt vector table that contains the memory addresses of interrupt handlers. When an interrupt is generated, the OS saves the state via a context switch; the registers and memory map are loaded from the process table.

A process is an activity of some kind: it has a program, input, output, and a state. A single processor may be shared among several processes, with some scheduling algorithm being used to determine when to stop work on one process and service a different one.

Daemons: processes that stay in the background to handle some activity such as email, web pages, news, printing, and so on are called daemons.

What the process does is execute a system call to create the new process. This system call tells the operating system to create a new process and indicates, directly or indirectly, which program to run in it. In both operating systems, after a process is created the parent and child have their own distinct address spaces: if either process changes a word in its address space, the change is not visible to the other process. A process is an independent entity with its own program counter and internal state.

When a process blocks, it does so because logically it cannot continue, typically because it is waiting for input that is not yet available. It is also possible for a process that is conceptually ready and able to run to be stopped because the operating system has decided to allocate the CPU to another process for a while.

A process can be in three states: running, ready, or blocked.

In the ready state there is no CPU available for the process; the blocked state is different from the other two in that the process cannot run even if a CPU is free. To implement the process model, the operating system maintains a table (an array of structures) called the process table. The process table holds information about each process:
o Process state
o Program counter
o Stack pointer
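A minimal sketch of the process-creation call described above, using POSIX fork() on a Unix-like system; the variable x is just an illustration that parent and child get distinct copies of the address space:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 10;                 /* copied into the child's address space */
    pid_t pid = fork();         /* the process-creation system call */

    if (pid == 0) {
        x = 99;                 /* change happens only in the child's copy */
        printf("child:  x = %d\n", x);
    } else if (pid > 0) {
        wait(NULL);             /* wait for the child to finish */
        printf("parent: x = %d\n", x);  /* still 10: the child's write is invisible */
    }
    return 0;
}
```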

Multiprogramming lets another process use the CPU when it would otherwise sit idle.

Threads:

In a traditional operating system, each process has an address space and a single thread of control. A thread is a sequential execution stream of instructions.

Processes vs. threads:
o Threads are lighter weight and easier to create and destroy.
o Processes have separate address spaces, whereas threads share their address space.
o Context switching is faster between threads than between processes.
o Threads are great in systems with multiple CPUs: threads enable parallelism.

In the word-processor example, when we use three threads instead of three processes, they share a common memory and thus all have access to the document being edited.

The classical thread model: one way of looking at a process is that it is a way of grouping related resources together.

A process has:
o An address space containing program text and data, as well as other resources. These resources may include open files, child processes, pending alarms, signal handlers, accounting information, and more.

A thread has:
o A program counter that keeps track of which instruction to execute next.
o Registers, which hold its current working variables.
o A stack, which contains the execution history, with one frame for each procedure called but not yet returned from.

Processes are used to group resources together; threads are the entities scheduled for execution on the CPU. Having multiple threads running in parallel in one process is analogous to having multiple processes running in parallel in one computer. Threads can share their resources; processes do not have this ability. Because threads have some of the properties of processes, they are sometimes called lightweight processes.

Multithreading: allowing multiple threads in the same process. When we have different threads in one process:
o All the threads have exactly the same address space, which means that they also share the same global variables.
o Since every thread can access every memory address within the process's address space, one thread can read, write, or even wipe out another thread's stack.

What is private to each thread:
o Program counter, registers, stack, and state.
o Each thread has its own stack.

When multithreading is present, processes normally start with a single thread. This thread has the ability to create new threads by calling a library procedure. It is not necessary (or even possible) to specify anything about the new thread's address space, since it runs in the address space of the creating thread.

Problems with threads:
o What happens if a process calls fork (the parent-and-child example)?
o Another class of problems is related to the fact that threads share many data structures. What happens if one thread closes a file while another one is still reading from it?

POSIX threads: each thread has an identifier, a set of registers (including the program counter), and a set of attributes, which are stored in a structure.
o The attributes include the stack size, scheduling parameters, and so on.

There are two ways to implement a threads package:
o In user space
o In the kernel
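Either way, a user program typically sees threads through a library interface. A minimal sketch using POSIX pthreads (the worker function and its argument are illustrative):

```c
#include <stdio.h>
#include <pthread.h>

void *worker(void *arg) {
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  /* the library procedure that creates a thread */
    pthread_join(tid, NULL);                  /* wait for the thread to terminate */
    return 0;
}
```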

User space: the kernel knows nothing about the threads; as far as the kernel is concerned, it is managing ordinary single-threaded processes. Each process has its own thread table to keep track of its threads, and the runtime system has to do the scheduling.

Advantages of implementing threads in user space:


o A user-level threads package can be implemented on an operating system that does not support threads.
o They allow each process to have its own customized scheduling algorithm.
o Thread switching is very fast, especially if the machine has an instruction to save all the registers.

Disadvantages of implementing threads in user space:
o If a thread makes a blocking system call, the whole process will block; parallelism is therefore gone.
o The scheduler has to work without clock interrupts.

One of the main goals of threads is to allow each one to use blocking calls, but to prevent one blocked thread from affecting the others. With user-level threads, if a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU.

Page fault: if the program calls or jumps to an instruction that is not in memory, a page fault occurs, and the operating system will go and get the missing instruction (and its neighbors) from disk. A page fault is a trap to the software, raised by the hardware, when a program accesses a page that is mapped into the virtual address space but not loaded in physical memory.

Implementing threads in the kernel:
o The kernel has the thread table and keeps track of all threads.
o No runtime system is needed, and there is no thread table in each process.


All calls that might block a thread are implemented as system calls, at considerably greater cost than a call to a runtime-system procedure. If a thread blocks, the scheduler can then run a thread from the same process or from another process.

Advantages of implementing threads in the kernel:
o There are no problems with blocking system calls.
o Scheduling can be done by the OS.

Disadvantage of implementing threads in the kernel: the cost of system calls is much higher.

Interprocess communication: processes frequently need to communicate with each other. The issues in interprocess communication:
o How one process can pass information to another.
o Making sure two or more processes do not get in each other's way.
o Proper sequencing when dependencies are present: for example, if process A produces data and passes it to process B to print, B needs to wait until A has passed the data.

Race condition: a situation where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when.

Critical regions: we need mutual exclusion, that is, some way of making sure that if one process is using a shared variable or file, the other processes will be excluded from doing the same thing. A critical section (critical region) is the part of the program where shared memory is accessed.

Four conditions for a good solution:

o No two processes may be simultaneously inside their critical regions.
o No assumptions may be made about speeds or the number of CPUs.
o No process running outside its critical region may block other processes.
o No process should have to wait forever to enter its critical region.

Mutual exclusion with busy waiting:
o Mutual exclusion: the idea is to ensure that only one process enters its critical region at a time.
o Busy waiting is a technique in which a process repeatedly checks whether a condition is true.

Disabling interrupts: on a single-processor system, the simplest solution is to have each process disable all interrupts just after entering its critical region and re-enable them just before leaving it.
o Disadvantage: does not help if the system has multiple CPUs.

Lock variables: 0 means that no process is in its critical region; 1 means that some process is in its critical region.
o Disadvantage: a process could read the lock as 0 and then be descheduled before setting it, letting two processes enter at once.

Strict alternation:
o Here we assume we have only two processes and a variable turn, whose value can be 0 or 1.
o Turn = 0 means process A can go into its critical region; after leaving the critical region, it sets turn to 1.
o Turn = 1 means process B can go into its critical region; after leaving the critical region, it sets turn to 0.
o Advantage: the approach works, and it can be generalized to several processes.
o Disadvantage: the processes have to take turns.

o Also, after process A has left the critical region, it can re-enter only after B has been through its own critical region, and this might never happen.

Testing a variable until some value appears is called busy waiting.
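A minimal sketch of strict alternation as busy-waiting code in C; critical_region and noncritical_region are placeholder stubs, and each of the two processes is assumed to run process() concurrently with self = 0 or 1:

```c
#include <stdio.h>

volatile int turn = 0;   /* whose turn it is to enter the critical region */

static void critical_region(int self)    { printf("process %d in critical region\n", self); }
static void noncritical_region(int self) { (void)self; /* work that does not touch shared data */ }

void process(int self) {             /* self is 0 or 1 */
    for (;;) {
        while (turn != self)         /* busy waiting: spin until it is our turn */
            ;                        /* this spinning wastes CPU time */
        critical_region(self);
        turn = 1 - self;             /* hand the turn to the other process */
        noncritical_region(self);
    }
}
```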

TSL instruction: the CPU executing the TSL instruction locks the memory bus to prohibit other CPUs from accessing memory until it is done. What the instruction does:
o Copies the old value of lock into a register and then sets lock to 1, as one indivisible action.

Busy waiting can also cause priority problems, so there are sleep and wakeup calls:
o Sleep is a system call that causes the caller to block, that is, be suspended until another process wakes it up.
o The wakeup call has one parameter: the process to be awakened.

Semaphores: use an integer variable to count the number of wakeups saved for future use (for example, value = 1).
o A semaphore could have the value 0, indicating that no wakeups were saved, or some positive value if one or more wakeups are pending.
o How it works: there are two operations, down and up.
  The down operation on a semaphore checks whether the value is greater than 0; if so, it decrements the value (now value = 0) and just continues.
  If the value is 0, the process is put to sleep without completing the down for the moment.

o Checking the value, changing it, and possibly going to sleep are all done as a single indivisible atomic action.
o It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed.
o Up and down can be implemented as system calls.
o This avoids race conditions.
o Summary: processes can synchronize with each other via semaphores. A process checks the semaphore value. If the value is positive, the process can enter the shared (critical) region; in doing so it takes one off the semaphore's value to show that the semaphore is in use. If the value of the semaphore is 0, the process goes to sleep until the semaphore gets a positive value. When a process is done with the shared region, it adds one to the semaphore, and a sleeping process is woken up.

Atomic actions: a group of related operations that are either all performed without interruption or not performed at all.

Producer and consumer problem: when the producer wants to put something in the buffer but the buffer is full, the producer goes to sleep; and when the consumer wants to take something out and there is nothing in the buffer, the consumer goes to sleep. To keep track of the number of items in the buffer we need a variable count.

Producer and consumer with semaphores: the solution has three semaphores: full, empty, and mutex.
o Full: counts the number of used slots.
o Empty: counts the number of empty slots.
o Mutex: makes sure the producer and consumer do not access the buffer at the same time.
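A minimal sketch of this solution using POSIX semaphores; the buffer size N and the integer items are assumptions for illustration, and the loops run forever as in the classic formulation:

```c
#include <pthread.h>
#include <semaphore.h>

#define N 100                  /* number of slots in the buffer (assumed) */

sem_t mutex;                   /* binary semaphore: guards buffer access */
sem_t empty;                   /* counts empty slots, initially N        */
sem_t full;                    /* counts used slots, initially 0         */
int buffer[N], in = 0, out = 0;

void *producer(void *arg) {
    (void)arg;
    for (int item = 0; ; item++) {
        sem_wait(&empty);      /* down(empty): wait for a free slot      */
        sem_wait(&mutex);      /* down(mutex): enter critical region     */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);      /* up(mutex): leave critical region       */
        sem_post(&full);       /* up(full): one more item available      */
    }
}

void *consumer(void *arg) {
    (void)arg;
    for (;;) {
        sem_wait(&full);       /* down(full): wait for an item           */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);      /* up(empty): one more free slot          */
        (void)item;            /* the item would be used here            */
    }
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);     /* never returns: the loops are infinite  */
    pthread_join(c, NULL);
    return 0;
}
```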

Sharing memory with semaphores: semaphores can be stored in the kernel and accessed via system calls; many operating systems allow processes to share some of their memory space.

The mutex: a simplified version of a semaphore, used when counting is not needed. It is designed to guarantee that only one process at a time will be reading or writing the buffer and the associated variables. It has only a locked and an unlocked state:
o If it is unlocked, the calling thread is free to enter the region.
It is used when a process or thread wants to enter the critical region. Mutexes can be implemented in user space if a TSL instruction is available.
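For instance, the pthreads mutex interface follows this locked/unlocked model (the shared counter is just an illustrative variable):

```c
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;            /* the shared data being protected */

void *increment(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);     /* blocks if another thread holds the lock */
    shared_counter++;              /* critical region: one thread at a time   */
    pthread_mutex_unlock(&lock);
    return NULL;
}
```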

Monitors: a language concept that makes it easier to write correct programs. A monitor is a collection of procedures, variables, and data structures. Only one process can be active in a monitor at any instant.

Semaphore synchronization: the full and empty semaphores are needed to guarantee that certain event sequences do or do not occur. In the producer-consumer case, they ensure that the producer stops running when the buffer is full and that the consumer stops running when it is empty. This use is different from mutual exclusion.


Message passing: a form of communication used in parallel computing and interprocess communication. In this model, processes or objects can send and receive messages to other processes.

Problem with message passing: messages can be lost by the network. To guard against lost messages, sender and receiver can agree that as soon as a message has been received, the receiver will send back a special acknowledgement message. If the sender has not received the acknowledgement within a certain time interval, it retransmits the message.

Scheduling: the scheduler decides which jobs to run. Scheduling applies both to threads and to processes. If only one CPU is available, a choice has to be made about which process to run next; the part of the operating system that makes the choice is called the scheduler. When the kernel manages threads, scheduling is usually done per thread, with little or no regard to which process the thread belongs.

When to do scheduling:
o A new process is created
o A process exits
o A process blocks
o An I/O interrupt occurs
o The time quantum of a process is over

When a new process is created, a decision needs to be made whether to run the parent process or the child process. Since both processes are in the ready state, it is a normal scheduling decision and can go either way.


A scheduling decision must also be made when a process exits. That process can no longer run (since it no longer exists), so some other process must be chosen from the set of ready processes. If no process is ready, a system-supplied idle process is normally run.

When a process blocks on I/O, on a semaphore, or for some other reason, another process has to be selected to run. When an I/O interrupt occurs, a scheduling decision may also be made.

Categories of scheduling algorithms:
Batch systems: long time periods for each process are often acceptable. This approach reduces process switches and thus improves performance.
Batch systems: first-come, first-served:
o All jobs are stored in a queue.
o The first job in the queue is scheduled.
o When the running job blocks, the first job of the queue is scheduled, and the blocked job is put at the end of the queue.

Advantages of first-come, first-served:
o Easy to program and easy to understand, and it is also fair.

Batch systems: shortest job first: when several equally important jobs are sitting in the input queue waiting to be started, the scheduler picks the shortest job first. Shortest job first is only optimal when all the jobs are available simultaneously.
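A quick worked example (run times assumed for illustration): four jobs with run times 8, 4, 4, and 4 minutes arrive together. Running them in that order gives turnaround times of 8, 12, 16, and 20 minutes, an average of 14. Shortest job first runs them as 4, 4, 4, 8, giving turnaround times of 4, 8, 12, and 20, an average of 11.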

Shortest remaining time next: we assume that we have an estimate of the running time of each job; the scheduling rule is to always run the process whose remaining run time is shortest. Disadvantage: long jobs can wait in the queue for ages.


Round-robin scheduling: each process is assigned a time interval, called its quantum, during which it is allowed to run. If the process is still running at the end of the quantum, the CPU is preempted and given to another process; if the process has blocked or finished before the quantum has elapsed, the CPU switching is done when the process blocks.
o The CPU is assigned to each process in equal portions and in circular order.
o One issue is the length of the quantum: switching from one process to another requires a certain amount of time for administration. Setting the quantum too short causes too many process switches and lowers CPU efficiency, but setting it too long may cause poor response to short interactive requests (see the worked example after this list).
o Round-robin scheduling makes the implicit assumption that all processes are equally important.

Batch process: a batch process is one that runs without user interaction, such as the execution of a script; often batch processes are run automatically by some kind of scheduler or in response to submission into a queue.

Priority scheduling: each process is assigned a priority, and the runnable process with the highest priority is allowed to run. When does a process switch occur?
o To prevent high-priority processes from running indefinitely, the scheduler may decrease the priority of the currently running process at each clock tick. If this action causes its priority to drop below that of the next highest process, a process switch occurs.
o Each process may be assigned a maximum time quantum that it is allowed to run; when this quantum is used up, the next highest priority process is given a chance to run.

Multiple queues: processes are grouped into priority classes, each with its own queue.
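A quick worked example on quantum length (numbers assumed for illustration): if a process switch takes 1 ms of administration and the quantum is 4 ms, then 1 ms out of every 5 ms is overhead, wasting 20% of the CPU. Raising the quantum to 100 ms cuts the overhead to under 1%, but a short interactive request may then have to wait behind several full quanta.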

Shortest process next: the problem with this is figuring out which of the currently runnable processes is the shortest one.
o One approach is to make estimates based on past behavior and run the process with the shortest estimated running time. The technique of estimating the next value in a series by taking the weighted average of the current measured value and the previous estimate is sometimes called aging.

Guaranteed scheduling: make promises to the user about performance and then live up to those promises.
o If there are n users logged in while you are working, you will receive about 1/n of the CPU power.
o To make good on this promise, the system must keep track of how much CPU time each process has had since its creation.
o The algorithm is to run the process with the lowest ratio (of CPU time consumed to CPU time entitled) until its ratio has moved above that of its closest competitor.

Lottery scheduling: whenever a scheduling decision has to be made, a lottery ticket is chosen at random, and the process holding that ticket gets the resource. Processes are each assigned some number of lottery tickets, and the scheduler draws a random ticket to select the next process.
o Advantage: very flexible; an important job gets more tickets, a less important one gets fewer.
o Disadvantage: the choice is random.
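A minimal sketch of the draw (the proc structure and the caller-supplied ticket total are assumptions for illustration):

```c
#include <stdlib.h>

struct proc { int tickets; };              /* each process holds some tickets */

/* Returns the index of the process holding the randomly drawn ticket. */
int lottery_pick(struct proc *p, int nproc, int total_tickets) {
    int winner = rand() % total_tickets;   /* draw a ticket at random */
    int counter = 0;
    for (int i = 0; i < nproc; i++) {
        counter += p[i].tickets;           /* walk until we pass the winner */
        if (winner < counter)
            return i;                      /* this process holds the ticket */
    }
    return -1;                             /* unreachable if totals match */
}
```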

Scheduling in real-time systems: the system should produce a result within a specific time. Real-time systems are generally categorized as hard real time:
o Meaning there are absolute deadlines that must be met, or else.

Soft real time:
o Meaning that missing an occasional deadline is undesirable but nevertheless tolerable.

The events that a real-time system may have to respond to can be further categorized as periodic or aperiodic. A real-time system that meets the criterion that the CPU can handle all the event streams in time is said to be schedulable. Real-time scheduling algorithms can be static or dynamic:
o Static scheduling only works when there is perfect information available, for example for periodic tasks.

Static scheduling works when:
o There is perfect information available, for example for periodic tasks.
Real-time behavior is usually achieved by dividing the task into a number of processes.

Thread scheduling: recall that thread scheduling is the part of the OS responsible for sharing the available CPUs between the various threads. Thread scheduling tends to be based on at least the following:
o A priority.
o A quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allocated before it is forced to make way for another thread of the same or lower priority.

Thread scheduling at user level:
o The kernel knows nothing about the threads; it does not even know that they exist.
o The process decides which thread should run; this could be one thread or several threads in turn.
o Implemented in user-level libraries rather than system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel.

Thread scheduling at kernel level:

o The kernel schedules the threads.
o The kernel knows a lot about the threads, so it can give more time to a process having a large number of threads than to a process having a small number of threads.
o Disadvantage: slow and inefficient.

The major difference between user-level threads and kernel-level threads is performance. Doing a thread switch with user-level threads takes a handful of machine instructions; with kernel-level threads it requires a full context switch, changing the memory map and invalidating the cache, which is several orders of magnitude slower. On the other hand, with kernel-level threads, having a thread block on I/O does not suspend the entire process as it does with user-level threads.

Rate monotonic scheduling: the classical scheduling algorithm for preemptable, periodic processes.
o It can be used for processes that meet the following conditions:
  Each periodic process must complete within its period.
  No process depends on any other process.
  All non-periodic processes have no deadlines.

Earliest deadline first:
o The scheduler holds a list of all ready jobs, sorted on deadline, and schedules the job with the closest deadline.
o If a job arrives with a deadline earlier than the deadline of the running job, the running job is preempted.
If we have processes with several threads, we have parallelism on multiple layers.

Information needed for process switching without interrupts: the solution could be a register containing a pointer to the current process table entry.


When the I/O completed, the CPU would store the current machine state in the current process table entry; then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry.

Why are interrupt handlers written in assembly language? Generally, you cannot get access to the CPU hardware from a high-level language, and an interrupt handler may be required to enable and disable the interrupt servicing a particular device.

When an interrupt or a system call transfers control to the operating system, a kernel stack area separate from the stack of the interrupted process is generally used. Why?
o You do not want the operating system to crash because a poorly written user program did not allow enough stack space.
o If the kernel left stack data in a user program's memory space upon return from a system call, a malicious user might be able to obtain information about other processes.

If we use user-level threads, a blocking call could end up blocking the whole process; kernel threads are used to permit some threads to block without affecting the others.

Why would a thread ever voluntarily give up the CPU by calling a yield procedure? When there is no periodic clock interrupt, it may never get the CPU back otherwise. Threads within a process cooperate; if yielding is good for the application, a thread will yield.

Can a thread ever be preempted by a clock interrupt? Under what circumstances? User-level threads cannot be preempted by the clock unless the whole process's quantum has been used up. Kernel-level threads, however, can be preempted individually: if a thread runs for too long, the clock will stop the currently running thread, and the kernel is free to pick another one, possibly from the same process.

What is the biggest advantage of implementing threads in user space, and what is the biggest disadvantage? The biggest advantage is efficiency: no traps to the kernel are needed to switch threads. The disadvantage: if a thread blocks, the entire process blocks.

In a system with threads, is there one stack per thread or one stack per process? Each thread must have its own stack for its local variables; this holds for both user-level and kernel-level threads.

If a system has only two processes, does it make sense to use a barrier to synchronize them? With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. With user-level threads, when a thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. In Linux, processes can be connected by pipes.

After the midterm. Chapter 3: Memory management. The part of the operating system that manages the memory hierarchy is called the memory manager. Its job is to keep track of which parts of memory are in use and which are not, to allocate memory to processes when they need it and deallocate it when they are done, and to manage swapping between main memory and disk when main memory is too small to hold all the processes.

Jobs of memory management:
o Keep track of which parts of memory are used and which are not.
o Allocate memory when needed.
o Deallocate memory when processes are done.

Address space: the set of addresses that a process can use to address memory. Usually each process has its own address space.


The reason for having address spaces is to let multiple applications be in memory at the same time without interfering with each other.

A basic way of mapping address spaces onto physical memory:
o Programs are loaded into consecutive memory blocks; for every program stored in a block of memory there is a begin and an end.
o The CPU has two additional registers, base and limit, which contain the memory information of the running process.
o When a process is running, base is loaded with the physical address where the program's address space begins, and limit is loaded with the length of the address space.
o Whenever the process refers to memory in its address space, the CPU automatically adds the contents of the base register to the address.

Disadvantages of this approach:
o The base register has to be added on every memory reference, which is expensive.
o The address space has to be stored in consecutive blocks.
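A sketch of what the hardware does on every memory reference under this scheme; the base and limit values are made up for illustration:

```c
#include <stdio.h>
#include <stdlib.h>

unsigned long base  = 0x40000;   /* assumed: where the process begins in RAM */
unsigned long limit = 0x10000;   /* assumed: length of its address space     */

/* Base-and-limit translation: check against limit, then add base. */
unsigned long translate(unsigned long vaddr) {
    if (vaddr >= limit) {        /* out of bounds: the CPU would fault here */
        fprintf(stderr, "protection fault at 0x%lx\n", vaddr);
        exit(1);
    }
    return base + vaddr;         /* the addition done on every reference */
}

int main(void) {
    printf("virtual 0x100 -> physical 0x%lx\n", translate(0x100));
    return 0;
}
```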

Physical memory is also too small to hold all active processes, so some of the data needs to be stored elsewhere. There are two ways: swapping and virtual memory.

Both work by moving processes back and forth between main memory and disk during execution.

Swapping: consists of bringing in each process in its entirety, running it for a while, then putting it back on the disk.
o A subset of the active processes is held in main memory.
o Each process is entirely in main memory or not at all.

o If there is not enough space for a new process, some process needs to be swapped out.
o The area in which a process is stored can change over time, so its addresses have to be adjusted as well.

Problems with this method:
o Swapping processes in and out creates holes in the memory; combining all the holes is called memory compaction, and it is expensive.
o If the data segment of a process grows, its reserved data space might be too small.

Managing and keeping track of memory usage: bitmaps and free lists.

Bitmap:
o Memory is divided into allocation units (blocks); the bitmap is a vector whose length is the number of allocation units.
o The i-th bit of the bitmap corresponds to the i-th allocation unit: if the entry is 1, the unit is taken; if the entry is 0, the unit is free.

Linked lists:
o Consecutive blocks of empty memory are grouped together as a block.
o Each list entry contains the address where it starts, the length of the block, and a pointer to the next entry.

Virtual memory: the basic idea behind virtual memory is that each program has its own address space, which is broken up into chunks called pages; each page is a contiguous range of addresses. These pages are mapped onto physical memory, but not all pages have to be in physical memory to run the program.

Virtual memory: allows programs to run even when they are only partially in main memory. (The older approach was to split a program into pieces called overlays and swap them in and out of memory by hand; virtual memory does the equivalent automatically.)

How it works:
o The address space of every program is broken up into pages; a page is a contiguous range of addresses.
o These pages are mapped onto physical memory, but not all pages have to be in main memory; the others can be on the hard disk.
o If the program accesses data that is not in physical main memory, the OS will fetch the missing pages.

Memory management unit (MMU): has an entry for every virtual page, which gives the number of its page frame. When the program wants to access a memory cell, the MMU calculates the page in which the virtual address falls via the map. If the page is in memory, the MMU translates the address and puts it onto the bus; if not, a page fault occurs.

Page fault: when a process tries to access a page that is not available in main memory, the CPU traps to the operating system; this trap is called a page fault.

Speeding up paging: the execution speed of the CPU depends on the time needed to get instructions and data out of memory. If the page table is kept in main memory, an extra memory access is needed for every reference, which hurts performance. There are two problems:
Efficiency:

o The conversion from virtual to physical address has to be done for every instruction.
o Storage space: the page table can be large.

TLB (translation lookaside buffer): to speed up the lookup process, computers are equipped with a small hardware device for mapping virtual addresses to physical addresses without going through the page table; this device is called the TLB.
o The TLB contains a subset of the page table; the rest is in main memory.
o The TLB is a small memory inside the CPU that caches page table entries and improves virtual address translation speed.
o Advantage: simple, and it needs little space on the CPU chip.

Page replacement algorithms: a good algorithm avoids removing heavily used pages.

Optimal algorithm:
o The page that will not be used for the longest period of time is the one to be replaced.
o Easy to describe and impossible to implement: label every page with the number of instructions that will be executed before that page is used again, and always remove the page with the biggest counter.
o Problem: at the time of the page fault, the operating system has no way of knowing when each of the pages will be referenced next.

Not recently used (NRU):
o R is set whenever the page is referenced (read or written); M is set when the page is modified.
o When a page fault occurs, the operating system inspects all the pages, divides them into four categories based on the R and M bits, and removes a page at random from the lowest-numbered nonempty class.

FIFO (first in, first out):
o A queue; the advantage is that it is easy to implement.
o The OS maintains a list of all pages currently in memory, with the page at the head of the list the oldest and the page at the tail the most recent arrival. On a page fault, the page at the head is removed and the new page is added to the tail of the list.

Clock replacement algorithm:
o Pages are held in a circular list, and a pointer (the hand) points to the oldest page.
o When a page fault occurs, the page the pointer points at is inspected:
  If R is 0, the page is removed.
  If R is 1, R is cleared and the hand advances.
  The new page is inserted into the clock.
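A sketch of the clock algorithm over such a circular list (the page structure is an assumption for illustration; real kernels store more per entry):

```c
/* One entry in the circular list of in-memory pages. */
struct page {
    int r;               /* reference bit, set by hardware on access */
    int frame;           /* page frame this entry describes          */
    struct page *next;   /* circular list link                       */
};

struct page *hand;       /* the clock hand, pointing at the oldest page */

/* Returns the frame of the evicted page; assumes the list is initialized. */
int clock_evict(void) {
    for (;;) {
        if (hand->r == 0) {          /* not referenced recently: victim */
            int victim = hand->frame;
            hand = hand->next;       /* the new page takes this slot    */
            return victim;
        }
        hand->r = 0;                 /* give it a second chance */
        hand = hand->next;           /* advance the hand         */
    }
}
```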

Least recently used (LRU):
o Assumes that pages heavily used by the last few instructions will probably be heavily used again in the future.
o A list ordered by use is kept in memory: the first page in the list is the most recently used page and the last page is the least recently used one.
o Note that the list has to be updated on every memory access.
o Throws out the page that has been unused for the longest time.

Not frequently used (NFU) / least frequently used:
o A software counter is associated with each page, initially zero. At each clock interrupt, the operating system scans all the pages in memory; for each page, the R bit (which is 0 or 1) is added to its counter. When a page fault occurs, the page with the lowest counter is chosen for replacement.

Aging algorithm:
o The same as NFU, except that the counters are each shifted right 1 bit before the R bit is added in, and the R bit is added to the leftmost rather than the rightmost bit.
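A sketch of the aging update at each clock tick, assuming 8-bit counters and 64 pages for illustration:

```c
#include <stdint.h>

#define NPAGES 64
uint8_t counter[NPAGES];     /* one software counter per page            */
int     rbit[NPAGES];        /* R bits gathered since the last tick      */

/* At every clock interrupt: shift right, then add R as the leftmost bit. */
void age_pages(void) {
    for (int i = 0; i < NPAGES; i++) {
        counter[i] >>= 1;                /* shift right 1 bit             */
        if (rbit[i])
            counter[i] |= 0x80;          /* R goes into the leftmost bit  */
        rbit[i] = 0;                     /* clear R for the next interval */
    }
}
/* On a page fault, the page with the lowest counter value is evicted. */
```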

Working set:
o The set of pages that a process is currently using is called its working set. When a page fault happens, find a page which is not in the working set and evict it.

MMU = memory management unit



Design issues: local vs. global allocation.

Local:
o The OS assigns a fixed number of frames to every process.
o If the process causes a page fault, one of its own pages has to go.
o Disadvantage: a page of the faulting process may be evicted even if there are many free frames elsewhere.

Global:
o Frames do not belong to a process; when a page fault happens, any page can go.
o Therefore the number of frames assigned to a process varies over time.
o Advantage: makes good use of memory; one process can use most of the memory when it needs it.
o Disadvantage: it would be fairer to assign a share of the memory to every process.

Design issue of local control: when the combined working sets of all processes exceed the capacity of memory, page faults keep happening. All processes need more memory, but none of them is willing to give up frames. Then swapping comes back:
o The OS reduces the number of processes competing for memory by swapping some processes out to disk.
o If there is still a problem, more processes have to be swapped out.

Segmentation issues: segmentation is a solution to external fragmentation (free memory broken up into holes between allocated regions that no single process is using).

Segmentation: a reference to a memory location includes a value that identifies a segment and an offset within that segment. We said virtual memory divides into two categories, paging and segmentation. A machine with segmentation has many completely independent address spaces, called segments. Segmentation is a memory management scheme that supports the view of a program as a collection of functions and data structures. Each segment has a number and a length.

Each segment consists of a linear list of addresses from 0 to some maximum; each segment is a separate address space.

Paging: the virtual address space is divided up into units called pages; the corresponding units in physical memory are called page frames. Pages and page frames are always the same size, typically a power of 2. Every address generated by the CPU is divided into two parts:
o The page number p, used as an index into the page table.
o The page offset d.
Page table: contains the base address of each page in physical memory.
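A sketch of the split, assuming 4 KB pages (a 12-bit offset):

```c
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)     /* 4096 bytes per page (assumed) */

int main(void) {
    unsigned int vaddr  = 20500;                      /* example virtual address */
    unsigned int page   = vaddr >> PAGE_SHIFT;        /* = 5  (20500 / 4096)     */
    unsigned int offset = vaddr & (PAGE_SIZE - 1);    /* = 20 (20500 % 4096)     */
    printf("page %u, offset %u\n", page, offset);
    return 0;
}
```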

Problems with paging: the page table can be extremely large.
o This can be solved by multilevel page tables:

There is no need to keep all the page tables in memory all the time; those that are not needed should not be kept around.

The mapping must also be fast, since it is done on every reference.

The computer can store and retrieve data from secondary storage for use in main memory; the operating system retrieves data from secondary storage in same-size blocks called pages.

Extra key notes. Memory can be managed in two ways:
o Swapping: bring in each process in its entirety, run it, then put it back on disk.
o Virtual memory: lets a program run even when only part of it is in main memory.
  Idea: split the program into pieces and swap them in and out of memory as needed (done by hand with overlays in older systems); this is done by:
  Paging: dividing programs into equal-size blocks. The virtual address space is divided into small units called pages; the corresponding units of physical memory are called page frames. How paging works:
  o Every address generated by the CPU is divided into two parts: a page number p and a page offset d.
  o We use the page number as an index into the page table.
  o The page table contains the base address of each page in physical memory.
  Segmentation: a user thinks of a program as a collection of functions and data structures, and segmentation supports this view of memory. A logical address space is a collection of segments; each segment has a number and a length.
o Problems with virtual memory (paging):
  The page table can be huge; this can be fixed by multilevel page tables.
  The mapping must be very fast;

this can be fixed by using a TLB inside the CPU.

When we use virtual memory there are many page replacement algorithms. How can we ensure that a heavily used frame is not replaced? By choosing an appropriate page replacement algorithm:
o Optimal page replacement: the page that will not be used for the longest period of time is the one to be replaced (e.g., between a page next used in 8 million instructions and one next used in 6 million, remove the one labeled 8 million).
o Not recently used (NRU): when a page fault occurs, based on the R and M bits, NRU removes a page at random from the lowest-numbered nonempty class.
o First in, first out (FIFO): a list of all pages currently in memory; the page at the head of the list is the oldest and the page at the tail is the most recent arrival. When a page fault happens, the page at the head is removed and the new page is added at the tail.
o Second chance: a variation of FIFO that first inspects the R bit of the oldest page before evicting it.
o Least recently used (LRU): remove the page that has not been used for the longest time.
o Not frequently used (NFU): at every clock interrupt the OS scans all the pages in memory and adds each page's R bit to its counter; the page with the lowest counter is chosen for replacement.
o Aging algorithm: NFU with the counters shifted right and the R bit added at the leftmost position.
o Working set: the set of pages that a process is using now is called its working set; when a page fault happens, a page that is not in the working set is replaced. If the entire working set is in memory, the process will run without causing many faults until it moves into another execution phase.

Memory management: two ways of keeping track of memory usage.

Memory management with a bitmap:
o Memory is divided into units as small as a word or as large as several KB.
o Corresponding to each unit there is a bit in the bitmap: 0 if the unit is free, 1 if it is occupied.
o There is a problem with choosing the right size of unit: if it is small, the bitmap is large; if the unit is large, the bitmap will be small.
o The advantage is that it is the simplest way of keeping track of memory usage.
o The disadvantage is that to bring a k-unit process into memory, the memory manager must search the bitmap for a long enough run of free units, and searching is slow.

Memory management with linked lists:
o A linked list of allocated and free memory segments is maintained.
o Each entry in the list is either a hole or a process; it records the address at which it starts, the length, and a pointer to the next entry.
o To find empty memory for a process, the memory manager can use one of the following (see the sketch below):
  First fit: scan the list until the first hole that is big enough is found.
  Next fit: like first fit, but the search starts where the last search left off.
  Best fit: search the entire list to find the smallest hole that is adequate.
  Worst fit: always take the largest available hole.
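A sketch of first fit over such a list (the hole structure is an assumption for illustration):

```c
#include <stddef.h>

/* One free segment in the list: start address, length, next entry. */
struct hole {
    size_t start, length;
    struct hole *next;
};

/* First fit: scan the list and return the first hole big enough. */
struct hole *first_fit(struct hole *list, size_t need) {
    for (struct hole *h = list; h != NULL; h = h->next)
        if (h->length >= need)
            return h;       /* caller splits the hole if it is larger */
    return NULL;            /* no hole large enough */
}
```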

Memory fragmentation: internal fragmentation exists when some portion of a fixed-size partition is not used by the process.

TLB: because looking up the page table in memory is slow, the solution is a small hardware device for mapping virtual addresses to physical addresses without going through the page table.

Recall page fault: when a process tries to access a page that is not available in main memory, the CPU traps to the operating system; this is a page fault.

Chapter 4: Deadlock. When does deadlock occur, or what are the conditions for deadlock to occur?

Conditions for deadlock:
o Mutual exclusion: each resource is either currently assigned to exactly one process or is available.
  Example: accessing a printer.
  Prevention principle: avoid assigning a resource unless absolutely necessary.

o Hold and wait: processes currently holding resources that were granted earlier can request new resources. A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
  Prevention: request all the resources at the beginning, or, when issuing a request, temporarily release the previously held resources.
o No preemption: resources previously granted cannot be forcibly taken away from a process; they must be explicitly released by the process holding them. That means a resource can be released only voluntarily by the process holding it, after that process has completed its task. An example is the printer daemon.
o Circular wait: there must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.
  Prevention: hold only one resource at a time, or enforce a global ordering on the resources.

How can deadlocks be handled? One option is to just ignore the problem: the ostrich algorithm.

What else to do: detection and recovery: let deadlocks occur, then detect them and take action.
o With one resource of each type:
o Deadlock detection with a single instance of each resource type or class:

The resource allocation graph is checked to see whether any cycles exist in it.

Detection with multiple instances: a matrix-based algorithm is used to determine whether any unmarked processes exist (the unmarked ones are deadlocked).

We know how to detect a deadlock, but when should we check?
o Check every time a resource request is made. Advantage: detects deadlocks as early as possible. Disadvantage: potentially expensive in terms of CPU time.
o Check every k minutes.
o Check when CPU utilization drops.

Recovery from deadlock:
o Recovery through terminating (killing) processes:
  Abort all deadlocked processes, or
  Abort one process at a time until the deadlock cycle is eliminated.
  A process not in the cycle can also be chosen as the victim in order to release its resources.
o Preemption:
  Attacking the mutual exclusion condition by sharing all the resources.
  Attacking hold and wait by having each process request, and be allocated, all its resources before it begins execution.
  Alternatively, each process is to request resources only when it has none.
o Rollback:
  Return to a moment in the past.
  State snapshots (checkpoints) must be saved.
o Killing a process:
  Simply choose a victim and hope to be lucky.

With multiple resources of each type: deadlock detection via matrix.

Deadlock avoidance:


When getting a request for resources, the OS has the choice to grant or reject it. A state is said to be safe if:
o It is not deadlocked, and
o There is some scheduling order in which every process can run to completion, even if all of them request their maximum number of resources immediately.

A safe state guarantees that all processes will finish. An unsafe state carries no such guarantee: it is not necessarily deadlocked, but the OS can no longer guarantee completion.

Banker's algorithm: upon receiving a resource request:
o Determine whether granting it leads to a safe or an unsafe state.
o If safe, grant the resource; if not, reject the request.

The four conditions for having a deadlock:
o Mutual exclusion: at least one resource must be held in a non-sharable mode, that is, only one process at a time can use the resource.
o Hold and wait: a process must be holding at least one resource and waiting to acquire additional resources that are being held by others.
o No preemption: resources cannot be preempted; a resource can be released only voluntarily.
o Circular wait: there must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.

Methods for handling deadlock:
o Just ignore the problem. Advantage: deadlock is rare.

  Disadvantage: when it happens, you have no idea what has happened.
o Detection and recovery: do not prevent the deadlock, but let it happen and then take some action.
  Deadlock can be detected in two ways: with a single instance of each resource, via the resource allocation graph; with multiple instances, via the matrix algorithm.
  Recovery: abort all the deadlocked processes (highly expensive), or abort them one by one until the cycle is broken.
o Prevention: deadlock prevention provides a set of methods for ensuring that at least one of the necessary conditions for deadlock cannot hold. How:
  Attacking mutual exclusion: if no resource were ever assigned exclusively to a single process, we would never have deadlock.
  Attacking hold and wait: to ensure that the hold-and-wait condition never occurs, we must guarantee that whenever a process requests a resource, it does not hold any other resources.
  Attacking the no-preemption condition: this technique is often applied to resources whose state can easily be saved and restored later, such as CPU registers and memory space, but not to a printer in the middle of a job.
  Attacking the circular wait condition: impose a total ordering of all resource types and require that each process requests resources in increasing order of enumeration.

Deadlock avoidance:
o Deadlock avoidance requires that the OS be given in advance additional information concerning which resources a process will request and use during its lifetime.

Safe and unsafe states: a state is said to be safe if it is not deadlocked and there is some scheduling order in which every process can run to completion, even if all of them suddenly request their maximum number of resources immediately.

The banker's algorithm:
o For a single resource: the banker considers each request as it occurs and sees whether granting it leads to a safe state. If it does, the request is granted; otherwise it is postponed until later. To see whether a state is safe, the banker checks whether he has enough resources to satisfy some customer.
o For multiple resources: the same idea, generalized. Note the difference from detection: the deadlock detection algorithm is used to detect whether a deadlock has already occurred, whereas the banker's algorithm is used to determine whether a deadlock would occur if a request were granted.

Two-phase locking: in the first phase, the process tries to lock all the records it needs, one at a time; no real work is done in this phase. If it succeeds, it begins the second phase, performing its updates and releasing the locks.

Starvation: some policy is needed to decide who gets which resource and when; a policy such as always printing the smallest file first can starve large jobs.
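A sketch of the safety check at the heart of the banker's algorithm described above; the process and resource counts, and the need/alloc matrices, are assumptions for illustration:

```c
#include <stdbool.h>

#define NPROC 5   /* assumed number of processes      */
#define NRES  3   /* assumed number of resource types */

int available[NRES];       /* resources currently free         */
int need[NPROC][NRES];     /* max claim minus what is held     */
int alloc[NPROC][NRES];    /* resources currently held         */

/* Is there some order in which every process can run to completion? */
bool is_safe(void) {
    int work[NRES];
    bool finished[NPROC] = { false };
    for (int r = 0; r < NRES; r++)
        work[r] = available[r];

    for (;;) {
        bool progressed = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                 /* p can finish: reclaim its resources */
                for (int r = 0; r < NRES; r++)
                    work[r] += alloc[p][r];
                finished[p] = true;
                progressed = true;
            }
        }
        if (!progressed) break;            /* nobody can finish with what is left */
    }
    for (int p = 0; p < NPROC; p++)
        if (!finished[p]) return false;    /* some process can never finish: unsafe */
    return true;
}
```

A request is granted only if provisionally subtracting it from available (and adding it to alloc) still leaves is_safe() returning true.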

When to look for deadlock: check every time a resource request is made.

Advantage: you can find a deadlock really fast. Disadvantage: expensive in terms of CPU time.

Chapter 4: File management. Inode: a file system structure that holds information about a file, such as its attributes and the addresses of its disk blocks.

How can files be implemented?
o Linked list allocation
o Linked lists using a table in memory
o Inodes

How can free blocks be kept track of?
o The free list technique
o A bitmap

A method for storing files, linked list allocation: keep each file as a linked list of disk blocks; the first word of each block is used as a pointer to the next one.

Chapter 9: Security. The RSA public key cryptosystem can be used to:
o Encrypt messages so that eavesdroppers who overhear the communication cannot decode them.
o Generate digital signatures that can be checked by anyone but not forged.
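A toy worked example (tiny numbers chosen for illustration; real keys use primes hundreds of digits long): pick primes p = 3 and q = 11, so n = 33 and (p-1)(q-1) = 20. Choose public exponent e = 3 and private exponent d = 7, since 3 * 7 = 21 ≡ 1 (mod 20). To encrypt the message m = 4: c = 4^3 mod 33 = 64 mod 33 = 31. To decrypt: 31^7 mod 33 = 4, recovering the original message.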

