
Chapter 4

Memory Management
Topics to be covered:
4.1 Memory address, Swapping and Managing free
memory space
4.2 Resident Monitor
4.3 Multiprogramming with fixed partition
4.4 Multiprogramming with variable partition
4.5 Multiple Base Registers
4.6 Virtual Memory Management
4.6.1 Paging
4.6.2 Segmentation
4.6.3 Paged Segmentation
4.7 Demand Paging
4.8 Performance
4.9 Page Replacement Algorithms
4.10 Allocation of Frames
4.11 Thrashing
Anil Verma @ IOE, Pulchowk
Memory Management Basics

• Don’t have infinite RAM


• Do have a memory hierarchy:
• Cache (fast)
• Main memory (medium)
• Disk (slow)
• The memory manager has the job of using this
hierarchy to create an abstraction (illusion)
of easily accessible memory
• Two important memory management
functions:
• Sharing
• Protection
Memory Hierarchy
(Figure: the memory hierarchy. Moving up from Secondary Memory through Main Memory to Cache, cost per bit and access speed increase, while storage capacity decreases.)
Monoprogramming Model

Only one program at a time resides in main memory;
it can use the whole of the available memory.
One program at a time

• Can only have one program in memory at a time; three simple
layouts are used: (a) the OS at the bottom of RAM, (b) the OS
in ROM at the top of memory (some embedded systems), and
(c) device drivers in ROM (the BIOS on early MS-DOS PCs)
with the OS below in RAM.
• A bug in the user program can trash the OS in layouts (a) and (c).
• Since only one program or process resides in memory at a
time, sharing and protection between processes are not an issue.
• However, protecting the OS from the user code is a must;
otherwise the system will crash. This protection is done by a
special hardware mechanism, such as a dedicated register
called the Fence Register. HOW?
• The Fence Register is set to the highest address occupied
by the OS code.
• A memory address generated by a user program to access
a memory location is first compared with the fence
register's contents.
• If the generated address is below the fence, the access is
trapped and permission is denied.
• Since modifying the fence register is a privileged
operation, only the OS is allowed to change it.
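The fence check above can be sketched in Python (the fence value and function names are made up for illustration; real hardware performs this comparison on every memory access):

```python
FENCE = 0x4FFF  # hypothetical highest address occupied by the OS code

def check_access(address, privileged=False):
    """Model the fence-register check made on every user memory access."""
    if not privileged and address <= FENCE:
        # user-mode access at or below the fence: trap to the OS
        raise MemoryError("trap: user access below the fence is denied")
    return address  # access permitted

check_access(0x8000)   # a user access above the fence is allowed
```

Only a privileged caller (the OS) may touch addresses at or below the fence, which models why updating the fence register itself must be a privileged operation.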
Address Binding
(Relocation)
• The program and its data are stored in RAM.
• A program has to go through three phases:
compilation, loading and execution.
• The problem that arises is where the OS should
store a program's data and results in memory.
• A user specifies in an instruction where to store a
result,
i.e. x = (a+b) × (a-c)
• Such addresses given by the user, like x, are called
symbolic or logical addresses.
• These addresses need to be mapped to real physical
addresses in memory.
• This mechanism is called address binding.
Binding of Instructions and Data to Memory

• Compile time
– known memory location
– absolute code can be generated
– must recompile code if starting location
changes.
• Load time
– generate relocatable code if memory location
is not known at compile time.
• Execution time
– process can be moved during its execution
from one memory segment to another.
– need hardware support for address mapping.
Logical versus Physical Address
Space
• The concept of a logical address space that is bound
to a separate physical address space is central to
memory management.
– Logical address: generated by the CPU; also
referred to as a virtual address.
– Physical address: the address actually seen by the
memory unit, produced by the memory
management unit (MMU).
• Logical and physical addresses are the same
in compile-time and load-time address-
binding schemes.
• Logical (virtual) and physical addresses differ
in the execution-time address-binding scheme.
Program Relocation
• Relocation refers to the ability to load and execute a
given program at an arbitrary place in memory.
• Relocation is the mechanism that converts a logical
address into a physical address.
• To do this, there is a special register in the CPU called
the relocation register.
• Every address used in the program is relocated as:
effective physical address = logical address + contents of the
relocation register
• The MMU is special hardware that performs this address
binding using the relocation scheme.
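The relocation formula can be modelled in a few lines (the register values 14000 and 3000 are made-up examples, not from the slides; the limit check anticipates the base/limit protection described later):

```python
RELOCATION = 14000   # hypothetical contents of the relocation register
LIMIT = 3000         # hypothetical limit register, for protection

def translate(logical_address):
    """effective physical address = logical address + relocation register."""
    if logical_address >= LIMIT:
        raise MemoryError("trap: logical address beyond the limit register")
    return logical_address + RELOCATION

print(translate(346))   # -> 14346
```

Moving the whole program elsewhere in memory only requires reloading the relocation register; the program's logical addresses never change.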
Memory Management Unit(MMU)

• The MMU generates a physical address from the
virtual address provided by the program.

The MMU maps virtual addresses to physical addresses and puts them on the memory bus.
• Two basic types of relocation:
– Static relocation: the mapping is formed
during loading of the program into memory,
by a loader.
– Dynamic relocation: the mapping from the
virtual address space to physical addresses is
performed at execution time.
Protection
• Protection means securing memory from
unauthorized use (a hardware protection mechanism).
• The OS can protect memory with the help of
base and limit registers.
• The base register holds the starting address of
the process; the limit register specifies the
boundary (extent) of that job.
• That is why the limit register is also called a
fencing register.
Memory Allocation
Techniques
• Two types:
– Contiguous Storage Allocation
• Fixed Partition Allocation
• Variable Partition Allocation
– Non-Contiguous
• Paging
• Segmentation

Contiguous Storage
Allocation
• A memory-resident program occupies
a single contiguous block of physical
memory. The memory is partitioned
into blocks of different sizes to
accommodate the programs. The
partitioning may be:
– Fixed partition allocation
– Variable partition allocation
Fixed Partition
allocation/Multiprogramming with fixed
partition
• In multiprogramming environment, several
programs reside in primary memory at a time
and the CPU passes its control rapidly between
these programs.
• One way to support multiprogramming is to
divide the main memory into several partitions
each of which is allocated to a single process.
• Depending on how and when the partitions are
created, there may be two types of partitioning:
– Static partitioning
– Dynamic partitioning

• Static partitioning: the division of
memory into a number of partitions,
and their sizes, is decided in the
beginning, prior to the execution of
user programs, and remains fixed
thereafter.
• Dynamic partitioning: the size and
the number of partitions are decided
during execution time by the OS.
• Operation Principle:
– We divide the memory into several fixed
size partitions where each partition will
accommodate only one program for
execution. The number of programs (i.e.
degree of multiprogramming) residing in
the memory will be bound by the
number of partitions. When a program
terminates, that partition is freed for
another program waiting in a queue.
Multiprogramming with Fixed
Partitions

• Fixed memory partitions


– separate input queues for each partition
– single input queue

Modeling Multiprogramming
(Probabilistic viewpoint of CPU usage)

• Let p = the fraction of time a process spends waiting
for I/O to complete, and
n = the number of processes in memory at once.
• Assuming independence, the probability that all n
processes are waiting for I/O at once (CPU idle) = p^n.
• So, CPU utilization = 1 - p^n.
• The following diagram shows CPU utilization as a
function of n (the degree of multiprogramming).
Modeling Multiprogramming

(Figure: CPU utilization as a function of the number of processes in memory, i.e. the degree of multiprogramming.)
Modeling Multiprogramming

Example
Let total memory = 1M = 1000K,
memory occupied by the OS = 200K, and
memory taken by one user program = 200K.
NOW
Number of processes n = (1000-200)/200 = 4
[n = total user space / size of one user program]
CPU utilization = 1 - (0.8)^4 ≈ 60%
Modeling Multiprogramming
• Add another 1M of memory; then
n = (2000-200)/200 = 9
CPU utilization = 1 - (0.8)^9 ≈ 87%
Improvement = (87-60)/60 = 45%
• Add yet another 1M; then
n = (3000-200)/200 = 14
CPU utilization = 1 - (0.8)^14 ≈ 96%
Improvement = (96-87)/87 = 10%
Conclusion: the last 1M adds little utilization, so buying it is hard to justify.
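The model is easy to check numerically; this sketch reproduces the slides' three configurations (p = 0.8, 200K OS, 200K per user process):

```python
def cpu_utilization(p, n):
    """CPU utilization = 1 - p**n for n processes, each waiting fraction p."""
    return 1 - p ** n

for total_kb in (1000, 2000, 3000):
    n = (total_kb - 200) // 200          # number of 200K user processes
    u = cpu_utilization(0.8, n)
    print(f"{total_kb}K memory: n = {n}, utilization = {u:.0%}")
```

The diminishing returns are visible directly: each extra megabyte raises n by 5 but multiplies the idle probability by only 0.8^5.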
Advantages of fixed
partition
• Implementation of this allocation
scheme is simple.
• The processing overhead is also low.
• It supports multiprogramming.
• It requires no special costly
hardware.
• It makes efficient utilization of the
processor and I/O devices.
Disadvantages of fixed
partitioning
• No single program (process) may exceed the
size of the largest partition in a given system.
• It does not support programs with dynamic
data structures such as stacks, queues, heaps, etc.
• It limits the degree of multiprogramming, which
in turn may reduce the effectiveness of
short-term scheduling.
• The main problem is the wastage of memory by
programs that are smaller than their partitions.
This is known as internal fragmentation.
How to run more programs than fit in main
memory at once

• We can't keep all processes in main memory:
• too many of them (hundreds)
• too big (e.g. a 200 MB program)
• Two approaches:
• Swapping: bring a program in and run it for a while
• Virtual memory: allow a program to run even if only
part of it is in main memory
Variable-partition allocation/multiprogramming
with Dynamic partition

• Partitions are created dynamically to
meet the requirements of each
requesting process.
• When a process terminates or is
swapped out, the memory manager
can return the vacated space to the
pool of free memory from which
partition allocations are made.
Multiprogramming with Variable
Partitions
• It is a solution to the wastage in fixed
partitioning.
• It allows jobs to occupy as much
space as they need.

(Figure: snapshots of memory over time. Above the OS sit Job-A, Job-B and Job-C; Job-A is swapped out, and Job-D is later swapped into the vacated space.)
Multiprogramming with
Variable Partitions
• A process can grow if there is an adjacent hole.

• Otherwise the growing process is moved to a hole large
enough for it, or one or more processes are swapped out to disk.

• It also has some degree of waste: when a process finishes
and leaves a hole, the hole may not be large enough to
place a new job.

• Thus, in variable-partition multiprogramming, waste does
occur.

• The following two activities reduce this wastage of
memory: (a) coalescing, (b) compaction.
Example: the OS occupies 0K-50K, and the 650K of user space initially holds P1 (150K), P2 (300K), P3 (50K) and a free block of 100K. Waiting processes include P4 = 100K and P5 = 300K.

P1 terminates, leaving a free partition of 150K above the OS; P4 (100K) is then swapped in:
OS | P1 (150K) | P2 (300K) | P3 (50K) | free (100K)
OS | free (150K) | P2 (300K) | P3 (50K) | free (100K)
OS | P4 (100K) | free (50K) | P2 (300K) | P3 (50K) | free (100K)

 We have 150K free and P4 needs only 100K, so 50K is wasted.
 This is known as external fragmentation, i.e. a free partition is too small to accommodate any process.
 When the total free memory is large enough for a requesting process, but the request cannot be satisfied because the memory is not contiguous, the storage is fragmented into a number of small holes (free spaces). This results in external fragmentation.
Multiprogramming with
Variable Partitions
(a) Coalescing
The process of merging two
adjacent holes to form a single
larger hole is called coalescing.

(Figure: a 20K hole and an adjacent 10K hole above Process-A are merged into a single 20+10 = 30K hole; the 10K free block below Process-A is unchanged.)
Multiprogramming with
Variable Partitions
(b) Compaction
• Even when holes are coalesced, no
individual hole may be large enough to
hold the job, although the sum of holes is
larger than the storage required for a
process.
• It is possible to combine all the holes into
one big one by moving all the processes
downward as far as possible; this
technique is called memory compaction.
Compaction

(Figure: before compaction, memory holds the OS, a 20K hole, Process-A, a 10K hole, Process-B and a 10K free block; after compaction, Process-A and Process-B are moved down next to the OS, leaving a single free block of 20+10+10 = 40K at the top.)
Fixed vs. Variable
Partitioning
Fixed Partitioning
• The OS decides the partition sizes only once, at system boot time.
• The degree of multiprogramming is fixed.
• It leads to internal fragmentation.
• IBM 360-DOS and OS/MFT used this approach.

Variable Partitioning
• The OS has to decide the partition size every time a new process is chosen by the long-term scheduler.
• The degree of multiprogramming varies depending on program sizes.
• It leads to external fragmentation.
• IBM OS/MVT used this approach.
Drawbacks of Compaction
• Relocation information must be maintained.
• The system must stop everything during
compaction.
• Memory compaction requires lots of CPU
time.
For example:
on a 256 MB machine that can copy 4
bytes in 40 ns, it takes about 2.7 s to
compact all of memory.
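The 2.7 s figure follows directly from the stated copy rate; a quick arithmetic check:

```python
# 256 MB machine, copying 4 bytes every 40 ns
bytes_total = 256 * 2**20
copies = bytes_total // 4
seconds = copies * 40e-9
print(f"{seconds:.2f} s to compact all of memory")   # about 2.68 s
```

During those seconds the whole system is stopped, which is why compaction is done rarely, if at all.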
Swapping
• If there is not enough main memory
to hold all the currently active
processes, the excess processes
must be kept on disk and brought
in to run dynamically.
• Swapping consists of moving
processes between main memory
and disk.
• Relocation may be required during
swapping.
Swapping, a picture

Holes can be compacted by copying programs into them, but this takes too much time.
Memory management With
Bitmap

Bitmaps-the picture

(a)Picture of memory
(b)Each bit in bitmap corresponds to a unit of storage (e.g. bytes) in memory
(c) Linked list P: process H: hole

Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639
Memory management With Bitmap

• With a bitmap, memory is divided up into
allocation units (from a few words to
several KB each).
• Corresponding to each allocation unit is a
bit in the bitmap (0 indicates free and 1
indicates occupied).
• The smaller the allocation unit, the larger
the bitmap.
• Disadvantage: searching the bitmap for a
run of k consecutive free units (to hold a
process of a given length) is a slow operation.
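A minimal sketch of that slow search (the function name and the example bitmap are illustrative):

```python
def find_run(bitmap, k):
    """Return the index of the first run of k free (0) allocation units,
    or -1 if none exists. The linear scan over every bit is exactly why
    bitmap search is a slow operation."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0          # an occupied unit breaks the run
    return -1

bitmap = [1, 1, 0, 0, 1, 0, 0, 0, 1]
print(find_run(bitmap, 3))   # -> 5 (units 5, 6 and 7 are free)
```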
Memory management with
Linked Lists
• Each entry in the list specifies a
hole (H) or a process (P).
• An entry records the address at which
the segment starts, its length, and a
pointer to the next entry.
Memory management with
Linked Lists

Four neighbor combinations for the terminating process X.
Memory management with
Linked Lists
In the above diagram:
(a)Replace process X by hole H.
(b) and (c) Two entries are coalesced.
(d) Three entries are merged.

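Cases (a)-(d) can be sketched with a plain Python list of [kind, start, length] entries standing in for the linked list (names and layout are illustrative):

```python
def free_block(blocks, index):
    """Mark entry `index` as a hole and coalesce it with free neighbors.
    Each entry is [kind, start, length] with kind 'P' or 'H'."""
    blocks[index][0] = 'H'                            # (a) X becomes a hole
    # (b)/(c)/(d): merge with a free successor, then a free predecessor
    if index + 1 < len(blocks) and blocks[index + 1][0] == 'H':
        blocks[index][2] += blocks[index + 1][2]
        del blocks[index + 1]
    if index > 0 and blocks[index - 1][0] == 'H':
        blocks[index - 1][2] += blocks[index][2]
        del blocks[index]
    return blocks

mem = [['H', 0, 5], ['P', 5, 3], ['H', 8, 6], ['P', 14, 4]]
print(free_block(mem, 1))   # case (d): three entries merge into one hole
```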
Storage placement
Strategies
1. First fit:
The memory manager allocates the first hole
that is big enough. It stops the searching as
soon as it finds a free hole that is large enough.
Advantages: it is a fast algorithm because it
searches as little as possible.
Disadvantages: not good in terms of storage
utilization.

2. Next fit:
It works the same way as first fit, except that it
keeps track of where it left off whenever it finds a
suitable hole, and resumes the next search from there.
Storage placement
Strategies
3. Best fit:
– Allocate the smallest hole that is big enough.
– Best fit searches the entire list and takes the
smallest hole that is big enough to hold the new
process.
– Best fit tries to find a hole that is close to the actual
size needed.
Advantages: better storage utilization than
first fit.
Disadvantages: slower than first fit because
it requires searching the whole list each time.
Storage placement
Strategies
4. Worst fit:
– Allocate the largest hole.
– It searches the entire list and takes the
largest hole; rather than creating a
tiny leftover hole, it produces the largest
leftover hole, which may be more useful.
Advantages: sometimes it gives better
storage utilization than first fit and best
fit.
Disadvantages: not good for either
performance or utilization.
Problem:
In this example (not shown), best fit turns out to be the best,
since it allocates memory for all the processes.
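The three list-scanning strategies can be compared in a few lines (the hole list and request size below are illustrative, not from the slides):

```python
def place(holes, size, strategy):
    """Pick a hole index for a request of `size` using the named strategy.
    `holes` is a list of free-hole sizes; returns the index or None."""
    candidates = [(s, i) for i, s in enumerate(holes) if s >= size]
    if not candidates:
        return None
    if strategy == 'first':
        return candidates[0][1]          # earliest hole that fits
    if strategy == 'best':
        return min(candidates)[1]        # smallest adequate hole
    if strategy == 'worst':
        return max(candidates)[1]        # largest hole

holes = [100, 500, 200, 300, 600]
# a 212-unit request: first fit -> hole 1, best fit -> hole 3, worst fit -> hole 4
print(place(holes, 212, 'first'), place(holes, 212, 'best'), place(holes, 212, 'worst'))
```

Next fit would be the same scan as first fit but starting from a remembered position instead of index 0.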
Paging
• Partition memory into small equal
fixed-size chunks and divide each
process into the same size chunks
• The chunks of a process are called
pages
• The chunks of memory are called
frames
Paging
• Operating system maintains a page
table for each process
– Contains the frame location for each
page in the process
– A memory address consists of a page
number and an offset within the page
Processes and Frames
1. The system has a number of frames allocated.
2. Process A, stored on disk, consists of four pages (A.0-A.3). When it comes time to load this process, the operating system finds four free frames and loads the four pages of process A into them.
3. Process B, consisting of three pages, and process C, consisting of four pages, are subsequently loaded.
4. Then process B is suspended and is swapped out of main memory.
5. Later, all of the processes in main memory are blocked, and the operating system needs to bring in a new process, process D, which consists of five pages (D.0-D.4). The operating system loads the pages into the available frames and updates the page table.
Page Table
Pages and Page Frames

• Virtual addresses are divided up into units
called pages, and the corresponding units in
physical memory are called page frames.
• Page sizes typically lie in the 512 bytes to 64 KB range.
• The pages and page frames are always
the same size.
• Transfer between RAM and disk is in
whole pages.
Paging
Mapping of pages to page frames:
16-bit addresses, 4-KB pages,
32 KB of physical memory,
16 virtual pages and 8 page frames.
Example:
MOV REG, 0
-> Virtual address 0 is sent to the MMU. The
MMU sees that this virtual address falls
in page 0 (0 to 4095), which is mapped
to page frame 2 (8192 to 12287).
-> Thus it transforms the address to
8192 and outputs 8192 onto the bus.
Similarly,
MOV REG, 8192 is effectively
transformed into MOV REG, 24576.
Page Fault Processing
• A present/absent bit tells whether the page is in
memory.
• What happens if the address is not in memory?
• Trap to the OS (a page fault)
• The OS picks a page to write to disk
• It brings the page containing the needed address into
memory
• It restarts the instruction
Page Table

• Virtual address = {virtual page number, offset}
• The virtual page number is used to index into the
page table to find the page frame number.
• If the present/absent bit is set to 1, attach the page
frame number to the front of the offset, creating the
physical address, which is sent on the memory bus.
Mapping/Paging Mechanism
• Let us see how the incoming virtual address
8196 (0010000000000100 in binary) is
mapped to physical address 24580.
 The incoming 16-bit virtual address is split into a
4-bit page number and a 12-bit offset.
 With 4 bits we can have 16 pages, and with 12
bits for the offset we can address 2^12 = 4096
bytes within a page.
 The page number is used as an index into the
page table, which has one entry per virtual page.
 Each page table entry also contains the present/absent bit.
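The 8196 -> 24580 walk-through can be replayed in code. The sketch below keeps only the two mappings the slides' examples use (page 0 -> frame 2, page 2 -> frame 6); every other page is treated as absent:

```python
PAGE_TABLE = {0: 2, 2: 6}   # virtual page -> page frame (present pages only)
OFFSET_BITS = 12            # 4-KB pages

def mmu(virtual_address):
    """Split the 16-bit address into page number and offset, then map."""
    page = virtual_address >> OFFSET_BITS
    offset = virtual_address & ((1 << OFFSET_BITS) - 1)
    if page not in PAGE_TABLE:              # present/absent bit is 0
        raise RuntimeError("page fault")
    return (PAGE_TABLE[page] << OFFSET_BITS) | offset

print(mmu(8196))   # page 2, offset 4 -> frame 6 -> 24580
print(mmu(0))      # page 0, offset 0 -> frame 2 -> 8192
```

Note that the offset passes through untouched; only the page-number bits are replaced.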
Mapping Mechanism
Structure of Page Table Entry

 Frame number: the actual page frame number.
 Present (1) / Absent (0) bit: whether the virtual page is
currently mapped.
 Protection bits: the kinds of access permitted:
read/write/execute.
 Modified bit: records whether the page has been changed
since it was loaded.
 Referenced bit: used by the page replacement strategy.
 Caching disabled: used for pages that map onto device
registers rather than memory.
Problems for paging

The virtual-to-physical mapping is done on every memory
reference, so:
• The mapping must be fast.
• The page table can be extremely large: if the
virtual address space is large, the page table will
be large.
For example,
modern computers use virtual addresses of at least
32 bits. With, say, a 4-KB page size, a 32-bit
address space has 2^20, i.e. about 1 million, pages.
Multi-level page tables

• We want to avoid keeping the entire page table in
memory, because it is too big.
• Use a hierarchy of page tables: the hierarchy is a
page table of page tables.
Multilevel Page Tables

(a) A 32-bit address with two page-table fields.
(b) Two-level page tables.
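For the 32-bit layout in (a), with two 10-bit page-table fields plus a 12-bit offset, index extraction is just shifting and masking (a sketch; the example address is made up):

```python
def split(virtual_address):
    """Extract (PT1, PT2, offset) from a 32-bit virtual address."""
    pt1 = virtual_address >> 22            # top 10 bits: top-level index
    pt2 = (virtual_address >> 12) & 0x3FF  # next 10 bits: second-level index
    offset = virtual_address & 0xFFF       # low 12 bits: byte within the page
    return pt1, pt2, offset

print(split(0x00403004))   # -> (1, 3, 4)
```

Only the second-level tables that are actually used need to exist in memory, which is the whole point of the hierarchy.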
Virtual Memory
• Demand paging
– Do not require all pages of a process in
memory
– Bring in pages as required
• Page fault
– Required page is not in memory
– Operating System must swap in required
page
– May need to swap out a page to make
space
Thrashing
• Too many processes in too little memory
• Operating System spends all its time
swapping
• Little or no real work is done
• Disk light is on all the time

• Solutions
– Good page replacement algorithms
– Reduce the number of processes running
– Add more memory
Translation Lookaside Buffer
• Every virtual memory reference
causes two physical memory accesses:
– fetch the page table entry
– fetch the data
• Use a special cache for page table entries:
the TLB (Translation Lookaside Buffer)
TLB and Cache Operation
Segmentation

• A program can be subdivided into


segments
 may vary in length
 there is a maximum length
• Addressing consists of two parts:
 segment number
 an offset
• Similar to dynamic partitioning
• Eliminates internal fragmentation
The difference from dynamic partitioning is that with
segmentation a program may occupy more than one
partition, and these partitions need not be contiguous.

Segmentation eliminates internal fragmentation but
suffers from external fragmentation (as does dynamic
partitioning).

However, because a process is broken up into a
number of smaller pieces, the external fragmentation
should be less.

A consequence of unequal-size segments is that there
is no simple relationship between logical addresses
and physical addresses.
• Analogous to paging, a simple segmentation scheme
makes use of a segment table for each process
and a list of free blocks of main memory. Each
segment table entry gives:
• the starting address in main memory of the
corresponding segment;
• the length of the segment, to ensure that invalid
addresses are not used.

• When a process enters the Running state, the
address of its segment table is loaded into a special
register used by the memory management hardware.
Segmentation
Paging vs Segmentation

Comparison of paging and segmentation.


Page Replacement
Algorithms
• Optimal
• FIFO (First In First Out)
• LFU (Page Based): (Least Frequently
Used)
• LFU (Frame Based)
• LRU(Least Recently Used)
• MFU(Most Frequently Used)
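FIFO and LRU, two of the algorithms listed above, can be sketched together (the reference string is illustrative; an OrderedDict doubles as both the FIFO queue and the LRU recency order):

```python
from collections import OrderedDict

def count_faults(refs, frames, policy='FIFO'):
    """Count page faults for a reference string with `frames` frames."""
    memory = OrderedDict()          # insertion order is the eviction order
    faults = 0
    for page in refs:
        if page in memory:
            if policy == 'LRU':
                memory.move_to_end(page)   # a hit refreshes recency
            continue
        faults += 1
        if len(memory) == frames:
            memory.popitem(last=False)     # evict oldest / least recent
        memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults(refs, 3, 'FIFO'), count_faults(refs, 3, 'LRU'))
```

On this string with 3 frames, LRU faults less than FIFO because hits keep hot pages from being evicted; the optimal algorithm would do better still but requires knowing future references.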
