
Virtual Caches

[Figure: CPU → VA-indexed I$/D$ → VA-indexed L2 → TB → Main Memory; translation happens only below the caches]

• Memory hierarchy so far: virtual caches
  • Indexed and tagged by VAs
  • Translate to PAs only to access memory
  + Fast: avoids translation latency in common case
• What to do on process switches?
  • Flush caches? Slow
  • Add process IDs to cache tags
• Does inter-process communication work?
  • Aliasing: multiple VAs map to same PA
  • How are multiple cache copies kept in sync?
  • Also a problem for I/O (later in course)
  • Disallow caching of shared memory? Slow

Physical Caches

[Figure: CPU → TBs → PA-indexed I$/D$ → L2 → Main Memory; translation happens before the caches]

• Alternatively: physical caches
  • Indexed and tagged by PAs
  • Translate to PA at the outset
  + No need to flush caches on process switches
• Processes do not share PAs
  + Cached inter-process communication works
  • Single copy indexed by PA
  – Slow: adds 1 cycle to t_hit

ECE 152 55–56 © 2008 Daniel J. Sorin from Roth
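The aliasing problem above can be made concrete with a tiny model. This is a minimal sketch with assumed parameters (not from the slides): 4 KB pages and a direct-mapped 8 KB virtually-indexed cache with 32 B blocks, so the set index uses VA bits [12:5] — and bit 12 can differ between two aliases of the same physical block.

```python
# Assumed parameters for illustration: 4 KB pages, direct-mapped
# 8 KB virtually-indexed cache with 32 B blocks (256 sets).
BLOCK = 32
SETS = 8 * 1024 // BLOCK  # 256 sets -> index uses VA bits [12:5]

def vcache_index(va):
    """Set index of a virtually-indexed cache (no translation needed)."""
    return (va // BLOCK) % SETS

# Aliasing: suppose VAs 0x1040 and 0x2040 both translate to the same PA
# (hypothetical mapping).  They differ in bit 12, which is an index bit,
# so the same physical block ends up cached in two different sets --
# and a write through one alias leaves the other copy stale.
assert vcache_index(0x1040) != vcache_index(0x2040)
```

A physical cache avoids this because its index comes from the PA, which is unique per block; the virtual-physical compromise on the following slides avoids it by keeping the index bits inside the page offset.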

Virtual Physical Caches

[Figure: CPU accessing I$/D$ with a TLB alongside each; caches indexed by VA, tagged by PA; L2 and Main Memory below]

• Compromise: virtual-physical caches
  • Indexed by VAs
  • Tagged by PAs
  • Cache access and address translation in parallel
  + No context-switching/aliasing problems
  + Fast: no additional t_hit cycles
• A TB that acts in parallel with a cache is a TLB
  • Translation Lookaside Buffer
• Common organization in processors today

Cache/TLB Access

• Two ways to look at VA
  • Cache: TAG+IDX+OFS
  • TLB: VPN+POFS
• Can have parallel cache & TLB
  • If address translation doesn’t change IDX
  • → VPN/IDX don’t overlap

[Figure: 1024-set cache (sets 0–1023) indexed by IDX[11:2] with tag compares, in parallel with a TLB lookup on VPN[31:16]; cache fields [31:12] / [11:2] / [1:0], TLB fields VPN[31:16] / POFS[15:0]; outputs are TLB hit/miss, cache hit/miss, and the addressed data]

ECE 152 57–58
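The two views of the VA in the figure can be sketched as bit-field extractions. Field widths are taken from the slide: cache fields [31:12] / [11:2] / [1:0], TLB fields VPN[31:16] / POFS[15:0] (i.e. 64 KB pages); the example address is made up.

```python
def cache_fields(va):
    """Cache view of a 32-bit VA: TAG [31:12], IDX [11:2], OFS [1:0]."""
    ofs = va & 0x3            # byte offset within a 4 B block
    idx = (va >> 2) & 0x3FF   # selects one of 1024 sets
    tag = va >> 12
    return tag, idx, ofs

def tlb_fields(va):
    """TLB view of the same VA: VPN [31:16], POFS [15:0] (64 KB pages)."""
    return va >> 16, va & 0xFFFF

# IDX [11:2] lies entirely inside POFS [15:0], so the cache set can be
# selected from untranslated bits while the TLB translates the VPN --
# the two lookups proceed in parallel.
tag, idx, ofs = cache_fields(0x12345678)
vpn, pofs = tlb_fields(0x12345678)
assert idx == (pofs >> 2) & 0x3FF   # index recoverable from page offset alone
```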
Cache Size And Page Size

[Figure: address split TAG[31:12] IDX[11:2] OFS[1:0] versus VPN[31:16] POFS[15:0], showing IDX contained entirely within the page offset]

• Relationship between page size and L1 I$(D$) size
  • Forced by non-overlap between VPN and IDX portions of VA
  • Which is required for parallel TLB access
  • I$(D$) size / associativity ≤ page size
• Big caches must be set-associative
  • Big cache → more index bits (fewer tag bits)
  • More set-associative → fewer index bits (more tag bits)
• Systems are moving towards bigger (64KB) pages
  • To amortize disk latency
  • To accommodate bigger caches

TLB Organization

• Like caches: TLBs also have ABCs
• What does it mean for a TLB to have a block size of two?
  • Two consecutive VPs share a single tag
• Rule of thumb: TLB should “cover” L2 contents
  • In other words: #PTEs * page size ≥ L2 size
  • Why? Think about this …

ECE 152 59–60
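Both sizing rules above are easy to check numerically. A sketch with assumed example sizes (the specific cache, TLB, and L2 sizes are illustrations, not from the slides):

```python
def index_fits_in_page(cache_bytes, assoc, page_bytes):
    """Slide rule: size / associativity <= page size keeps the index
    (plus block offset) inside the untranslated page-offset bits."""
    return cache_bytes // assoc <= page_bytes

assert index_fits_in_page(32 * 1024, 8, 4 * 1024)      # 32 KB 8-way, 4 KB pages: OK
assert not index_fits_in_page(64 * 1024, 2, 4 * 1024)  # 64 KB 2-way: index spills past bit 11

def tlb_covers_l2(entries, page_bytes, l2_bytes):
    """Rule of thumb: #PTEs * page size >= L2 size."""
    return entries * page_bytes >= l2_bytes

assert tlb_covers_l2(512, 4 * 1024, 2 * 1024 * 1024)   # 512 x 4 KB = 2 MB of coverage
assert not tlb_covers_l2(64, 4 * 1024, 1024 * 1024)    # 64 x 4 KB = 256 KB < 1 MB
```

One common rationale for the coverage rule, offered here as a hint rather than the slide’s answer: if every block resident in L2 has its translation in the TLB, an L2 hit is never spoiled by a TLB miss.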

Flavors of Virtual Memory

• Virtual memory almost ubiquitous today
  • Certainly in general-purpose (in a computer) processors
  • But even some embedded (in non-computer) processors support it
• Several forms of virtual memory
  • Paging (aka flat memory): equal-sized translation blocks
    • Most systems do this
  • Segmentation: variable-sized (overlapping?) translation blocks
    • IA32 uses this
    • Makes life very difficult
  • Paged segments: don’t ask

Summary

• DRAM
  • Two-level addressing
  • Refresh, access time, cycle time
• Building a memory system
  • DRAM/bus bandwidth matching
  • Memory organization
• Virtual memory
  • Page tables and address translation
  • Page faults and handling
  • Virtual, physical, and virtual-physical caches and TLBs
• Next part of course: I/O

ECE 152 61–62
