• Compromise: virtual-physical caches
  • Indexed by VAs
  • Tagged by PAs
  • Cache access and address translation proceed in parallel
  + No context-switching/aliasing problems
  + Fast: no additional t_hit cycles
• A TB (translation buffer) that acts in parallel with a cache is a TLB
  • Translation Lookaside Buffer
  • Common organization in processors today

[Figure: CPU accesses I$ and D$ with VAs, each in parallel with its own TLB; the shared L2 and main memory are accessed with PAs.]

• Two ways to look at a VA
  • Cache: TAG + IDX + OFS
  • TLB: VPN + POFS
• Can have parallel cache & TLB access if address translation doesn't change IDX
  • ⇒ VPN/IDX don't overlap

[Figure: a 32-bit VA split two ways: cache view TAG[31:12] / IDX[11:2] / OFS[1:0] versus TLB view VPN[31:16] / POFS[15:0]; TLB hit/miss and cache hit/miss are determined in parallel.]

ECE 152, © 2008 Daniel J. Sorin from Roth (slides 57 and 58)
Cache Size And Page Size

[Figure: the VA split two ways, cache TAG[31:12] / IDX[11:2] / OFS[1:0] versus TLB VPN[31:16] / POFS[15:0]; the VPN and IDX fields must not overlap.]

• Relationship between page size and L1 I$ (D$) size
  • Forced by non-overlap between VPN and IDX portions of VA
    • Which is required for parallel TLB/cache access
  • I$ (D$) size / associativity ≤ page size
  • Big caches must be set-associative
    • Big cache ⇒ more index bits (fewer tag bits)
    • More set-associative ⇒ fewer index bits (more tag bits)
• Systems are moving towards bigger (64KB) pages
  • To amortize disk latency
  • To accommodate bigger caches

TLB Organization

• Like caches: TLBs also have ABCs (associativity, block size, capacity)
• What does it mean for a TLB to have a block size of two?
  • Two consecutive VPs share a single tag
• Rule of thumb: TLB should “cover” L2 contents
  • In other words: #PTEs × page size ≥ L2 size
  • Why? Think about this …