Using paging as the core mechanism to support virtual memory can lead
to high performance overheads. By chopping the address space into small,
fixed-sized units (i.e., pages), paging requires a large amount of mapping
information. Because that mapping information is generally stored in
physical memory, paging logically requires an extra memory lookup for
each virtual address generated by the program. Going to memory for
translation information before every instruction fetch or explicit load or
store is prohibitively slow. And thus our problem:
THE CRUX:
HOW TO SPEED UP ADDRESS TRANSLATION
How can we speed up address translation, and generally avoid the
extra memory reference that paging seems to require? What hardware
support is required? What OS involvement is needed?
When we want to make things fast, the OS usually needs some help.
And help often comes from the OS’s old friend: the hardware. To speed
address translation, we are going to add what is called (for historical rea-
sons [CP78]) a translation-lookaside buffer, or TLB [CG68, C95]. A TLB
is part of the chip’s memory-management unit (MMU), and is simply a
hardware cache of popular virtual-to-physical address translations; thus,
a better name would be an address-translation cache. Upon each virtual
memory reference, the hardware first checks the TLB to see if the desired
translation is held therein; if so, the translation is performed (quickly)
without having to consult the page table (which has all translations). Be-
cause of their tremendous performance impact, TLBs in a real sense make
virtual memory possible [C95].
19.1 TLB Basic Algorithm
Figure 19.1 shows a rough sketch of how hardware might handle a
virtual address translation, assuming a simple linear page table (i.e., the
page table is an array) and a hardware-managed TLB (i.e., the hardware
handles much of the responsibility of page table accesses; we’ll explain
more about this below).
1   VPN = (VirtualAddress & VPN_MASK) >> SHIFT
2   (Success, TlbEntry) = TLB_Lookup(VPN)
3   if (Success == True)    // TLB Hit
4       if (CanAccess(TlbEntry.ProtectBits) == True)
5           Offset = VirtualAddress & OFFSET_MASK
6           PhysAddr = (TlbEntry.PFN << SHIFT) | Offset
7           Register = AccessMemory(PhysAddr)
8       else
9           RaiseException(PROTECTION_FAULT)
10  else                    // TLB Miss
11      PTEAddr = PTBR + (VPN * sizeof(PTE))
12      PTE = AccessMemory(PTEAddr)
13      if (PTE.Valid == False)
14          RaiseException(SEGMENTATION_FAULT)
15      else if (CanAccess(PTE.ProtectBits) == False)
16          RaiseException(PROTECTION_FAULT)
17      else
18          TLB_Insert(VPN, PTE.PFN, PTE.ProtectBits)
19          RetryInstruction()
Figure 19.1: TLB Control Flow Algorithm
In the hardware-managed TLB of Figure 19.1, the hardware itself walks the page table on a miss (lines 11–19), installs the translation into the TLB, and retries the instruction. An alternative is for the hardware to do far less and instead hand TLB misses to the operating system; the control flow for that approach is shown in Figure 19.3.
1   VPN = (VirtualAddress & VPN_MASK) >> SHIFT
2   (Success, TlbEntry) = TLB_Lookup(VPN)
3   if (Success == True)    // TLB Hit
4       if (CanAccess(TlbEntry.ProtectBits) == True)
5           Offset = VirtualAddress & OFFSET_MASK
6           PhysAddr = (TlbEntry.PFN << SHIFT) | Offset
7           Register = AccessMemory(PhysAddr)
8       else
9           RaiseException(PROTECTION_FAULT)
10  else                    // TLB Miss
11      RaiseException(TLB_MISS)
Figure 19.3: TLB Control Flow Algorithm (OS Handled)
More modern architectures (e.g., MIPS R10k [H93] or Sun’s SPARC v9
[WG00], both RISC or reduced-instruction set computers) have what is
known as a software-managed TLB. On a TLB miss, the hardware sim-
ply raises an exception (line 11 in Figure 19.3), which pauses the current
instruction stream, raises the privilege level to kernel mode, and jumps
to a trap handler. As you might guess, this trap handler is code within
the OS that is written with the express purpose of handling TLB misses.
When run, the code will look up the translation in the page table, use spe-
cial “privileged” instructions to update the TLB, and return from the trap;
at this point, the hardware retries the instruction (resulting in a TLB hit).
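As a concrete illustration, here is a minimal sketch of what such a miss handler might look like, written in the same C-like style as the figures; the helper routines (CurrentPageTable(), TLB_InsertPrivileged(), ReturnFromTrap()) and the PTE_t type are hypothetical stand-ins for whatever a real OS and hardware provide, not actual APIs:

    // Sketch of an OS TLB miss handler (all helpers are hypothetical).
    void TlbMissHandler(unsigned int VirtualAddress) {
        unsigned int VPN = (VirtualAddress & VPN_MASK) >> SHIFT;
        PTE_t *PageTable = CurrentPageTable();   // page table of the running process
        PTE_t PTE = PageTable[VPN];              // assumes a simple linear page table
        if (PTE.Valid == 0)
            RaiseException(SEGMENTATION_FAULT);  // no such page: terminate the process
        // Install the translation using a privileged TLB-update instruction,
        // then return from the trap; the hardware retries the faulting instruction.
        TLB_InsertPrivileged(VPN, PTE.PFN, PTE.ProtectBits);
        ReturnFromTrap();
    }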
Let’s discuss a couple of important details. First, the return-from-trap
instruction needs to be a little different than the return-from-trap we saw
before when servicing a system call. In the latter case, the return-from-
trap should resume execution at the instruction after the trap into the OS,
just as a return from a procedure call returns to the instruction imme-
diately following the call into the procedure. In the former case, when
returning from a TLB miss-handling trap, the hardware must resume ex-
ecution at the instruction that caused the trap; this retry thus lets the in-
struction run again, this time resulting in a TLB hit. Thus, depending on
how a trap or exception was caused, the hardware must save a different
PC when trapping into the OS, in order to resume properly when the time
to do so arrives.
Second, when running the TLB miss-handling code, the OS needs to be
extra careful not to cause an infinite chain of TLB misses to occur. Many
solutions exist; for example, you could keep TLB miss handlers in physi-
cal memory (where they are unmapped and not subject to address trans-
lation), or reserve some entries in the TLB for permanently-valid transla-
tions and use some of those permanent translation slots for the handler
code itself; these wired translations always hit in the TLB.
The primary advantage of the software-managed approach is flexibil-
ity: the OS can use any data structure it wants to implement the page
table, without necessitating hardware change. Another advantage is sim-
plicity; as you can see in the TLB control flow (line 11 in Figure 19.3, in
contrast to lines 11–19 in Figure 19.1), the hardware doesn’t have to do
much on a miss; it raises an exception, and the OS TLB miss handler does
the rest.
ASIDE: RISC VS. CISC
In the 1980’s, a great battle took place in the computer architecture com-
munity. On one side was the CISC camp, which stood for Complex
Instruction Set Computing; on the other side was RISC, for Reduced
Instruction Set Computing [PS81]. The RISC side was spear-headed by
David Patterson at Berkeley and John Hennessy at Stanford (who are also
co-authors of some famous books [HP06]), although later John Cocke was
recognized with a Turing award for his earliest work on RISC [CM00].
CISC instruction sets tend to have a lot of instructions in them, and each
instruction is relatively powerful. For example, you might see a string
copy, which takes two pointers and a length and copies bytes from source
to destination. The idea behind CISC was that instructions should be
high-level primitives, to make the assembly language itself easier to use,
and to make code more compact.
RISC instruction sets are exactly the opposite. A key observation behind
RISC is that instruction sets are really compiler targets, and all compil-
ers really want are a few simple primitives that they can use to gener-
ate high-performance code. Thus, RISC proponents argued, let’s rip out
as much from the hardware as possible (especially the microcode), and
make what’s left simple, uniform, and fast.
In the early days, RISC chips made a huge impact, as they were noticeably
faster [BC91]; many papers were written; a few companies were formed
(e.g., MIPS and Sun). However, as time progressed, CISC manufacturers
such as Intel incorporated many RISC techniques into the core of their
processors, for example by adding early pipeline stages that transformed
complex instructions into micro-instructions which could then be pro-
cessed in a RISC-like manner. These innovations, plus a growing number
of transistors on each chip, allowed CISC to remain competitive. The end
result is that the debate died down, and today both types of processors
can be made to run fast.
19.4 TLB Contents: What’s In There?
Let’s look at the contents of the hardware TLB in more detail. A typical
TLB might have 32, 64, or 128 entries and be what is called fully associa-
tive. Basically, thisjustmeansthatanygiventranslationcanbeanywhere
in the TLB, and that the hardware will search the entire TLB in parallel to
find the desired translation. A TLB entry might look like this:
VPN PFN other bits
Note that both the VPN and PFN are present in each entry, as a trans-
lation could end up in any of these locations (in hardware terms, the TLB
is known as a fully-associative cache). The hardware searches the entries
in parallel to see if there is a match.
ASIDE: TLB VALID BIT ≠ PAGE TABLE VALID BIT
A common mistake is to confuse the valid bits found in a TLB with
those found in a page table. In a page table, when a page-table entry
(PTE) is marked invalid, it means that the page has not been allocated by
the process, and should not be accessed by a correctly-working program.
The usual response when an invalid page is accessed is to trap to the OS,
which will respond by killing the process.
A TLB valid bit, in contrast, simply refers to whether a TLB entry has a
valid translation within it. When a system boots, for example, a common
initial state for each TLB entry is to be set to invalid, because no address
translations are yet cached there. Once virtual memory is enabled, and
once programs start running and accessing their virtual address spaces,
the TLB is slowly populated, and thus valid entries soon fill the TLB.
The TLB valid bit is quite useful when performing a context switch too,
as we’ll discuss further below. By setting all TLB entries to invalid, the
system can ensure that the about-to-be-run process does not accidentally
use a virtual-to-physical translation from a previous process.
More interesting are the “other bits”. For example, the TLB commonly
has a valid bit, which says whether the entry has a valid translation or
not. Also common are protection bits, which determine how a page can
be accessed (as in the page table). For example, code pages might be
marked read and execute, whereas heap pages might be marked read and
write. There may also be a few other fields, including an address-space
identifier, a dirty bit, and so forth; see below for more information.
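As a rough illustration, a TLB entry of this sort could be described by a structure like the one below, together with a lookup routine that models the fully-associative search. This is a sketch of the general idea rather than any particular machine’s layout; the field widths and names are made up for the example, and real hardware checks every entry in parallel rather than looping:

    // A sketch of a generic TLB entry (field widths are illustrative only).
    typedef struct {
        unsigned int VPN   : 20;   // virtual page number
        unsigned int PFN   : 20;   // physical frame number
        unsigned int Valid : 1;    // does this entry hold a valid translation?
        unsigned int Prot  : 3;    // protection bits (e.g., read/write/execute)
        unsigned int ASID  : 8;    // address-space identifier (see Section 19.5)
        unsigned int Dirty : 1;    // has the page been written to?
    } TlbEntry;

    // Software model of a fully-associative lookup: any entry may hold the
    // translation, so every valid entry is checked for a matching VPN.
    int TLB_Lookup(TlbEntry tlb[], int numEntries,
                   unsigned int vpn, TlbEntry *result) {
        for (int i = 0; i < numEntries; i++) {
            if (tlb[i].Valid && tlb[i].VPN == vpn) {
                *result = tlb[i];
                return 1;          // hit
            }
        }
        return 0;                  // miss
    }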
19.5 TLB Issue: Context Switches
With TLBs, some new issues arise when switching between processes
(and hence address spaces). Specifically, the TLB contains virtual-to-physical
translations that are only valid for the currently running process; these
translations are not meaningful for other processes. As a result, when
switching from one process to another, the hardware or OS (or both) must
be careful to ensure that the about-to-be-run process does not accidentally
use translations from some previously run process.
To understand this situation better, let’s look at an example. When one
process (P1) is running, it assumes the TLB might be caching translations
that are valid for it, i.e., that come from P1’s page table. Assume, for this
example, that the 10th virtual page of P1 is mapped to physical frame 100.
In this example, assume another process (P2) exists, and the OS soon
might decide to perform a context switch and run it. Assume here that
the 10th virtual page of P2 is mapped to physical frame 170. If entries for
both processes were in the TLB, the contents of the TLB would be:
VPN PFN valid prot
10 100 1 rwx
— — 0 —
10 170 1 rwx
— — 0 —
In the TLB above, we clearly have a problem: VPN 10 translates to
either PFN 100 (P1) or PFN 170 (P2), but the hardware can’t distinguish
which entry is meant for which process. Thus, we need to do some more
work in order for the TLB to correctly and efficiently support virtualiza-
tion across multiple processes. And thus, a crux:
THE CRUX:
HOW TO MANAGE TLB CONTENTS ON A CONTEXT SWITCH
When context-switching between processes, the translations in the TLB
for the last process are not meaningful to the about-to-be-run process.
What should the hardware or OS do in order to solve this problem?
There are a number of possible solutions to this problem. One ap-
proach is to simply flush the TLB on context switches, thus emptying
it before running the next process. On a software-based system, this
can be accomplished with an explicit (and privileged) hardware instruc-
tion; with a hardware-managed TLB, the flush could be enacted when the
page-table base register is changed (note the OS must change the PTBR
on a context switch anyhow). In either case, the flush operation simply
sets all valid bits to 0, essentially clearing the contents of the TLB.
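Concretely, a flush amounts to nothing more than clearing the valid bit in every slot. Here is a minimal sketch, reusing the hypothetical TlbEntry structure from Section 19.4; on a real machine this would happen inside a privileged instruction or as a side effect of changing the page-table base register:

    // Sketch of a TLB flush: invalidate every cached translation.
    void TLB_Flush(TlbEntry tlb[], int numEntries) {
        for (int i = 0; i < numEntries; i++)
            tlb[i].Valid = 0;   // subsequent lookups of this entry will miss
    }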
By flushing the TLB on each context switch, we now have a working
solution, as a process will never accidentally encounter the wrong trans-
lations in the TLB. However, there is a cost: each time a process runs, it
must incur TLB misses as it touches its data and code pages. If the OS
switches between processes frequently, this cost may be high.
To reduce this overhead, some systems add hardware support to en-
able sharing of the TLB across context switches. In particular, some hard-
ware systems provide an address space identifier (ASID) field in the
TLB. You can think of the ASID as a process identifier (PID), but usu-
ally it has fewer bits (e.g., 8 bits for the ASID versus 32 bits for a PID).
If we take our example TLB from above and add ASIDs, it is clear
processes can readily share the TLB: only the ASID field is needed to dif-
ferentiate otherwise identical translations. Here is a depiction of a TLB
with the added ASID field:
VPN PFN valid prot ASID
10 100 1 rwx 1
— — 0 — —
10 170 1 rwx 2
— — 0 — —
Thus, with address-space identifiers, the TLB can hold translations
from different processes at the same time without any confusion. Of
course, the hardware also needs to know which process is currently run-
ning in order to perform translations, and thus the OS must, on a context
switch, set some privileged register to the ASID of the current process.
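For illustration, the address-space part of a context switch might then look like the following sketch; the Process structure and the SetPageTableBase() and SetASIDRegister() routines are hypothetical stand-ins for the real OS data structures and privileged register writes:

    typedef struct {
        unsigned long PageTablePhysAddr;   // physical address of this process's page table
        unsigned char ASID;                // the 8-bit ASID assigned to this process
    } Process;

    // Sketch: switching address spaces on hardware with ASID support.
    void SwitchAddressSpace(Process *next) {
        SetPageTableBase(next->PageTablePhysAddr);  // point the MMU at the new page table
        SetASIDRegister(next->ASID);                // entries tagged with other ASIDs no longer match
        // Note: no TLB flush is needed; stale entries from other processes
        // are simply ignored because their ASID differs.
    }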
As an aside, you may also have thought of another case where two
entries of the TLB are remarkably similar. In this example, there are two
entries for two different processes with two different VPNs that point to
the same physical page:
VPN PFN valid prot ASID
10 101 1 r-x 1
— — 0 — —
50 101 1 r-x 2
— — 0 — —
This situation might arise, for example, when two processes share a
page (a code page, for example). In the example above, Process 1 is shar-
ing physical page 101 with Process 2; P1 maps this page into the 10th
page of its address space, whereas P2 maps it to the 50th page of its ad-
dress space. Sharing of code pages (in binaries, or shared libraries) is
useful as it reduces the number of physical pages in use, thus reducing
memory overheads.
19.6 Issue: Replacement Policy
As with any cache, and thus also with the TLB, one more issue that we
must consider is cache replacement. Specifically, when we are installing
a new entry in the TLB, we have to replace an old one, and thus the
question: which one to replace?
THE CRUX: HOW TO DESIGN TLB REPLACEMENT POLICY
Which TLB entry should be replaced when we add a new TLB entry?
The goal, of course, is to minimize the miss rate (or increase the hit rate)
and thus improve performance.
We will study such policies in some detail when we tackle the problem
of swapping pages to disk; here we’ll just highlight a few typical policies.
One common approach is to evict the least-recently-used or LRU entry.
LRU tries to take advantage of locality in the memory-reference stream,
assuming it is likely that an entry that has not recently been used is a good
candidate for eviction. Another typical approach is to use a random pol-
icy, which evicts a TLB mapping at random. Such a policy is useful due
to its simplicity and ability to avoid corner-case behaviors; for example,
a “reasonable” policy such as LRU behaves quite unreasonably when a
program loops over n + 1 pages with a TLB of size n; in this case, LRU
misses upon every access, whereas random does much better.
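To make this concrete, here is a hypothetical trace with a TLB of n = 3 entries and a program that loops over 4 pages (0, 1, 2, 3). Once the loop is underway, LRU always evicts exactly the page the program will need soonest:

    TLB contents (LRU first)   next access   result under LRU
    {0, 1, 2}                  page 3        miss, evict page 0
    {1, 2, 3}                  page 0        miss, evict page 1
    {2, 3, 0}                  page 1        miss, evict page 2
    {3, 0, 1}                  page 2        miss, evict page 3

Every access misses under LRU; a random policy, by contrast, keeps some of the looping pages resident at least part of the time and thus achieves a non-zero hit rate.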
Figure 19.4: A MIPS TLB Entry (64 bits: one 32-bit word holding the VPN, the global bit G, and the ASID; a second holding the PFN and the C, D, and V bits)
19.7 A Real TLB Entry
Finally, let’s briefly look at a real TLB. This example is from the MIPS
R4000 [H93], a modern system that uses software-managed TLBs; a slightly
simplified MIPS TLB entry can be seen in Figure 19.4.
The MIPS R4000 supports a 32-bit address space with 4KB pages. Thus,
we would expect a 20-bit VPN and 12-bit offset in our typical virtual ad-
dress. However, as you can see in the TLB, there are only 19 bits for the
VPN; as it turns out, user addresses will only come from half the address
space (the rest reserved for the kernel) and hence only 19 bits of VPN
are needed. The VPN translates to up to a 24-bit physical frame number
(PFN), and hence can support systems with up to 64GB of (physical) main
memory (2^24 4KB pages).
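The arithmetic behind that figure is straightforward:

    2^24 frames × 2^12 bytes per frame = 2^36 bytes = 64 GB of physical memory.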
There are a few other interesting bits in the MIPS TLB. We see a global
bit (G), which is used for pages that are globally-shared among processes.
Thus, if the global bit is set, the ASID is ignored. We also see the 8-bit
ASID, which the OS can use to distinguish between address spaces (as
described above). One question for you: what should the OS do if there
are more than 256 (2^8) processes running at a time? Finally, we see 3
Coherence (C) bits, which determine how a page is cached by the hardware
(a bit beyond the scope of these notes); a dirty bit which is marked when
the page has been written to (we’ll see the use of this later); a valid bit
which tells the hardware if there is a valid translation present in the entry.
There is also a page mask field (not shown), which supports multiple page
sizes; we’ll see later why having larger pages might be useful. Finally,
some of the 64 bits are unused (shaded gray in the diagram).
MIPS TLBs usually have 32 or 64 of these entries, most of which are
used by user processes as they run. However, a few are reserved for the
OS. A wired register can be set by the OS to tell the hardware how many
slots of the TLB to reserve for the OS; the OS uses these reserved map-
pings for code and data that it wants to access during critical times, where
a TLB miss would be problematic (e.g., in the TLB miss handler).
Because the MIPS TLB is software managed, there needs to be instruc-
tions to update the TLB. The MIPS provides four such instructions: TLBP,
which probes the TLB to see if a particular translation is in there; TLBR,
which reads the contents of a TLB entry into registers; TLBWI, which re-
places a specific TLB entry; and TLBWR, which replaces a random TLB
entry. The OS uses these instructions to manage the TLB’s contents. It is
of course critical that these instructions are privileged; imagine what a
user process could do if it could modify the contents of the TLB (hint: just
about anything, including take over the machine, run its own malicious
“OS”, or even make the Sun disappear).
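To see how these instructions might be used outside the miss handler, consider the OS changing a page’s protection: any copy of the old translation cached in the TLB must be updated as well. The sketch below assumes hypothetical wrappers TLB_Probe() and TLB_WriteIndexed() around the privileged TLBP and TLBWI instructions, and a hypothetical PTE_t type; it is an illustration of the idea, not real MIPS code:

    // Sketch: keep a cached translation consistent after the OS changes
    // a page's protection bits in the page table.
    void UpdateCachedTranslation(unsigned int vpn, PTE_t newPTE) {
        int index = TLB_Probe(vpn);        // TLBP: is this VPN in the TLB, and in which slot?
        if (index >= 0)                    // only touch the TLB if the entry is cached
            TLB_WriteIndexed(index, vpn, newPTE.PFN, newPTE.ProtectBits);  // TLBWI
    }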
TIP: RAM ISN’T ALWAYS RAM (CULLER’S LAW)
The term random-access memory, or RAM, implies that you can access
any part of RAM just as quickly as another. While it is generally good to
think of RAM in this way, because of hardware/OS features such as the
TLB, accessing a particular page of memory may be costly, particularly if
that page isn’t currently mapped by your TLB. Thus, it is always good to
remember the implementation tip: RAM isn’t always RAM. Sometimes
randomly accessing your address space, particularly if the number of pages
accessed exceeds the TLB coverage, can lead to severe performance penal-
ties. Because one of our advisors, David Culler, used to always point to
the TLB as the source of many performance problems, we name this law
in his honor: Culler’s Law.
19.8 Summary
We have seen how hardware can help us make address translation
faster. By providing a small, dedicated on-chip TLB as an address-translation
cache, most memory references will hopefully be handled without having
to access the page table in main memory. Thus, in the common case,
the performance of the program will be almost as if memory isn’t being
virtualized at all, an excellent achievement for an operating system, and
certainly essential to the use of paging in modern systems.
However, TLBs do not make the world rosy for every program that
exists. In particular, if the number of pages a program accesses in a short
period of time exceeds the number of pages that fit into the TLB, the pro-
gram will generate a large number of TLB misses, and thus run quite a
bit more slowly. We refer to this phenomenon as exceeding the TLB cov-
erage, and it can be quite a problem for certain programs. One solution,
as we’ll discuss in the next chapter, is to include support for larger page
sizes; by mapping key data structures into regions of the program’s ad-
dress space that are mapped by larger pages, the effective coverage of the
TLB can be increased. Support for large pages is often exploited by pro-
grams such as a database management system (a DBMS), which have
certain data structures that are both large and randomly-accessed.
One other TLB issue worth mentioning: TLB access can easily be-
come a bottleneck in the CPU pipeline, in particular with what is called a
physically-indexed cache. With such a cache, address translation has to
take place before the cache is accessed, which can slow things down quite
a bit. Because of this potential problem, people have looked into all sorts
of clever ways to access caches with virtual addresses, thus avoiding the
expensive step of translation in the case of a cache hit. Such a virtually-
indexed cache solves some performance problems, but introduces new
issues into hardware design as well. See Wiggins’s fine survey for more
details [W03].
References
[BC91] “Performance from Architecture: Comparing a RISC and a CISC
with Similar Hardware Organization”
D. Bhandarkar and Douglas W. Clark
Communications of the ACM, September 1991
A great and fair comparison between RISC and CISC. The bottom line: on similar hardware, RISC was
about a factor of three better in performance.
[CM00] “The evolution of RISC technology at IBM”
John Cocke and V. Markstein
IBM Journal of Research and Development, 44:1/2
A summary of the ideas and work behind the IBM 801, which many consider the first true RISC micro-
processor.
[C95] “The Core of the Black Canyon Computer Corporation”
John Couleur
IEEE Annals of History of Computing, 17:4, 1995
In this fascinating historical note, Couleur talks about how he invented the TLB in 1964 while working
for GE, and the fortuitous collaboration that thus ensued with the Project MAC folks at MIT.
[CG68] “Shared-access Data Processing System”
John F. Couleur and Edward L. Glaser
Patent 3412382, November 1968
The patent that contains the idea for an associative memory to store address translations. The idea,
according to Couleur, came in 1964.
[CP78] “The architecture of the IBM System/370”
R.P. Case and A. Padegs
Communications of the ACM, 21:1, 73-96, January 1978
Perhaps the first paper to use the term translation lookaside buffer. The name arises from the his-
torical name for a cache, which was a lookaside buffer as called by those developing the Atlas system
at the University of Manchester; a cache of address translations thus became a translation lookaside
buffer. Even though the term lookaside buffer fell out of favor, TLB seems to have stuck, for whatever
reason.
[H93] “MIPS R4000 Microprocessor User’s Manual”
Joe Heinrich, Prentice-Hall, June 1993
Available: http://cag.csail.mit.edu/raw/documents/R4400_Uman_book_Ed2.pdf
[HP06] “Computer Architecture: A Quantitative Approach”
John Hennessy and David Patterson
Morgan-Kaufmann, 2006
A great book about computer architecture. We have a particular attachment to the classic first edition.
[I09] “Intel 64 and IA-32 Architectures Software Developer’s Manuals”
Intel, 2009
Available: http://www.intel.com/products/processor/manuals
In particular, pay attention to “Volume 3A: System Programming Guide Part 1” and “Volume 3B:
System Programming Guide Part 2”
[PS81] “RISC-I: A Reduced Instruction Set VLSI Computer”
D.A. Patterson and C.H. Sequin
ISCA ’81, Minneapolis, May 1981
The paper that introduced the term RISC, and started the avalanche of research into simplifying com-
puter chips for performance.
[SB92] “CPU Performance Evaluation and Execution Time Prediction
Using Narrow Spectrum Benchmarking”
Rafael H. Saavedra-Barrera
EECS Department, University of California, Berkeley
Technical Report No. UCB/CSD-92-684, February 1992
www.eecs.berkeley.edu/Pubs/TechRpts/1992/CSD-92-684.pdf
A great dissertation about how to predict execution time of applications by breaking them down into
constituent pieces and knowing the cost of each piece. Probably the most interesting part that comes out
of this work is the tool to measure details of the cache hierarchy (described in Chapter 5). Make sure to
check out the wonderful diagrams therein.
[W03] “A Survey on the Interaction Between Caching, Translation and Protection”
Adam Wiggins
University of New South Wales TR UNSW-CSE-TR-0321, August, 2003
An excellent survey of how TLBs interact with other parts of the CPU pipeline, namely hardware caches.
[WG00] “The SPARC Architecture Manual: Version 9”
David L. Weaver and Tom Germond, September 2000
SPARC International, San Jose, California
Available: http://www.sparc.org/standards/SPARCV9.pdf
Homework (Measurement)
In this homework, you are to measure the size and cost of accessing
a TLB. The idea is based on work by Saavedra-Barrera [SB92], who de-
veloped a simple but beautiful method to measure numerous aspects of
cache hierarchies, all with a very simple user-level program. Read his
work for more details.
The basic idea is to access some number of pages within a large data
structure (e.g., an array) and to time those accesses. For example, let’s say
the TLB size of a machine happens to be 4 (which would be very small,
but useful for the purposes of this discussion). If you write a program
that touches 4 or fewer pages, each access should be a TLB hit, and thus
relatively fast. However, once you touch 5 pages or more, repeatedly in a
loop, each access will suddenly jump in cost, to that of a TLB miss.
The basic code to loop through an array once should look like this:
    int jump = PAGESIZE / sizeof(int);
    for (i = 0; i < NUMPAGES * jump; i += jump) {
        a[i] += 1;
    }
In this loop, one integer per page of the array a is updated, up to the
number of pages specified by NUMPAGES. By timing such a loop repeat-
edly (say, a few hundred million times in another loop around this one, or
however many loops are needed to run for a few seconds), you can time
how long each access takes (on average). By looking for jumps in cost as
NUMPAGES increases, you can roughly determine how big the first-level
TLB is, determine whether a second-level TLB exists (and how big it is if
it does), and in general get a good sense of how TLB hits and misses can
affect performance.
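Assuming a clock_gettime()-style timer is available, a fuller (but still simplified) sketch of the measurement might look like the following; the page size is assumed to be 4KB, the trial count is an arbitrary choice, and you should verify that an optimizing compiler does not remove the access loop (e.g., by inspecting the generated assembly):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define PAGESIZE 4096                      // assumed page size; verify on your machine

    int main(int argc, char *argv[]) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <numpages>\n", argv[0]);
            return 1;
        }
        int numpages = atoi(argv[1]);          // number of pages to touch per pass
        int jump     = PAGESIZE / sizeof(int);
        long trials  = 200000000L / numpages;  // keep the total number of accesses roughly constant
        int *a = calloc((size_t)numpages * jump, sizeof(int));

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long t = 0; t < trials; t++)
            for (int i = 0; i < numpages * jump; i += jump)
                a[i] += 1;                     // touch one integer on each page
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
        printf("%d pages: %.2f ns per access\n", numpages, ns / ((double)trials * numpages));
        free(a);
        return 0;
    }

Running this sketch for increasing values of numpages and plotting the result should reproduce the kind of curve discussed next.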
Figure 19.5 (page 16) shows the average time per access as the number
of pages accessed in the loop is increased. As you can see in the graph,
when just a few pages are accessed (8 or fewer), the average access time
is roughly 5 nanoseconds. When 16 or more pages are accessed, there is
a sudden jump to about 20 nanoseconds per access. A final jump in cost
occurs at around 1024 pages, at which point each access takes around 70
nanoseconds. From this data, we can conclude that there is a two-level
TLB hierarchy; the first is quite small (probably holding between 8 and
16 entries); the second is larger but slower (holding roughly 512 entries).
The overall difference between hits in the first-level TLB and misses is
quite large, roughly a factor of fourteen. TLB performance matters!