Understand page replacement from the ground up, through animation.
$ load_module --mode interactive --depth full
█
PHASE 1 / FOUNDATIONS
MODULE 01
KEY TERMS
Three building blocks. Everything else follows from these.
📄
PAGE
A fixed-size block of virtual memory. Every process sees its memory as a sequence of pages. Typical size: 4 KB. Identified by a page number (p).
🗂
FRAME
A fixed-size slot in physical RAM. Same size as a page. The OS maps pages into frames. When all frames are occupied, one must be evicted before a new page can load.
🗺
PAGE TABLE
A per-process OS structure mapping each page to a frame. Each entry carries a valid bit. If the bit is unset, that page is not in RAM and the next access triggers a fault.
🔄
PAGE REPLACEMENT
When all frames are full and a new page is needed, the OS picks one existing page to remove. The algorithm governing that choice is a page replacement algorithm. A poor choice causes an immediate fault on the evicted page's next access. Minimising faults is the goal.
MODULE 02
PAGE FAULT CYCLE
A page fault is not a crash. It is a controlled 6-step OS process. Step through it.
STEP-THROUGH: 6-Step Fault Handling
Step 0 / 6
① reference
›
② trap
›
③ find on disk
›
④ load page
›
⑤ update table
›
⑥ restart
⬛
PROCESS
running · load M
<->
🖥
OS + PAGE TABLE
—
p=0 → f=2 ✓
p=M → ✗ invalid
p=2 → f=5 ✓
<->
▦
PHYSICAL RAM
F0 · p0
F1 · p2
F2 · FREE
F3 · p1
<->
⬡
BACKING STORE
page M
page X
page Y
—
Press PLAY or STEP to walk through the page fault handling cycle.
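The six steps can also be read as code. A sketch in Python, where `handle_fault`, the dict-based page table, and the RAM map are all illustrative pseudocode, not a real OS interface:

```python
def handle_fault(p, page_table, ram, backing_store, pick_victim):
    """Walk the six steps of the fault-handling cycle for page p."""
    # ① reference / ② trap: the MMU saw valid bit = 0 and trapped to the OS.
    assert not page_table.get(p, (None, False))[1]
    # ③ find on disk: locate the page's contents in the backing store.
    data = backing_store[p]
    # ④ load page: take a free frame, evicting a victim if RAM is full.
    free = [f for f, occupant in ram.items() if occupant is None]
    if not free:
        victim = pick_victim(ram)                 # the replacement algorithm decides
        page_table[ram[victim]] = (None, False)   # victim's entry becomes invalid
        ram[victim] = None
        free = [victim]
    frame = free[0]
    ram[frame] = p
    # ⑤ update table: mark p valid and record its frame.
    page_table[p] = (frame, True)
    # ⑥ restart: the faulting instruction is re-executed (implicit here).
    return frame
```

Replaying the diagram above: RAM has a free frame F2, so loading page M lands there and no victim is needed.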
You now understand what a page fault is and how the OS handles it.
↓
Before the algorithms, three numbers you need to know: page hits, page faults, and the hit ratio.
MODULE 03
KEY METRICS
How we measure whether a page replacement algorithm is doing well.
✓
PAGE HIT
The requested page is already in a RAM frame. No disk access needed. The CPU gets its data immediately. Every hit saves a slow disk read.
FAST · RAM ACCESS
✗
PAGE FAULT
The requested page is not in RAM. The OS must fetch it from disk, which takes thousands of times longer than a RAM access. Algorithms exist to minimise these.
SLOW · DISK ACCESS
≈
HIT RATIO
Hit Ratio = Hits / (Hits + Faults)
The fraction of memory accesses that were served directly from RAM. Closer to 1.0 is better. A hit ratio of 0.75 means 3 in 4 accesses avoided disk entirely.
e.g. 8 hits, 4 faults → hit ratio = 8 / 12 ≈ 0.67
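The formula is one line of Python; a quick sketch to make the example concrete:

```python
def hit_ratio(hits, faults):
    """Fraction of memory accesses served straight from RAM."""
    return hits / (hits + faults)

# The module's example: 8 hits, 4 faults.
print(round(hit_ratio(8, 4), 2))  # → 0.67
```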
Foundations complete. You know pages, frames, faults, and how they are measured.
↓
Now: three different algorithms. Each answers the same question differently: which page do you evict?
PHASE 2 / ALGORITHMS
MODULE 04
FIFO
First-In, First-Out
The oldest page in memory is always the first to go. No questions asked.
FIFO maintains a queue of pages ordered by when they were loaded. On a page fault, it evicts the page at the front: the one that has been in RAM the longest. New pages join the back of the queue. Simple, predictable, and cheap: the only bookkeeping is the queue itself.
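The whole mechanism fits in a few lines. A minimal simulation sketch that counts faults over a reference string (illustrative only, not kernel code):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Simulate FIFO replacement; return the number of page faults."""
    queue = deque()          # arrival order: front = oldest page in RAM
    resident = set()
    faults = 0
    for p in refs:
        if p in resident:
            continue                       # page hit: nothing to do
        faults += 1
        if len(resident) == nframes:       # RAM full: evict the oldest page
            resident.discard(queue.popleft())
        queue.append(p)                    # newcomer joins the back
        resident.add(p)
    return faults
```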
+ ADVANTAGES
Extremely simple to implement using a basic queue data structure.
No runtime overhead. The OS never tracks how often pages are used.
Fully predictable eviction order, easy to reason about.
− DISADVANTAGES
Evicts by age, not usefulness. A heavily-used page loaded early gets evicted before a rarely-used page loaded late.
Suffers from Belady's Anomaly: adding more frames can paradoxically increase faults with certain reference strings.
Worst real-world performance of the three algorithms.
⚠BELADY'S ANOMALY
Normally, giving an algorithm more frames means fewer page faults. With FIFO, this isn't guaranteed: on certain reference strings, increasing from 3 to 4 frames actually causes more faults. This counterintuitive behaviour is called Belady's Anomaly. Of the three algorithms here, only FIFO exhibits it; LRU and Optimal can never fault more when given more frames.
3 frames
9 faults
4 frames
10 faults ↑
ref: 1 2 3 4 1 2 5 1 2 3 4 5. More frames, more faults.
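The anomaly is easy to reproduce. A self-contained sketch that replays the reference string above through a FIFO simulation at 3 and then 4 frames:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    queue, resident, faults = deque(), set(), 0
    for p in refs:
        if p in resident:
            continue
        faults += 1
        if len(resident) == nframes:
            resident.discard(queue.popleft())   # evict the oldest page
        queue.append(p)
        resident.add(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # → 9
print(fifo_faults(refs, 4))  # → 10: more frames, more faults
```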
MODULE 05
LRU
Least Recently Used
The page unused for the longest time is the first to go.
LRU tracks when each page was last accessed. On a fault, it evicts the page whose last use was furthest in the past: the coldest page. This exploits temporal locality: if a page was used recently, it's likely to be used again soon. LRU never suffers from Belady's Anomaly.
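A recency-ordered dictionary is one convenient way to sketch LRU in Python; this simulation (illustrative, not an OS implementation) moves a page to the back on every hit, so the front is always the coldest page:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU replacement."""
    recency = OrderedDict()   # keys ordered least → most recently used
    faults = 0
    for p in refs:
        if p in recency:
            recency.move_to_end(p)         # hit: p is now most recently used
            continue
        faults += 1
        if len(recency) == nframes:
            recency.popitem(last=False)    # evict the least recently used
        recency[p] = True
    return faults
```

On the anomaly string from Module 04, LRU gives 10 faults with 3 frames and 8 with 4: extra frames never hurt it.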
+ ADVANTAGES
Exploits temporal locality: recently used pages are likely to be used again soon.
No Belady's Anomaly. More frames always means equal or fewer faults.
Comes close to Optimal performance on real workloads.
− DISADVANTAGES
Requires tracking the last-use timestamp of every page, using either a counter or a stack per process.
Higher overhead than FIFO; efficient implementation needs hardware support.
Still not optimal. It uses the past as a proxy for the future, which is not always accurate.
MODULE 06
OPTIMAL
Belady's Algorithm (OPT)
Evict the page not needed for the longest time. Requires knowing the future.
On a fault, OPT looks ahead in the reference string and evicts the page whose next use is furthest away. If a page is never referenced again, it's evicted immediately. This produces the absolute minimum number of page faults possible for any reference string and frame count. But it requires complete foreknowledge of future references, which no real OS has, so it exists purely as a theoretical benchmark against which real algorithms are measured.
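OPT can't run online, but it can be replayed over a recorded reference string, which is how it's used as a benchmark. A minimal sketch:

```python
def opt_faults(refs, nframes):
    """Belady's OPT: evict the resident page whose next use is furthest away."""
    resident, faults = set(), 0
    for i, p in enumerate(refs):
        if p in resident:
            continue
        faults += 1
        if len(resident) == nframes:
            def next_use(q):
                # Position of q's next reference; never used again → infinity.
                try:
                    return refs.index(q, i + 1)
                except ValueError:
                    return float('inf')
            resident.discard(max(resident, key=next_use))
        resident.add(p)
    return faults
```

On the anomaly string from Module 04 with 3 frames, OPT needs only 7 faults, versus 9 for FIFO: the lower bound the other algorithms chase.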