A ground-up, interactive walkthrough: from the hardware to the algorithms. By the end, the simulator will make complete sense.
The scheduler decides which process gets the CPU — and for how long. That decision is everything.
The CPU is the only resource that actually runs your code. Understanding it — and how the OS switches between processes — is the first step to understanding scheduling.
A CPU can run exactly one process at a time per core. It doesn't multitask; it switches between processes so fast it appears simultaneous.
The CPU runs a simple loop forever, billions of times per second:
Fetch — The CPU grabs the next instruction from memory.
Decode — The control unit interprets what that instruction means.
Execute — The CPU performs the action and stores the result.
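The fetch–decode–execute cycle can be sketched as a toy interpreter. This is illustrative only — real CPUs use binary opcodes and pipeline these stages — and the instruction names (`LOAD`, `ADD`, `STORE`) are hypothetical:

```python
# A toy model of the fetch-decode-execute loop.
memory = [("LOAD", 5), ("ADD", 3), ("STORE", None)]  # hypothetical program
acc, pc, stored = 0, 0, None

while pc < len(memory):
    op, arg = memory[pc]   # Fetch: grab the next instruction
    pc += 1
    if op == "LOAD":       # Decode + Execute
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "STORE":
        stored = acc

print(stored)  # → 8
```

A real CPU does nothing more exotic in principle — it just does it billions of times per second.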
A program sitting on disk does nothing. The moment the OS runs it, it becomes a process.
A static file on disk: .exe, .out, a Python script.
Just stored bytes. Does nothing until executed.
A running instance of a program. The OS loads it into RAM, gives it resources, and hands it to the CPU.
Multiple processes can run from one program. Three Chrome windows = one program, three independent processes.
Processes don't use the CPU non-stop. They alternate — and this gap is exactly where scheduling matters.
Long bursts, rare I/O. Examples: video encoding, ML training, scientific simulation.
Short bursts, frequent waits. Examples: web servers, databases, file editors.
A process moves through well-defined states. The scheduler controls the most critical transition: Ready to Running.
Now we understand how processes execute. You've seen the CPU hardware, how programs become processes, how CPU and I/O bursts alternate, and the states they move through. Next: how the OS decides which process runs.
Continue to Scheduling Concepts →

Now that you understand CPUs, processes, and their states — here is the core problem the OS solves.
CPU Scheduling is the OS policy that decides which process in the Ready Queue gets the CPU next — and for how long.
This single decision, made thousands of times per second, determines how fast your system feels, how fairly resources are shared, and whether any process gets starved.
Non-preemptive: once a process starts running, it keeps the CPU until it voluntarily gives it up — either by finishing or by requesting I/O. The scheduler cannot interrupt it.
Preemptive: the OS can forcibly remove a process from the CPU at any time — when a higher-priority process arrives, or when its time quantum expires.
Long-term scheduler: admits programs into the system. Controls the degree of multiprogramming. Runs rarely.
Short-term (CPU) scheduler: the one we study. Runs every few milliseconds — picks the next process from the ready queue and dispatches it to the CPU.
Medium-term scheduler: swaps processes out of memory when RAM is full and reloads them later. Adjusts the degree of multiprogramming dynamically.
Without scheduling, computers as you know them simply wouldn't work.
No single algorithm nails all goals at once. Each algorithm prioritises a different criterion. Click any goal on the left to understand it and see which algorithm targets it.
Every algorithm is judged by these numbers. Know them cold.
Arrival Time (AT) — when the process first enters the ready queue.
Burst Time (BT) — total CPU time the process needs to finish.
Completion Time (CT) — the moment the process finishes execution.
Turnaround Time (TAT) — total time spent in the system, from arrival to finish.
Waiting Time (WT) — time spent waiting in the ready queue, not executing.
Response Time (RT) — time until the process gets the CPU for the first time.
Three processes arrive close together. Because P1 runs first, P2 and P3 are forced to wait — their waiting time is non-zero, caused by the process ahead of them.
| Process | AT | BT | CT | TAT = CT−AT | WT = TAT−BT | RT = start−AT |
|---|---|---|---|---|---|---|
| P1 | 0 | 5 | 5 | 5−0 = 5 | 5−5 = 0 | 0−0 = 0 |
| P2 | 0 | 3 | 8 | 8−0 = 8 | 8−3 = 5 ← waited! | 5−0 = 5 |
| P3 | 0 | 6 | 14 | 14−0 = 14 | 14−6 = 8 ← waited! | 8−0 = 8 |
P2 and P3 arrived at t=0 but had to wait because P1 got the CPU first. WT is caused by other processes being ahead. Throughput = 3 ÷ 14 = 0.21 proc/unit.
Now that we know how scheduling works and what metrics we use to measure it, let's explore the algorithms used by real operating systems.
Continue to Algorithms →

Which one should the OS run first?
Whoever arrives first, runs first. No interruptions. Processes are executed in the order they enter the ready queue. Pure FIFO.
| Proc | AT | BT |
|---|---|---|
| Proc | AT | BT | CT | TAT | WT |
|---|---|---|---|---|---|
Drag the slider to change P1's burst time. See how one long process blocks everything behind it.
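The convoy effect is easy to quantify. Here is a minimal sketch with a hypothetical process set where all processes arrive at t=0 and are served in order — only P1's burst time changes between the two calls:

```python
def fcfs_avg_wait(bursts):
    """Average waiting time under FCFS for jobs arriving at t=0."""
    t, waits = 0, []
    for bt in bursts:
        waits.append(t)   # wait = total burst time of jobs ahead
        t += bt
    return sum(waits) / len(waits)

print(round(fcfs_avg_wait([2, 3, 3]), 2))   # short P1 first  → 2.33
print(round(fcfs_avg_wait([20, 3, 3]), 2))  # long P1 first   → 14.33
```

One long process at the head of the queue multiplies everyone else's waiting time — exactly what the slider demonstrates.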
When the CPU is free, pick the process with the smallest burst time. Once started, a process runs to completion.
If a new arrival has a smaller remaining burst time than the current process, preempt it immediately. The moment of preemption is highlighted in the log below.
| Proc | AT | BT |
|---|---|---|
| Proc | AT | BT | CT | TAT | WT |
|---|---|---|---|---|---|
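The preemption rule can be sketched as a unit-time simulation: at every tick, run the ready process with the least remaining time. The process set in the example is hypothetical:

```python
def srtf(procs):
    """procs: {name: (arrival, burst)}. Returns per-unit execution log."""
    remaining = {n: bt for n, (at, bt) in procs.items()}
    t, log = 0, []
    while any(remaining.values()):
        ready = [n for n, (at, _) in procs.items()
                 if at <= t and remaining[n] > 0]
        if not ready:          # CPU idle until the next arrival
            t += 1
            continue
        p = min(ready, key=lambda name: remaining[name])
        remaining[p] -= 1      # run the shortest remaining job for 1 unit
        log.append(p)
        t += 1
    return log

# P2 arrives at t=1 with less remaining time than P1, so it preempts:
print(srtf({"P1": (0, 4), "P2": (1, 2)}))
# → ['P1', 'P2', 'P2', 'P1', 'P1', 'P1']
```

The log shows the preemption directly: P1 runs one unit, loses the CPU to P2, and only resumes once P2 finishes.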
P_long arrived at t=0 with BT=20. Short processes keep arriving. Watch P_long never get scheduled.
Burst times aren't known in advance, so they must be estimated: τ(n+1) = α·t(n) + (1−α)·τ(n), where t(n) is the actual last burst and τ(n) is the previous estimate. This is why SJF is theoretically optimal but rarely used in pure form — real schedulers approximate it.
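The exponential averaging formula above is a one-liner in code. The burst sequence and α = 0.5 (a common textbook choice) are illustrative assumptions:

```python
def next_estimate(actual, estimate, alpha=0.5):
    """τ(n+1) = α·t(n) + (1−α)·τ(n)"""
    return alpha * actual + (1 - alpha) * estimate

tau = 10.0                    # initial guess τ(0)
for t_n in [6, 4, 6, 4]:      # observed bursts t(n)
    tau = next_estimate(t_n, tau)
print(tau)  # → 5.0
```

Each new observation pulls the estimate halfway toward the actual burst, so stale history decays exponentially.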
Each process has a priority number. The CPU always picks the highest-priority process. Convention: lower number = higher priority (P=1 beats P=4).
Non-preemptive: once running, finishes even if higher priority arrives.
Preemptive: a higher-priority arrival immediately kicks out the current process.
| Proc | AT | BT | Prio |
|---|---|---|---|
| Proc | AT | BT | Prio | CT | TAT | WT |
|---|---|---|---|---|---|---|
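The preemptive variant can be sketched the same way as SRTF, but selecting by priority number instead of remaining time (lower number wins, per the convention above). The process set is hypothetical:

```python
def priority_preemptive(procs):
    """procs: {name: (arrival, burst, prio)}. Returns per-unit log."""
    remaining = {n: bt for n, (at, bt, pr) in procs.items()}
    t, log = 0, []
    while any(remaining.values()):
        ready = [n for n, (at, _, _) in procs.items()
                 if at <= t and remaining[n] > 0]
        if not ready:
            t += 1
            continue
        p = min(ready, key=lambda name: procs[name][2])  # lowest number wins
        remaining[p] -= 1
        log.append(p)
        t += 1
    return log

# P2 (prio 1) arrives at t=2 and kicks out P1 (prio 3):
print(priority_preemptive({"P1": (0, 4, 3), "P2": (2, 2, 1)}))
# → ['P1', 'P1', 'P2', 'P2', 'P1', 'P1']
```

Note what this sketch doesn't handle: a stream of high-priority arrivals would starve P1 forever — the same starvation problem SRTF has.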
Every process gets a fixed time slice (quantum Q). After Q units, it's preempted and sent to the back of the queue. Designed for fairness: no process waits forever.
| Process | Arrival Time | Burst Time |
|---|---|---|
| Process | AT | BT | CT | TAT | WT |
|---|---|---|---|---|---|
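The quantum-and-requeue behaviour can be sketched with a plain queue. For simplicity this assumes all processes arrive at t=0; the process set is hypothetical:

```python
from collections import deque

def round_robin(bursts, q):
    """bursts: {name: burst time}, q: time quantum. Returns (name, run) slices."""
    queue = deque(bursts.items())
    order = []
    while queue:
        name, left = queue.popleft()
        run = min(q, left)
        order.append((name, run))          # process holds the CPU for `run` units
        if left > run:                     # unfinished → back of the queue
            queue.append((name, left - run))
    return order

print(round_robin({"P1": 5, "P2": 3}, q=2))
# → [('P1', 2), ('P2', 2), ('P1', 2), ('P2', 1), ('P1', 1)]
```

No process can wait longer than (n−1)·Q for its next slice, which is exactly the fairness guarantee the description promises.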
There's no universal answer. It depends on what you're optimising for. Use this to figure it out.
Real OS schedulers (Linux CFS, Windows Priority Scheduler) combine these ideas. Search:
→ "Linux Completely Fair Scheduler"
→ "Multilevel Feedback Queue"
→ Or run the same process set through all algorithms in the simulator below.
You've built up from the CPU to context switches, states, metrics, and all four algorithms. Now put it all together.