
CPU
Scheduling

A ground-up, interactive walkthrough: from the hardware to the algorithms. By the end, the simulator will make complete sense.

FCFS · SJF / SRTF · Priority · Round Robin
Multiple processes — one CPU — who runs next?
The scheduler decides which of these gets the CPU — and for how long. That decision is everything.

Phase 1 · Foundations

How the CPU Executes Instructions

The CPU is the only resource that actually runs your code. Understanding it — and how the OS switches between processes — is the first step to understanding scheduling.

THE ONE RULE

A CPU can run exactly one process at a time per core. It doesn't multitask; it switches between processes so fast it appears simultaneous.

WHAT IT DOES

The CPU runs a simple loop forever, billions of times per second:

① FETCH ② DECODE ③ EXECUTE

Fetch — The CPU grabs the next instruction from memory.

Decode — The control unit interprets what that instruction means.

Execute — The CPU performs the action and stores the result.
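The three-step loop above can be sketched as a toy interpreter. This is a minimal sketch: the instruction set, opcode names, and accumulator register are invented here for illustration, not taken from any real CPU.

```python
# A toy model of the fetch-decode-execute cycle (illustrative only:
# the opcodes, operands, and accumulator register are invented).
def run(program):
    """Execute a list of (opcode, operand) instructions until HALT."""
    pc, acc = 0, 0                      # program counter, accumulator
    while True:
        opcode, operand = program[pc]   # 1. FETCH the next instruction
        pc += 1
        if opcode == "LOAD":            # 2. DECODE: branch on the opcode
            acc = operand               # 3. EXECUTE: perform the action
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            return acc                  # leave the loop with the result

print(run([("LOAD", 2), ("ADD", 3), ("HALT", None)]))  # → 5
```

A real CPU does exactly this in hardware, pipelined and billions of times per second; the structure of the loop is the same.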

CPU: Live Execution Model
Foundation 02

Program → Process

A program sitting on disk does nothing. The moment the OS runs it, it becomes a process.

The Transformation: Watch it happen
PROGRAM

A static file on disk: .exe, .out, a Python script. Just stored bytes. Does nothing until executed.

PROCESS

A running instance of a program. The OS loads it into RAM, gives it resources, and hands it to the CPU.

KEY POINT

Multiple processes can run from one program. Three Chrome windows = one program, three independent processes.

Foundation 03

CPU Burst vs I/O Burst

Processes don't use the CPU non-stop. They alternate — and this gap is exactly where scheduling matters.

CPU BURST
  • The process is actively using the CPU — calculations, logic, data processing
  • This is when it appears on a Gantt chart
  • Every process has at least one CPU burst
I/O BURST
  • The process is waiting for something external — disk, network, keyboard
  • The CPU is completely free during this time
  • Without scheduling, that free CPU sits idle — wasted
CPU ↔ I/O Alternation — Live Loop
While P1 waits for I/O, the CPU runs P2. No wasted cycles. This is the whole point of scheduling.
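A quick back-of-the-envelope check of that claim, with invented burst lengths: suppose P1 and P2 each need 3 units of CPU followed by 3 units of I/O.

```python
# Illustrative numbers: each process does 3 units of CPU, then 3 of I/O.
CPU, IO = 3, 3

# No scheduling: P1 runs fully (CPU then I/O), then P2 does the same.
serial = 2 * (CPU + IO)

# With scheduling: P2's CPU burst overlaps P1's I/O wait.
# Timeline: [0-3] P1 on CPU, [3-6] P1 in I/O while P2 on CPU, [6-9] P2 in I/O.
overlapped = CPU + max(IO, CPU) + IO

print(serial, overlapped)  # → 12 9
```

Three units of otherwise-idle CPU time are reclaimed; with more processes and longer I/O waits, the gap grows much larger.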
CPU-BOUND

Long bursts, rare I/O. Examples: video encoding, ML training, scientific simulation.

I/O-BOUND

Short bursts, frequent waits. Examples: web servers, databases, file editors.

Foundation 04

Process State Transition

A process moves through well-defined states. The scheduler controls the most critical transition: Ready to Running.

State Transition Diagram: click any state
🆕 NEW: process just created
READY: waiting in queue for CPU
RUNNING: executing on the CPU (Ready → Running is the transition the scheduler decides)
💤 WAITING: blocked on I/O or event
TERMINATED: execution complete
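The diagram above can be captured as a transition table. A minimal sketch — the event names (`admit`, `dispatch`, and so on) are my labels for the arrows, not OS API names:

```python
# Allowed transitions from the state diagram: (state, event) -> next state.
TRANSITIONS = {
    ("NEW", "admit"): "READY",          # long-term scheduler admits the process
    ("READY", "dispatch"): "RUNNING",   # the transition the scheduler controls
    ("RUNNING", "preempt"): "READY",    # quantum expired / higher priority arrived
    ("RUNNING", "io_request"): "WAITING",
    ("WAITING", "io_done"): "READY",    # never WAITING -> RUNNING directly
    ("RUNNING", "exit"): "TERMINATED",
}

def step(state, event):
    """Apply one event; reject anything the diagram forbids."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} --{event}-->")

s = "NEW"
for e in ["admit", "dispatch", "io_request", "io_done", "dispatch", "exit"]:
    s = step(s, e)
print(s)  # → TERMINATED
```

Note what the table rules out: a WAITING process can never jump straight to RUNNING — the I/O completion only moves it back to READY, where it must win the scheduler's decision again.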
Phase 1 Complete

Now we understand how processes execute.
You've seen the CPU hardware, how programs become processes,
how CPU and I/O bursts alternate, and the states they move through.

Next: how the OS decides which process runs.

Continue to Scheduling Concepts
Phase 2 · Scheduling Concepts

What is CPU Scheduling?

Now that you understand CPUs, processes, and their states — here is the core problem the OS solves.

The Scheduling Problem: Animated
READY QUEUE → SCHEDULER (picks next) → CPU (executing) → DONE

CPU Scheduling is the OS policy that decides
which process in the Ready Queue gets the CPU next — and for how long.

This single decision, made thousands of times per second, determines how fast your system feels, how fairly resources are shared, and whether any process gets starved.

Classification Axis — Preemptive vs Non-Preemptive
NON-PREEMPTIVE

Once a process starts running, it keeps the CPU until it voluntarily gives it up — either by finishing or by requesting I/O. The scheduler cannot interrupt it.

FCFS · SJF · Priority (non-preemptive)
PREEMPTIVE

The OS can forcibly remove a process from the CPU at any time — when a higher-priority process arrives, or when a time quantum expires.

SRTF · Priority (preemptive) · Round Robin
LONG-TERM SCHEDULER

Admits programs into the system. Controls degree of multiprogramming. Runs rarely.

SHORT-TERM SCHEDULER ★

The one we study. Runs every millisecond — picks the next process from the ready queue and dispatches it to the CPU.

MEDIUM-TERM SCHEDULER

Swaps processes out of memory when RAM is full. Reloads them later. Manages degree of multiprogramming dynamically.

Criteria vs Algorithm: A scheduling criterion is what you optimise for (minimise waiting time, maximise throughput). A scheduling algorithm is the mechanism that achieves a criterion. SJF isn't just "an algorithm" — it's the direct response to the criterion of minimising average waiting time.
Scheduling 02

Why is it needed?

Without scheduling, computers as you know them simply wouldn't work.

The "What If No Scheduling?" Scenarios
SCHEDULING GOALS
↑ CPU Utilization — keep CPU busy
↑ Throughput — more jobs per second
↓ Turnaround Time — total time per job
↓ Waiting Time — time in ready queue
↓ Response Time — first CPU access
Good scheduling keeps the CPU busy while processes wait for I/O. This is exactly the concept the burst sections build on.
THE CATCH

No single algorithm nails all goals at once. Each algorithm prioritises a different criterion. Click any goal on the left to understand it and see which algorithm targets it.

Scheduling 03

Key Metrics

Every algorithm is judged by these numbers. Know them cold.

ARRIVAL TIME (AT)

When the process first enters the ready queue.

BURST TIME (BT)

Total CPU time the process needs to finish.

COMPLETION TIME (CT)

The moment the process finishes execution.

TURNAROUND TIME (TAT)
TAT = CT − AT

Total time spent in the system, from arrival to finish.

WAITING TIME (WT)
WT = TAT − BT

Time spent waiting in the ready queue, not executing.

RESPONSE TIME (RT)
RT = First CPU start − AT

Time until the process gets the CPU for the first time.

Worked Example — 3 Processes (FCFS), WT is non-zero

Three processes arrive close together. Because P1 runs first, P2 and P3 are forced to wait — their waiting time is non-zero, caused by the process ahead of them.

GANTT CHART (FCFS):  | P1 | P2 | P3 |
t=0 → t=5 (P2 starts) → t=8 (P3 starts) → t=14 (done)
Process   AT   BT   CT   TAT = CT−AT    WT = TAT−BT          RT = start−AT
P1         0    5    5   5−0  = 5       5−5  = 0             0−0 = 0
P2         0    3    8   8−0  = 8       8−3  = 5  ← waited!  5−0 = 5
P3         0    6   14   14−0 = 14      14−6 = 8  ← waited!  8−0 = 8
Avg WT = 4.33 · Avg TAT = 9.0 · Throughput = 0.21 proc/unit

P2 and P3 arrived at t=0 but had to wait because P1 got the CPU first. WT is caused by other processes being ahead. Throughput = 3 ÷ 14 = 0.21 proc/unit.
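The worked example above can be recomputed mechanically. A short sketch that applies the three formulas to the same AT/BT values:

```python
# Same processes as the worked example: (name, arrival time, burst time).
procs = [("P1", 0, 5), ("P2", 0, 3), ("P3", 0, 6)]

t, rows = 0, []
for name, at, bt in procs:       # FCFS: run in arrival order
    start = max(t, at)           # wait for the CPU to free up
    ct = start + bt              # completion time
    tat = ct - at                # turnaround: TAT = CT - AT
    wt = tat - bt                # waiting:    WT  = TAT - BT
    rt = start - at              # response:   RT  = first start - AT
    rows.append((name, ct, tat, wt, rt))
    t = ct

avg_wt = sum(r[3] for r in rows) / len(rows)
avg_tat = sum(r[2] for r in rows) / len(rows)
print(round(avg_wt, 2), avg_tat, round(len(rows) / t, 2))  # → 4.33 9.0 0.21
```

The printed averages match the table: 4.33, 9.0, and 0.21 processes per time unit.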

Phase 2 Complete

Now that we know how scheduling works and what
metrics we use to measure it,

let's explore the algorithms used by real operating systems.

Continue to Algorithms
THINK ABOUT IT FIRST

Suppose four processes arrive:

Which one should the OS run first?

P1: AT = 0, BT = 5
P2: AT = 1, BT = 3
P3: AT = 2, BT = 8
P4: AT = 4, BT = 2
Different scheduling algorithms answer this differently. FCFS picks P1. SJF also starts with P1 (the only arrival at t=0) but picks P4 the moment the CPU next frees up. Priority picks whoever has the highest priority. Round Robin gives everyone a turn. Each choice leads to completely different wait times and responsiveness.
Phase 3 · Algorithm 01

First Come, First Served

FCFS — Non-Preemptive · Criterion: Arrival Order

Whoever arrives first, runs first. No interruptions. Processes are executed in the order they enter the ready queue. Pure FIFO.

FCFS Step-Through — with Live Ready Queue
INPUT
Proc AT BT
READY QUEUE (at this moment)
Press Step to begin
CPU NOW
GANTT CHART
Press Step to walk through the algorithm one process at a time. Watch the Ready Queue update.
⚠ Convoy Effect — Interactive Demo

Drag the slider to change P1's burst time. See how one long process blocks everything behind it.

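The slider's effect can be reproduced numerically. A minimal sketch with illustrative burst values, all jobs arriving at t=0:

```python
def fcfs_avg_wait(bursts):
    """Average waiting time when all jobs arrive at t=0 and run in order."""
    t, total_wait = 0, 0
    for bt in bursts:
        total_wait += t      # each job waits for everything ahead of it
        t += bt
    return total_wait / len(bursts)

print(round(fcfs_avg_wait([2, 3, 3]), 2))   # short P1 first  → 2.33
print(round(fcfs_avg_wait([20, 3, 3]), 2))  # long P1 first   → 14.33
```

Same two short jobs behind P1; stretching P1's burst from 2 to 20 multiplies the average wait by six. That is the convoy effect in one number.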
ADVANTAGES
  • Simplest algorithm to implement: just a FIFO queue.
  • No starvation: every process eventually gets the CPU.
  • Low overhead: no complex decision-making required.
DISADVANTAGES
  • Convoy Effect: one long process blocks all short ones behind it.
  • Poor average waiting time, especially with mixed burst lengths.
  • Not suitable for interactive or time-sensitive systems.
APPLICATIONS
  • Simple batch processing systems where order matters.
  • Print spoolers and job queues where fairness = arrival order.
  • Situations where all jobs have similar burst times.
Algorithm 02

Shortest Job First / SRTF

SJF (Non-Preemptive) · SRTF (Preemptive) · Criterion: Minimum Burst Time
SJF / NON-PREEMPTIVE

When CPU is free, pick the process with the smallest burst time. Once started, runs to completion.

Select min(BT) from ready queue at each dispatch
SRTF / PREEMPTIVE

If a new arrival has a smaller remaining burst time than the current process, preempt it immediately. The moment of preemption is highlighted in the log below.

Preempt if new_arrival.BT < current.remaining_BT
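The preemption rule above can be sketched as a tick-by-tick simulator. By re-selecting the minimum remaining burst every time unit, preemption falls out automatically; the process names and times below are illustrative:

```python
def srtf(procs):
    """procs: list of (name, arrival, burst). Returns completion times."""
    remaining = {name: bt for name, at, bt in procs}
    arrival = {name: at for name, at, bt in procs}
    done, t = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:             # nothing has arrived yet: CPU idles
            t += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])  # least remaining wins;
        remaining[cur] -= 1                           # re-chosen every tick,
        t += 1                                        # so preemption is implicit
        if remaining[cur] == 0:
            done[cur] = t
            del remaining[cur]
    return done

# P2 arrives at t=1 with a shorter burst and immediately preempts P1.
print(srtf([("P1", 0, 5), ("P2", 1, 3)]))  # → {'P2': 4, 'P1': 8}
```

P1 runs for one unit, is preempted at t=1, and only resumes after P2 finishes at t=4 — exactly the behaviour the step-through below highlights.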
SJF / SRTF — Step-Through with Ready Queue
INPUT
Proc AT BT
READY QUEUE
Press Auto to begin
CPU NOW
GANTT CHART
Toggle SRTF mode and click Auto to see how preemption changes the result. Preemption events will be highlighted.
Starvation — When Short Jobs Keep Arriving

P_long arrived at t=0 with BT=20. Short processes keep arriving. Watch P_long never get scheduled.

Starvation: P_long has been waiting while P2, P3, P4 all ran. If short jobs keep arriving, P_long may never run. Fix: Aging — gradually increase a waiting process's priority over time.
The Burst Prediction Problem: SJF/SRTF require knowing burst times in advance — but in reality, you can't know how long a process will run. Operating systems estimate burst time using exponential averaging of past behavior: τ(n+1) = α·t(n) + (1−α)·τ(n) where t(n) is the actual last burst and τ(n) is the estimate. This is why SJF is theoretically optimal but rarely used in pure form — real schedulers approximate it.
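That formula can be folded over a burst history directly. A sketch, with an assumed initial estimate τ₀ = 10 and α = 0.5:

```python
def predict_next(actual_bursts, tau0=10.0, alpha=0.5):
    """Exponential averaging: tau(n+1) = alpha*t(n) + (1-alpha)*tau(n)."""
    tau = tau0                       # initial guess, before any history
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 0.5, each older burst counts half as much as the next one,
# so the bad initial guess of 10 fades quickly toward the observed 4-6 range.
print(predict_next([6, 4, 6, 4]))  # → 5.0
```

α controls the memory: α = 1 trusts only the last burst, α = 0 never updates the guess.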
ADVANTAGES
  • SJF gives the minimum average waiting time of all non-preemptive algorithms.
  • SRTF is provably optimal: no algorithm achieves a lower average WT, assuming burst times are known.
  • Short processes are served quickly, improving throughput.
DISADVANTAGES
  • Starvation: long processes may never run if short ones keep arriving.
  • Burst time must be known in advance, which is not always possible.
  • SRTF incurs high context-switching overhead from frequent preemptions.
APPLICATIONS
  • Batch systems where job execution times are known upfront.
  • CPU scheduling in systems that predict burst time from history.
  • SRTF suits preemptive OS environments needing optimal throughput.
Algorithm 03

Priority Scheduling

Preemptive + Non-Preemptive variants · Criterion: Process Importance
THE IDEA

Each process has a priority number. The CPU always picks the highest-priority process. Convention: lower number = higher priority (P=1 beats P=4).

PREEMPTIVE VS NON-PREEMPTIVE

Non-preemptive: once running, finishes even if higher priority arrives.

Preemptive: a higher-priority arrival immediately kicks out the current process.

SJF is a special case of Priority Scheduling: priority = 1/burst_time, so the shortest job gets the highest priority.
Priority Scheduler — with Ready Queue
INPUT
Proc AT BT Prio
READY QUEUE (sorted by priority)
Press Auto to begin
CPU NOW
GANTT CHART
The ready queue is sorted by priority. Lower number = higher priority = runs first.
ADVANTAGES
  • Critical tasks are guaranteed to run before less important ones.
  • Flexible: priority can be assigned based on any criteria.
  • Widely used in real-time and OS kernel scheduling.
DISADVANTAGES
  • Starvation: low-priority processes may never run if high-priority ones keep arriving.
  • The fix is aging: gradually raise the priority of waiting processes over time.
  • Deciding the right priority values is non-trivial in practice.
APPLICATIONS
  • Real-time systems, e.g. medical devices, aircraft control.
  • OS kernel threads needing to preempt user-space processes.
  • Network packet scheduling: high-priority traffic first.
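The aging fix can be sketched as an effective-priority function. Remember the convention: lower number = higher priority, so aging *subtracts* from the number the longer a process waits. The aging rate and the process values below are illustrative, not taken from any real kernel:

```python
def pick_next(ready, now, aging_rate=0.1):
    """ready: list of (name, base_priority, arrival_time).
    Effective priority improves (number drops) with time spent waiting,
    so a long-waiting low-priority process eventually wins."""
    def effective(p):
        name, base, arrival = p
        return base - aging_rate * (now - arrival)
    return min(ready, key=effective)[0]   # min number = highest priority

# P_low (base 5) has waited since t=0; P_high (base 1) just arrived.
print(pick_next([("P_low", 5, 0), ("P_high", 1, 10)], now=10))    # → P_high
print(pick_next([("P_low", 5, 0), ("P_high", 1, 100)], now=100))  # → P_low
```

At t=10 the fresh high-priority process still wins, but by t=100 P_low's hundred units of waiting have aged it past any newcomer — starvation is bounded.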
Algorithm 04

Round Robin

Preemptive · Criterion: Fairness · Time Quantum = Q

Every process gets a fixed time slice (quantum Q). After Q units, it's preempted and sent to the back of the queue. Designed for fairness: no process waits forever.

Queue Rotation — How Round Robin Works
After P1's quantum expires, it moves to the back of the queue. P2 runs next. The queue rotates continuously.
The Quantum Dilemma: Q too large → behaves like FCFS. Q too small → too many context switches, overhead kills performance. The right Q is the key engineering decision. Try both extremes below.
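Both extremes of the dilemma can be measured on the same job set. A sketch assuming all jobs arrive at t=0 (the burst values are illustrative):

```python
from collections import deque

def round_robin(bursts, q):
    """All jobs arrive at t=0. Returns (preemptions, avg waiting time)."""
    burst = dict(enumerate(bursts))          # pid -> total burst time
    queue = deque(burst.items())             # (pid, remaining)
    finish, t, preemptions = {}, 0, 0
    while queue:
        pid, rem = queue.popleft()
        run = min(q, rem)                    # run for at most one quantum
        t += run
        if rem > run:
            queue.append((pid, rem - run))   # quantum expired: back of queue
            preemptions += 1
        else:
            finish[pid] = t
    waits = [finish[p] - burst[p] for p in burst]   # WT = CT - BT when AT = 0
    return preemptions, round(sum(waits) / len(waits), 2)

print(round_robin([5, 3, 6], q=1))  # → (11, 6.67)  tiny Q: many preemptions
print(round_robin([5, 3, 6], q=8))  # → (0, 4.33)   huge Q: degenerates to FCFS
```

With Q = 8 no quantum ever expires, so the schedule is exactly FCFS; with Q = 1 the average wait rises and eleven preemptions (each a context switch in a real system) are pure overhead. The right Q sits between the extremes.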
Interactive Round Robin: Adjust Quantum
Process Arrival Time Burst Time
GANTT CHART
Try quantum = 1 vs quantum = 8 on the same processes. Watch context switches and avg wait change.
ADVANTAGES
  • No starvation: every process is guaranteed CPU time within one cycle.
  • Best response time for interactive systems: users see quick feedback.
  • Fair by design: all processes are treated equally regardless of burst time.
DISADVANTAGES
  • Higher average waiting time than SJF for the same process set.
  • Too many context switches if the quantum is too small, causing significant overhead.
  • Performance depends heavily on choosing the right quantum value.
APPLICATIONS
  • Time-sharing operating systems: the foundation of modern multitasking.
  • Interactive desktop environments where responsiveness matters most.
  • A building block of multilevel feedback queue schedulers (e.g. Windows, classic UNIX).
Going Further

Which algorithm should you use?

There's no universal answer. It depends on what you're optimising for. Use this to figure it out.

Algorithm Selector — Pick your priority
WHAT MATTERS MOST TO YOU?
← Select a criterion to get a recommendation
WHAT WE COVERED
  • FCFS: simple, fair by arrival, suffers convoy effect
  • SJF / SRTF: optimal wait time, but causes starvation
  • Priority: critical tasks first, needs aging to prevent starvation
  • Round Robin: fairest, best for interactive, quantum-dependent
GO DEEPER

Real OS schedulers (Linux CFS, Windows Priority Scheduler) combine these ideas. Search:

"Linux Completely Fair Scheduler"

"Multilevel Feedback Queue"

→ Or run the same process set through all algorithms in the simulator below.

Final Stage

You're ready for the Simulator.

You've built up from the CPU to context switches, states, metrics, and all four algorithms. Now put it all together.

WHAT TO EXPECT IN THE SIMULATOR
  • Enter any number of processes with AT and BT
  • Select the algorithm and variant
  • Get a Gantt chart, per-process metrics, and averages
  • For RR: set your own time quantum
  • For Priority: assign priority values per process
QUICK REFERENCE
  • TAT = CT − AT
  • WT = TAT − BT
  • RT = First CPU start − AT
  • Throughput = Processes ÷ Total Time
  • SRTF: preempt if new.BT < curr.remaining
  • RR: context switch every Q units
Open the Simulator