Last verified: 2025-09-22 (both models)

GPT-4o mini vs o4-mini — Pricing & Capability Comparison

GPT-4o mini charges $0.15 per million input tokens and $0.60 per million output tokens; o4-mini comes in at $1.10 and $4.40, roughly 7.3× more on both sides. GPT-4o mini offers a 128K-token context window, while o4-mini offers 200K.

| Metric | GPT-4o mini | o4-mini | Advantage |
| --- | --- | --- | --- |
| Input price (per 1M tokens) | $0.15 | $1.10 | GPT-4o mini |
| Output price (per 1M tokens) | $0.60 | $4.40 | GPT-4o mini |
| Context window | 128,000 tokens | 200,000 tokens | o4-mini |
| Cached input (per 1M tokens) | Not published | Not published | No published data |

Cost comparison for 10K-token workloads

Side-by-side pricing for identical workloads (10,000 total tokens per request) across different input/output splits; the sketch after the table reproduces these figures.

| Scenario | Token split | GPT-4o mini | o4-mini |
| --- | --- | --- | --- |
| Balanced conversation | 50% input · 50% output | $0.0037 | $0.0275 |
| Input-heavy workflow | 80% input · 20% output | $0.0024 | $0.0176 |
| Generation-heavy workflow | 30% input · 70% output | $0.0046 | $0.0341 |
| Cached system prompt | 90% cached input · 10% fresh output | $0.0019 | $0.0143 |

Because neither model publishes a cached-input rate, the cached-prompt scenario bills cached tokens at the standard input price.
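The arithmetic behind the table is just the per-million rates scaled by each model's token split. Below is a minimal sketch in Python, assuming the published rates above and billing cached input at the standard input rate; the scenario labels and token splits mirror the table, and the printed figures match it up to rounding.

```python
# Sketch: per-request cost for a 10,000-token workload split between input and output.
# Rates are the published USD prices per 1M tokens quoted above; cached input is
# assumed to be billed at the standard input rate, since no cached price is published.

PRICES = {  # (input $/1M tokens, output $/1M tokens)
    "GPT-4o mini": (0.15, 0.60),
    "o4-mini": (1.10, 4.40),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request with the given token counts."""
    in_rate, out_rate = PRICES[model]
    return input_tokens * in_rate / 1_000_000 + output_tokens * out_rate / 1_000_000

SCENARIOS = {  # (input tokens, output tokens) out of 10,000 total per request
    "Balanced conversation": (5_000, 5_000),
    "Input-heavy workflow": (8_000, 2_000),
    "Generation-heavy workflow": (3_000, 7_000),
    "Cached system prompt": (9_000, 1_000),  # cached input treated as normal input
}

for scenario, (inp, out) in SCENARIOS.items():
    costs = {m: request_cost(m, inp, out) for m in PRICES}
    print(f"{scenario:26s}  GPT-4o mini ${costs['GPT-4o mini']:.4f}  o4-mini ${costs['o4-mini']:.4f}")
```

If a cached-input rate is published later, only the pricing of the 9,000 cached tokens in the last scenario needs to change.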

Frequently asked questions

Which model is cheaper per million input tokens?

GPT-4o mini costs $0.15 per million input tokens versus $1.10 for o4-mini.

How do output prices compare?

GPT-4o mini charges $0.60 per million output tokens, while o4-mini costs $4.40 per million.

Which model supports a larger context window?

o4-mini supports a 200,000-token (200K) context window, versus 128K for GPT-4o mini.
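To check in advance whether a prompt fits either window, token counts can be estimated locally. Here is a minimal sketch using the tiktoken package with its o200k_base encoding; treating that encoding as an approximation for both models is an assumption, and the reserved_output_tokens budget is illustrative.

```python
# Sketch: estimate whether a prompt fits a model's context window before sending it.
# Assumes the o200k_base encoding approximates tokenization for both models;
# the limits below are the published context windows quoted above.
import tiktoken

CONTEXT_WINDOWS = {"GPT-4o mini": 128_000, "o4-mini": 200_000}

def fits(prompt: str, model: str, reserved_output_tokens: int = 1_000) -> bool:
    """True if the prompt plus a reserved output budget fits the model's window."""
    enc = tiktoken.get_encoding("o200k_base")
    prompt_tokens = len(enc.encode(prompt))
    return prompt_tokens + reserved_output_tokens <= CONTEXT_WINDOWS[model]

print(fits("Summarize the attached report.", "GPT-4o mini"))  # True for short prompts
```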

Related resources