Last verified 2025-09-22 for both models
GPT-4.1 vs o4-mini — Pricing & Capability Comparison
GPT-4.1 charges $2.00 per million input tokens and $8.00 per million output tokens; o4-mini charges $1.10 and $4.40 respectively. Context windows are 128K tokens for GPT-4.1 and 200K tokens for o4-mini.
Metric | GPT-4.1 | o4-mini | Verdict |
---|---|---|---|
Input price (per 1M tokens) | $2.00 | $1.10 | o4-mini leads |
Output price (per 1M tokens) | $8.00 | $4.40 | o4-mini leads |
Context window | 128,000 tokens | 200,000 tokens | o4-mini leads |
Cached input (per 1M tokens) | Not published | Not published | No published data |
Cost comparison for 10K-token workloads
Side-by-side pricing for identical workloads (10,000 total tokens per request) across different input/output distributions. Because neither model publishes a cached-input rate, the cached scenario below is priced at the standard input rate.
Scenario | GPT-4.1 | o4-mini |
---|---|---|
Balanced conversation (50% input · 50% output) | $0.0500 | $0.0275 |
Input-heavy workflow (80% input · 20% output) | $0.0320 | $0.0176 |
Generation heavy (30% input · 70% output) | $0.0620 | $0.0341 |
Cached system prompt (90% cached input · 10% fresh output) | $0.0260 | $0.0143 |
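To re-run these numbers yourself, the minimal sketch below computes per-request cost from a token split and the per-million rates listed above. The names `PRICES` and `request_cost` are illustrative, not part of any published API, and the cached scenario is priced at the standard input rate, matching the table.

```python
# Per-million-token rates (USD) from the comparison above.
PRICES = {
    "gpt-4.1": {"input": 2.00, "output": 8.00},
    "o4-mini": {"input": 1.10, "output": 4.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request: (tokens / 1,000,000) * per-million rate."""
    rates = PRICES[model]
    return (input_tokens / 1_000_000) * rates["input"] + \
           (output_tokens / 1_000_000) * rates["output"]

# Reproduce the table rows for a 10,000-token request.
scenarios = {
    "Balanced conversation (50/50)": (5_000, 5_000),
    "Input-heavy workflow (80/20)": (8_000, 2_000),
    "Generation heavy (30/70)": (3_000, 7_000),
    "Cached system prompt (90/10, standard input rate)": (9_000, 1_000),
}

for name, (tok_in, tok_out) in scenarios.items():
    row = ", ".join(f"{m}: ${request_cost(m, tok_in, tok_out):.4f}" for m in PRICES)
    print(f"{name} -> {row}")
```

Running the loop prints, for example, `$0.0500` for GPT-4.1 and `$0.0275` for o4-mini on the balanced scenario, matching the first table row.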
Frequently asked questions
Which model is cheaper per million input tokens?
o4-mini costs $1.10 per million input tokens versus $2.00 for GPT-4.1.
How do output prices compare?
o4-mini charges $4.40 per million output tokens, while GPT-4.1 costs $8.00 per million.
Which model supports a larger context window?
o4-mini offers a 200,000-token (200K) context window, versus 128K for GPT-4.1.