Verified 2025-09-22 · sourced from OpenAI
GPT-4o mini Token Calculator & Cost Guide
Estimate your OpenAI GPT-4o mini API spend in dollars before you send a single request. Standard pricing is $0.15 per million input tokens and $0.60 per million output tokens, with a 128,000-token (128K) context window.
- Context window: 128,000 tokens
- Input price: $0.15 / 1M tokens
- Output price: $0.60 / 1M tokens
- Cached input: Not published
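The per-request math is simple enough to reproduce yourself. Below is a minimal Python sketch using the published rates above; the constant and function names are illustrative, not part of any OpenAI SDK.

```python
# Published GPT-4o mini rates (USD per 1M tokens), as listed above.
INPUT_PRICE_PER_M = 0.15
OUTPUT_PRICE_PER_M = 0.60


def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Standard (non-cached) cost of a single request, in USD."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000


# A 1,000-token request split roughly 70% input / 30% output:
print(f"${request_cost(700, 300):.6f}")  # $0.000285, i.e. about $0.0003
```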
Usage scenarios
Compare estimated costs across common workloads. Cached-input pricing is not published for GPT-4o mini, so only standard costs are shown; a short recomputation script follows the table.
Scenario | Description | Tokens in | Tokens out | Total tokens | Standard cost
---|---|---|---|---|---
Quick chat reply | Single user question with a short assistant answer | 650 | 220 | 870 | $0.0002
Coding assistant session | Multi-turn pair programming exchange (≈6 turns) | 2,600 | 1,400 | 4,000 | $0.0012
Knowledge base response | Retrieval-augmented answer referencing multiple passages | 12,000 | 3,000 | 15,000 | $0.0036
Near-max context run | Large document processing approaching the 128K token limit | 112,000 | 16,000 | 128,000 | $0.0264
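Each row is just the input and output token counts priced at the two published rates. The short Python check below recomputes the table; the figures are copied from the rows above, and nothing here calls the OpenAI API.

```python
# Recompute the scenario table from the published per-1M-token rates.
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens

scenarios = {
    "Quick chat reply": (650, 220),
    "Coding assistant session": (2_600, 1_400),
    "Knowledge base response": (12_000, 3_000),
    "Near-max context run": (112_000, 16_000),
}

for name, (tokens_in, tokens_out) in scenarios.items():
    cost = (tokens_in * INPUT_PRICE_PER_M + tokens_out * OUTPUT_PRICE_PER_M) / 1_000_000
    # Reproduces the "Standard cost" column above.
    print(f"{name}: {tokens_in + tokens_out:,} tokens -> ${cost:.4f}")
```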
Daily & monthly budgeting
Translate usage into predictable operating expenses across popular deployment sizes. Monthly figures assume 30 days of usage; a budgeting sketch for custom traffic follows the table.
Profile | Requests/day | Tokens/day | Daily cost | Monthly cost
---|---|---|---|---
Team pilot | 25 | 75,000 | $0.0225 | $0.675
Product launch | 100 | 500,000 | $0.142 | $4.27
Enterprise scale | 500 | 3,000,000 | $0.900 | $27.00
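To project a budget for your own traffic, price the daily token volume at the same rates and scale to a month. Here is a minimal sketch, assuming a 30-day month and an input/output split you choose yourself; the 70/30 default below is illustrative, not something OpenAI prescribes.

```python
# Rough daily/monthly budget projection for GPT-4o mini (standard pricing).
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens


def budget(tokens_per_day: int, input_share: float = 0.7, days: int = 30) -> tuple[float, float]:
    """Return (daily_cost, monthly_cost) in USD for a given daily token volume.

    `input_share` is the assumed fraction of tokens that are input;
    pick the ratio that matches your workload.
    """
    input_tokens = tokens_per_day * input_share
    output_tokens = tokens_per_day * (1 - input_share)
    daily = (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
    return daily, daily * days


daily, monthly = budget(500_000)  # compare with the "Product launch" row above
print(f"${daily:.3f}/day, ${monthly:.2f}/month")
```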
Pricing notes
- GPT-4o mini is OpenAI's cost-efficient multimodal tier, well suited to real-time chat and lightweight agents.
Frequently asked questions
How much does GPT-4o mini cost per 1,000 tokens?
At the published rates of $0.15 per million input tokens and $0.60 per million output tokens, a typical 1,000-token request (≈70% input, 30% output) works out to 700 × $0.15/1M + 300 × $0.60/1M ≈ $0.000285, or about $0.0003.
What is the context window for GPT-4o mini?
GPT-4o mini supports up to 128,000 tokens (128K), allowing large prompts and retrieval-augmented payloads in a single call.
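If you want to confirm a prompt fits the 128K window (and estimate its input cost) before sending it, you can count tokens locally with the `tiktoken` library. The sketch below assumes the `o200k_base` encoding used by the GPT-4o model family also applies to GPT-4o mini; the helper name is ours, not an OpenAI API.

```python
import tiktoken

# o200k_base is the encoding used by the GPT-4o model family
# (assumed here to apply to GPT-4o mini as well).
ENCODING = tiktoken.get_encoding("o200k_base")
CONTEXT_WINDOW = 128_000
INPUT_PRICE_PER_M = 0.15  # USD per 1M input tokens


def check_prompt(prompt: str) -> None:
    """Print the token count, whether it fits the 128K window, and the input cost."""
    n_tokens = len(ENCODING.encode(prompt))
    fits = n_tokens <= CONTEXT_WINDOW
    cost = n_tokens * INPUT_PRICE_PER_M / 1_000_000
    print(f"{n_tokens:,} tokens | fits 128K window: {fits} | input cost ≈ ${cost:.6f}")


check_prompt("Summarize the attached quarterly report in three bullet points.")
```

Note that chat-formatted requests add a small per-message overhead on top of the raw text tokens, so treat the local count as an estimate rather than an exact billing figure.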
How fresh is the GPT-4o mini pricing data?
Pricing is sourced from https://platform.openai.com/docs/pricing and was last verified on 2025-09-22. The calculator updates automatically when models.json is refreshed.