OpenAI vs DeepSeek Pricing
Per-million-token pricing for OpenAI and DeepSeek, with side-by-side flagship models, cheapest tiers, and context windows. Pricing data syncs weekly from the open-source litellm catalog — last updated May 4, 2026.
Who wins on what
| Category | Winner | Detail |
|---|---|---|
| Cheapest input tokens | DeepSeek ($0.02/1M) | DeepSeek R1 0528 Qwen3 8B, $0.02/1M input |
| Cheapest output tokens | DeepSeek ($0.10/1M) | DeepSeek R1 0528 Qwen3 8B, $0.10/1M output |
| Longest context window | OpenAI (2.0M) | gpt-5.4 (>272K context length), 2.0M input tokens |
| Lowest average output cost | DeepSeek ($0.83/1M) | Provider-wide average across 19 models |
| Largest model catalog | OpenAI (153 models) | More options to match cost vs. capability |
Side-by-side
OpenAI
| Metric | Value | Note |
|---|---|---|
| Cheapest input | $0.030/1M | gpt-oss-20b |
| Cheapest output | $0.140/1M | gpt-oss-20b |
| Longest context | 2.0M | gpt-5.4 (>272K context length) |
| Avg output / 1M | $23.87 | Across catalog |
| Model | In/1M | Out/1M | Ctx |
|---|---|---|---|
| o1-pro | $150.00 | $600.00 | 200K |
| gpt-5.4-pro (>272K context length) | $60.00 | $270.00 | 2.0M |
| gpt-5.5-pro (>272K context length) | $60.00 | $270.00 | 2.0M |
| gpt-5.4-pro (<272K context length) | $30.00 | $180.00 | 272K |
| gpt-5.5-pro (<272K context length) | $30.00 | $180.00 | 272K |
| gpt-oss-20b | $0.030 | $0.140 | 131K |
DeepSeek
| Metric | Value | Note |
|---|---|---|
| Cheapest input | $0.020/1M | DeepSeek R1 0528 Qwen3 8B |
| Cheapest output | $0.100/1M | DeepSeek R1 0528 Qwen3 8B |
| Longest context | 164K | DeepSeek Prover V2 |
| Avg output / 1M | $0.828 | Across catalog |
| Model | In/1M | Out/1M | Ctx |
|---|---|---|---|
| DeepSeek R1 | $0.550 | $2.19 | 66K |
| DeepSeek Prover V2 | $0.500 | $2.18 | 164K |
| R1 0528 | $0.400 | $1.75 | 164K |
| DeepSeek V3 | $0.300 | $1.20 | 164K |
| R1 | $0.300 | $1.20 | 164K |
| DeepSeek R1 0528 Qwen3 8B | $0.020 | $0.100 | 33K |
All prices in USD per 1 million tokens. Showing top 6 models per provider, sorted by output cost.
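Per-1M rates convert to per-request dollars by simple proportion. A minimal sketch of that arithmetic, using each provider's cheapest-tier rates from the tables above (the 10K-input / 2K-output token counts are illustrative assumptions, not data from this page):

```python
def request_cost(in_tokens: int, out_tokens: int,
                 in_per_m: float, out_per_m: float) -> float:
    """USD cost of one request at the given $/1M-token rates."""
    return in_tokens / 1e6 * in_per_m + out_tokens / 1e6 * out_per_m

# 10K prompt tokens + 2K completion tokens on each provider's cheapest tier:
deepseek = request_cost(10_000, 2_000, 0.020, 0.100)  # DeepSeek R1 0528 Qwen3 8B
openai = request_cost(10_000, 2_000, 0.030, 0.140)    # gpt-oss-20b
```

On this mix the request costs $0.0004 on DeepSeek's cheapest tier versus $0.00058 on OpenAI's, i.e. OpenAI is about 45% more expensive, consistent with the headline rate gap.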
Run the numbers for your workload
Calcaas multiplies per-token costs by your real usage patterns (inputs, outputs, retries, and conversation history) across both providers in a single cost model.
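The idea behind that kind of workload calculation can be sketched in a few lines. This is a back-of-envelope model, not Calcaas's actual implementation: each turn re-sends the accumulated conversation history as input context, and a retry rate inflates the expected number of attempts per turn (all parameter values below are illustrative assumptions):

```python
def workload_cost(turns: int, avg_in: int, avg_out: int,
                  retry_rate: float, in_per_m: float, out_per_m: float) -> float:
    """Expected USD cost of one multi-turn conversation.

    Each turn re-sends the accumulated history as input tokens;
    retry_rate is the expected fraction of extra attempts per turn.
    """
    total_in = total_out = 0.0
    history = 0
    for _ in range(turns):
        attempts = 1.0 + retry_rate           # expected attempts, retries included
        total_in += (history + avg_in) * attempts
        total_out += avg_out * attempts
        history += avg_in + avg_out           # history grows every exchange
    return total_in / 1e6 * in_per_m + total_out / 1e6 * out_per_m

# A 10-turn chat, 1K in / 500 out per turn, 5% retry rate,
# priced at the R1 0528 vs. gpt-oss-20b rates from the tables:
deepseek = workload_cost(10, 1_000, 500, 0.05, 0.400, 1.75)
openai = workload_cost(10, 1_000, 500, 0.05, 0.030, 0.140)
```

Because history is re-billed as input on every turn, long conversations are dominated by input-token cost, which is why the input rate matters more than headline output pricing for chat-heavy workloads.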