Claude Sonnet 4.6 vs GPT-5.2
The leading mid-tier models from Anthropic and OpenAI. Here's how they stack up on price, quality, and capabilities.
Claude Sonnet 4.6 offers a much larger context window (1M vs 400K tokens), making it the better fit for document-heavy workflows. GPT-5.2 is cheaper on both input ($1.75 vs $3.00 per 1M tokens) and output ($14.00 vs $15.00 per 1M tokens), and matches Claude on benchmark quality. For most use cases both are excellent; your choice may come down to ecosystem preference or context needs.
| | Claude Sonnet 4.6 (Anthropic) | GPT-5.2 (OpenAI) |
|---|---|---|
| Input Price | $3.00/1M | $1.75/1M |
| Output Price | $15.00/1M | $14.00/1M |
| Blended Price (1:1 input/output) | $9.00/1M | $7.88/1M |
| LMSYS Elo | 1385 | 1390 |
| Context Window | 1,000,000 tokens | 400,000 tokens |
| Reasoning | Strong | Strong |
| Code Generation | Excellent | Excellent |
| Provider | Anthropic | OpenAI |
Pricing breakdown
Claude Sonnet 4.6 is priced at $3.00 per million input tokens and $15.00 per million output tokens, giving a blended rate of $9.00/1M assuming equal input and output volume. GPT-5.2 comes in lower on input at $1.75/1M and charges $14.00/1M for output, yielding a blended cost of $7.88/1M under the same assumption. For input-heavy workloads (RAG, document processing), GPT-5.2 offers meaningful savings. For output-heavy tasks like long-form generation, the gap narrows considerably.
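To see how the pricing plays out for a concrete workload, here is a minimal sketch using the per-token prices quoted above. The workload figures (500M input tokens, 50M output tokens per month, roughly an input-heavy RAG pattern) are illustrative assumptions, not measurements:

```python
# USD per 1M tokens: (input, output), taken from the comparison table above.
PRICES = {
    "Claude Sonnet 4.6": (3.00, 15.00),
    "GPT-5.2": (1.75, 14.00),
}

def monthly_cost(model, input_tokens, output_tokens):
    """Total monthly cost in USD for a given token volume."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical input-heavy workload: 500M input / 50M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 500_000_000, 50_000_000):,.2f}/month")
# Claude Sonnet 4.6: $2,250.00/month; GPT-5.2: $1,575.00/month
```

At this input-skewed ratio, GPT-5.2 comes out about 30% cheaper; as the mix shifts toward output, the two converge.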
Quality & benchmarks
On the LMSYS Chatbot Arena leaderboard, Claude Sonnet 4.6 scores 1385 Elo while GPT-5.2 reaches 1390 — a difference well within the margin of statistical noise. Both models excel at reasoning, instruction following, and code generation. Claude Sonnet is particularly noted for careful, nuanced writing, while GPT-5.2 has a slight edge in structured data tasks and function calling.
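To make "within the margin of statistical noise" concrete, the standard Elo model translates a rating gap into an expected head-to-head win probability. A 5-point gap is nearly a coin flip:

```python
def elo_win_prob(rating_a, rating_b):
    """Expected probability that A beats B under the standard Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# GPT-5.2 (1390) vs Claude Sonnet 4.6 (1385): about a 50.7% expected win rate.
print(round(elo_win_prob(1390, 1385), 4))
```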
Context window comparison
Claude Sonnet 4.6 supports a 1,000,000-token context window, 2.5x the size of GPT-5.2's 400,000-token limit. This makes Claude the clear choice for workflows involving large codebases, legal documents, or multi-document analysis where the full context must be available in a single call. GPT-5.2's 400K window is still generous for most applications.
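A quick way to gauge whether a corpus fits in a single call is a character-based token estimate. The 4-characters-per-token heuristic and the 8K output reserve below are rough assumptions (actual tokenizer counts vary by model and content), but they show where the two windows diverge:

```python
# Rough heuristic: ~4 characters per token for English text and code.
# This is an approximation; real tokenizer counts vary by model and content.
CHARS_PER_TOKEN = 4

def fits_in_context(total_chars, context_window, reserve_for_output=8_000):
    """Estimate whether a corpus fits in one call, leaving room for the reply."""
    est_tokens = total_chars / CHARS_PER_TOKEN
    return est_tokens <= context_window - reserve_for_output

# A hypothetical 2 MB codebase (~500K estimated tokens):
print(fits_in_context(2_000_000, 1_000_000))  # 1M window (Claude Sonnet 4.6)
print(fits_in_context(2_000_000, 400_000))    # 400K window (GPT-5.2)
```

Under these assumptions the 2 MB codebase fits in the 1M window but not in the 400K one, which would force chunking or retrieval on the smaller model.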