Claude Sonnet 4.6
Anthropic · Closed weights · Premium · Registry tag: 2026 workhorse
Text · Code · Vision · Document · Search · 9 aliases · 10 official receipts
Build / data stamp
Read this before trusting a headline.
Data snapshot: May 1, 2026 · Registry verification passed · 9 providers · 826 tracked models · Page refreshed: May 7, 2026
Model pages expose the current registry snapshot and page stamp so stale deployments are visible without reading the code.
Score passport by benchmark
Each row keeps the benchmark receipt, source family, raw metric, and percentile inside its exact comparable group.
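The "percentile inside its comparable group" figure can be sketched as a simple rank computation. The group composition, tie handling (strictly-less counting), and the example scores below are all assumptions for illustration, not the registry's documented method:

```python
def group_percentile(score: float, peer_scores: list[float]) -> float:
    """Percentage of peer scores in the comparable group that this
    score exceeds. Strict 'less than' tie handling is an assumption."""
    below = sum(1 for s in peer_scores if s < score)
    return round(100 * below / len(peer_scores), 1)

# Hypothetical group of 11 peer models on a composite index;
# a raw score of 43 beats 10 of the 11 peers.
peers = [30, 31, 33, 35, 36, 37, 38, 39, 40, 41, 44]
print(group_percentile(43, peers))  # 90.9
```

A percentile computed this way is only meaningful relative to the exact peer set, which is why each row pins the comparison to its comparable group rather than the full 826-model registry.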
Thin verified coverage
This model currently shows thin verified coverage across the resolved evidence surface.
Intelligence Index
AA · Chat / text · Composite
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside its comparable group: 90.9%
Raw benchmark value: 43
Time to first token
AA · Chat / text · Speed / cost
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside its comparable group: 1.7%
Raw benchmark value: 105.91 s
Text Arena
AR · Chat / text · Human
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside its comparable group: 94.3%
Raw benchmark value: 1,448
Code Arena
AR · Coding · Human
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside its comparable group: 95%
Raw benchmark value: 1,527
Vision Arena
AR · Vision understanding · Human
It is useful when the model must read charts, UI, screenshots, or visual scenes rather than text alone.
Percentile inside its comparable group: 94.1%
Raw benchmark value: 1,272
WebDev Arena
AR · Coding · Human
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside its comparable group: 95%
Raw benchmark value: 1,527
Search Arena
AR · Search / tool use · Human
It matters when the model must browse, call tools, and recover useful answers from external systems.
Percentile inside its comparable group: 88.9%
Raw benchmark value: 1,221
Document Arena
AR · Document understanding · Human
It matters when the job is reading PDFs, tables, forms, or mixed-layout documents rather than plain chat.
Percentile inside its comparable group: 88.9%
Raw benchmark value: 1,500
VTB
SL · Vision understanding · Rubric
It is useful when the model must read charts, UI, screenshots, or visual scenes rather than text alone.
Percentile inside its comparable group: 0%
Raw benchmark value: 4.5%
Humanity's Last Exam
OFF · Reasoning / math / science · Objective
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
Percentile inside its comparable group: 14.3%
Raw benchmark value: 33.2%
Terminal-Bench 2.0
OFF · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside its comparable group: 16.7%
Raw benchmark value: 59.1%
SWE-Bench Verified
OFF · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside its comparable group: 40%
Raw benchmark value: 79.6%
BrowseComp
OFF · Search / tool use · Objective
It matters when the model must browse, call tools, and recover useful answers from external systems.
Percentile inside its comparable group: 16.7%
Raw benchmark value: 74%
MMMU-Pro
OFF · Vision understanding · Objective
It is useful when the model must read charts, UI, screenshots, or visual scenes rather than text alone.
Percentile inside its comparable group: 20%
Raw benchmark value: 74.5%
Multimodal mix
OC · Document understanding · Objective
It matters when the job is reading PDFs, tables, forms, or mixed-layout documents rather than plain chat.
Percentile inside its comparable group: 21.4%
Raw benchmark value: 72.7%
EnigmaEval
SL · Reasoning / math / science · Rubric
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
Percentile inside its comparable group: 13.3%
Raw benchmark value: 58%
VISTA
SL · Vision understanding · Rubric
It is useful when the model must read charts, UI, screenshots, or visual scenes rather than text alone.
Percentile inside its comparable group: 21.4%
Raw benchmark value: 75%
Debugging
BB · Coding · Rubric
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside its comparable group: 90%
Raw benchmark value: 86.6%
Security
BB · Coding · Rubric
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside its comparable group: 88.9%
Raw benchmark value: 85.3%
BS pushback
BB · Professional reasoning · Rubric
Resistance to confidently accepting bogus assumptions in expert-style prompts.
Percentile inside its comparable group: 75%
Raw benchmark value: 91.5%
Speed throughput
BB · Coding · Speed / cost
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside its comparable group: 30%
Raw benchmark value: 95.3 t/s
Speed TTFT
BB · Coding · Speed / cost
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside its comparable group: 50%
Raw benchmark value: 1207.00 ms