Claude Opus 3
Anthropic · Closed weights · Frontier · Registry tag: 2024 historical flagship
text · code · vision · document · 5 aliases · 3 official receipts
Build / data stamp
Read this before trusting a headline.
Data snapshot May 1, 2026 · Registry verification passed · 9 providers · 826 tracked models · Page refreshed May 7, 2026
Model pages expose the current registry snapshot and page stamp so stale deployments are visible without reading the code.
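As an illustration of how a deployment could consume these stamps, here is a minimal TypeScript sketch of a staleness check. The interface, field names, and the 30-day threshold are assumptions for illustration; the registry does not publish a schema on this page.

```typescript
// Hypothetical shape of the page stamp shown above; field names are
// illustrative, not the registry's actual schema.
interface PageStamp {
  dataSnapshot: string;  // e.g. "2026-05-01"
  pageRefreshed: string; // e.g. "2026-05-07"
}

/** Flag a page whose underlying data snapshot is older than maxAgeDays. */
function isStale(stamp: PageStamp, now: Date, maxAgeDays = 30): boolean {
  const snapshot = new Date(stamp.dataSnapshot);
  const ageDays = (now.getTime() - snapshot.getTime()) / 86_400_000; // ms per day
  return ageDays > maxAgeDays;
}

// Usage with the stamps on this page:
const stamp: PageStamp = { dataSnapshot: "2026-05-01", pageRefreshed: "2026-05-07" };
console.log(isStale(stamp, new Date("2026-06-15"))); // true: snapshot is over 30 days old
```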
Score passport by benchmark
Each row keeps the benchmark receipt, source family, raw metric, and percentile inside its exact comparable group.
Thin verified coverage: verified benchmark evidence for this model is currently sparse across the resolved evidence surface.
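For readers who want the mechanics behind the rows below, here is a minimal TypeScript sketch of what one score-passport row might hold and how a within-group percentile can be computed. The field names and the percentile definition (share of group members at or below the raw value) are assumptions for illustration, not the registry's actual schema or method.

```typescript
// Hypothetical shape of one score-passport row; illustrative only.
interface ScoreRow {
  benchmark: string;         // e.g. "Reasoning"
  sourceFamily: string;      // e.g. "LB", "AR", "AA"
  rawValue: number;          // raw metric: accuracy %, rating, or index
  comparableGroup: number[]; // raw values of every model in the same group
}

/** One common percentile definition: the share of comparable-group
 *  members whose raw value is at or below this model's raw value. */
function percentileInGroup(row: ScoreRow): number {
  const atOrBelow = row.comparableGroup.filter((v) => v <= row.rawValue).length;
  return (atOrBelow / row.comparableGroup.length) * 100;
}

// Usage with made-up group values (not real registry data):
const row: ScoreRow = {
  benchmark: "Reasoning",
  sourceFamily: "LB",
  rawValue: 57.3,
  comparableGroup: [40.1, 48.9, 57.3, 61.0, 72.5],
};
console.log(percentileInGroup(row).toFixed(1)); // "60.0"
```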
Intelligence Index
AA · Chat / text · Composite
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
42.1% · percentile inside its comparable group
18 · raw benchmark value
Text Arena
AR · Chat / text · Human
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
39.2% · percentile inside its comparable group
1,262 · raw benchmark value
Vision Arena
AR · Vision understanding · Human
It is useful when the model must read charts, UI, screenshots, or visual scenes rather than text alone.
22.8% · percentile inside its comparable group
1,063 · raw benchmark value
Reasoning
LB · Reasoning / math / science · Objective
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
67.7% · percentile inside its comparable group
57.3% · raw benchmark value
Language
LB · Chat / text · Objective
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
74.2% · percentile inside its comparable group
51.8% · raw benchmark value
Coding
LB · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
19.4% · percentile inside its comparable group
38.6% · raw benchmark value
Coding generation
LB · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
16.1% · percentile inside its comparable group
37.2% · raw benchmark value
Instruction following
LB · Chat / text · Objective
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
12.9% · percentile inside its comparable group
65.6% · raw benchmark value
Coding completion
LB · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
22.6% · percentile inside its comparable group
40% · raw benchmark value