Claude Haiku 4.5

Anthropic · Closed weights · mid tier · registry tag: 2026 fast
Modalities: text · vision · document · code · 7 aliases · 6 official receipts
Last verified · May 1, 2026
Visible coverage · 15.9%
Verified coverage · 15.9%
Benchmark fit · 49.2%
Benchmark spread · 76.6%
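
The coverage and fit metrics above are not defined on this page. Purely as an assumption, a minimal sketch of one plausible reading, where coverage is the share of the tracked benchmark surface backed by verified receipts:

```ts
// Pure assumption: "verified coverage" read as verified receipt rows divided
// by the tracked benchmark surface. The site's real definition and
// denominators are not published on this page.
function coveragePercent(verifiedRows: number, trackedSurface: number): number {
  if (trackedSurface <= 0) return 0;
  return (verifiedRows / trackedSurface) * 100;
}

// Hypothetical example: 10 verified rows over a 63-benchmark surface
// would land near the 15.9% figure shown above.
console.log(coveragePercent(10, 63).toFixed(1)); // "15.9"
```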
Build / data stamp

Read this before trusting a headline.

Data snapshot: May 1, 2026 · Registry verification: passed · 9 providers · 826 tracked models · Page refreshed: May 7, 2026

Model pages expose the current registry snapshot and page stamp so stale deployments are visible without reading the code.
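
A minimal sketch of the kind of staleness check this stamp enables, assuming hypothetical field names (the registry schema is not shown on this page):

```ts
// Hypothetical stamp shape; field names are assumptions, not the site's schema.
interface PageStamp {
  dataSnapshot: string;  // e.g. "2026-05-01"
  pageRefreshed: string; // e.g. "2026-05-07"
}

// Flag a deployment as stale when its data snapshot has drifted too far
// behind the current date. The threshold is illustrative.
const MAX_SNAPSHOT_AGE_DAYS = 14;

function isStaleDeployment(stamp: PageStamp, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - new Date(stamp.dataSnapshot).getTime();
  return ageMs / 86_400_000 > MAX_SNAPSHOT_AGE_DAYS;
}

console.log(isStaleDeployment({ dataSnapshot: "2026-05-01", pageRefreshed: "2026-05-07" }));
```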

Score passport by benchmark

Each row keeps the benchmark receipt, source family, raw metric, and percentile inside its exact comparable group.
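
As a sketch only, the fields each row carries could be modeled like this; every name is inferred from the page copy, not taken from the registry's actual schema:

```ts
// Hypothetical shape of one passport row; all field names are assumptions.
interface PassportRow {
  benchmark: string;                                   // e.g. "Intelligence Index"
  sourceFamily: "AA" | "AR" | "LB" | "TERMINAL-BENCH"; // source families seen below
  category: string;                                    // e.g. "Chat / text", "Coding"
  evalType: "Composite" | "Speed / cost" | "Human" | "Objective";
  receiptDate: string;                                 // receipt verification date
  rawValue: number | string;                           // raw metric: score, Elo, or "0.77s"
  percentileInGroup: number;                           // percentile inside the exact comparable group
}
```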

Thin verified coverage: verified receipts currently cover only a thin slice of this model's resolved evidence surface.
Intelligence Index
AA · Chat / text · Composite
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside comparable group: 69.4%
Raw benchmark value: 31
Time to first token
AA · Chat / text · Speed / cost
It measures how quickly the model begins streaming a response, which drives perceived latency in interactive use.
Percentile inside comparable group: 93.3%
Raw benchmark value: 0.77s
Text Arena
AR · Chat / text · Human
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside comparable group: 71.5%
Raw benchmark value: 1,388
Code Arena
AR · Coding · Human
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside comparable group: 31.7%
Raw benchmark value: 1,317
Vision Arena
AR · Vision understanding · Human
It is useful when the model must read charts, UI, screenshots, or visual scenes rather than text alone.
Percentile inside comparable group: 37.6%
Raw benchmark value: 1,127
WebDev Arena
AR · Coding · Human
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside comparable group: 31.7%
Raw benchmark value: 1,317
Document Arena
AR · Document understanding · Human
It matters when the job is reading PDFs, tables, forms, or mixed-layout documents rather than plain chat.
Percentile inside comparable group: 16.7%
Raw benchmark value: 1,424
Terminal-Bench 2.0
TERMINAL-BENCH · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside comparable group: 30.4%
Raw benchmark value: 29.8%
Reasoning
LB · Reasoning / math / science · Objective
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
Percentile inside comparable group: 32.3%
Raw benchmark value: 51%
Language
LB · Chat / text · Objective
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside comparable group: 29%
Raw benchmark value: 39.2%
Coding
LB · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside comparable group: 54.8%
Raw benchmark value: 51.4%
Coding generation
LB · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside comparable group: 48.4%
Raw benchmark value: 48.7%
Instruction following
LB · Chat / text · Objective
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside comparable group: 19.4%
Raw benchmark value: 68.8%
Coding completion
LB · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside comparable group: 64.5%
Raw benchmark value: 54%
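
Each row above reports a percentile inside its exact comparable group. The site's formula is not published; a minimal sketch under the common "share of the group at or below this value" definition:

```ts
// Assumed definition: share of the comparable group scoring at or below the
// model's value. For lower-is-better metrics such as time to first token,
// the comparison would flip.
function percentileInGroup(value: number, group: number[]): number {
  if (group.length === 0) return NaN;
  const atOrBelow = group.filter((v) => v <= value).length;
  return (atOrBelow / group.length) * 100;
}

// Hypothetical group of five arena Elo scores.
console.log(percentileInGroup(1317, [1290, 1317, 1350, 1401, 1440])); // 40
```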

Receipts and registry checks

official · Anthropic model overview · May 1, 2026
official · Artificial Analysis · May 1, 2026
official · Arena · May 1, 2026
official · Arena · May 1, 2026
official · Arena · May 1, 2026
official · Arena · May 1, 2026