UAB · Unbiased AI Bench · Glass box for model evals.
Every leaderboard, with receipts.
Coding
Live · updated continuously
Benchmarks · /benchmarks/livebench-coding

LiveBench coding slice with objective scoring and recent questions.
Source · LiveBench
Version · livebench snapshot 2026-05-01
Scores · 32

Passport

Verified but aging. This is an objective signal, so it is mainly about measurable task performance rather than public taste.
Source · LiveBench
Metric · Score (%)
Judge · Objective
Direction · Higher is better
Group ID · livebench_coding_2026_03
Domain · Coding
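
The passport fields above can be read as a single structured record. A minimal sketch in Python, assuming a flat key/value schema (the field names here are illustrative, not Unbiased AI Bench's actual data model):

```python
# Hypothetical structured form of the passport above.
# Field names are assumptions, not the site's real schema.
passport = {
    "source": "LiveBench",
    "metric": "Score (%)",
    "judge": "Objective",
    "direction": "higher_better",  # higher scores rank better
    "group_id": "livebench_coding_2026_03",
    "domain": "Coding",
}

# The group_id is what scopes every comparison on this page:
# two scores are only comparable if their group_id matches.
def comparable(a: dict, b: dict) -> bool:
    return a["group_id"] == b["group_id"]
```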

What it measures vs what it misses

✓ Measures

Objective coding accuracy on recent tasks.

✗ Misses

Subjective style preference. Editing workflow ergonomics.

Why this counts

It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than in marketing examples.

Comparable-group rule

This percentile only compares models inside the exact benchmark/version group shown here. It is not a universal score.

What it misses

It does not fully capture repo-scale iteration, IDE ergonomics, or long debugging loops.
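
The comparable-group rule above can be sketched in a few lines: a model's percentile is computed only against the other scores in the same benchmark/version group. This is an illustrative sketch, not the site's actual ranking code, and the sample group is a subset of the leaderboard below:

```python
# Sketch of the comparable-group rule: percentile is the share of models
# in the SAME benchmark/version group scoring at or below the target.
# Illustrative only; not Unbiased AI Bench's implementation.

def group_percentile(scores: dict[str, float], model: str) -> float:
    """Percent of models in this group scoring at or below `model`."""
    target = scores[model]
    at_or_below = sum(1 for s in scores.values() if s <= target)
    return 100.0 * at_or_below / len(scores)

# Subset of the livebench_coding_2026_03 group shown on this page.
group = {
    "Gemini 2.5 Pro": 85.9,
    "GPT-4.5 Preview": 75.2,
    "Claude Sonnet 3.7": 71.2,
    "o3 mini": 70.3,
}
```

Comparing a score from this group against a score from any other benchmark or snapshot would be meaningless, which is why the page never presents the percentile as a universal number.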

Leaderboard · this benchmark version

#1 · Gemini 2.5 Pro
LB · Mar 25, 2025
85.9%
#2 · GPT-4.5 Preview
LB · Feb 27, 2025
75.2%
#3 · Claude Sonnet 3.7
LB · Feb 25, 2025
71.2%
#4 · o3 mini
LB · Feb 6, 2025
70.3%
#5 · DeepSeek Reasoner
LB · Feb 6, 2025
67.4%
#6 · Claude Sonnet 3.5
LB · Feb 6, 2025
64.0%
#7 · Gemini 2.0 Pro Experimental
LB · Feb 6, 2025
63.5%
#8 · o1
LB · Mar 4, 2025
59.9%
#9 · Gemini Experimental
LB · Feb 6, 2025
55.4%
#10 · Grok 3 mini
LB · Mar 14, 2025
54.2%
#11 · Gemini 2.0 Flash
LB · Feb 6, 2025
53.7%
#12 · GPT-4o
LB · Mar 27, 2025
53.5%
#13 · Grok 3
LB · Mar 14, 2025
52.7%
#14 · Claude Haiku 3.5
LB · Feb 6, 2025
51.4%
#15 · Claude Haiku 4.5
LB · Feb 6, 2025
51.4%
#16 · o1 Preview
LB · Feb 6, 2025
50.9%
#17 · o1 mini
LB · Feb 6, 2025
48.1%
#18 · DeepSeek Chat
LB · Dec 11, 2024
46.2%
#19 · GPT-4 Turbo
LB · Feb 6, 2025
46.0%
#20 · Gemini 2.0 Flash-Lite
LB · Feb 27, 2025
45.4%
#21 · Grok Beta
LB · Feb 6, 2025
45.2%
#22 · GPT-4o mini
LB · Dec 10, 2024
43.2%
#23 · Grok 2
LB · Feb 6, 2025
41.7%
#24 · Gemini 1.5 Pro
LB · Feb 6, 2025
41.2%
#25 · Gemini 1.5 Flash
LB · Feb 6, 2025
38.9%
#26 · Claude Opus 3
LB · Feb 6, 2025
38.6%
#27 · Grok 2 mini
LB · Feb 6, 2025
37.5%
#28 · GPT-4
LB · Dec 10, 2024
37.3%
#29 · Gemini 1.5 Flash 8B
LB · Feb 6, 2025
28.7%
#30 · GPT-3.5 Turbo
LB · Feb 6, 2025
27.2%
#31 · Claude Sonnet 3
LB · Feb 6, 2025
26.4%
#32 · Claude Haiku 3
LB · Feb 6, 2025
24.5%