UAB · Unbiased AI Bench · Glass box for model evals.
Every leaderboard, with receipts.
Gemini 3.1 Pro
Live · updated continuously
Models · /models/gemini-3-1-pro

Google · Closed weights · Frontier · Registry tag: 2026 flagship
text · code · vision · document · audio · search · 4 aliases · 2 official receipts
Last verified · May 1, 2026
Visible coverage · 29.5%
Verified coverage · 29.5%
Benchmark fit · 62.6%
Benchmark spread · 100%
Build / data stamp

Read this before trusting a headline.

Data snapshot May 1, 2026 · Registry verification passed · 9 providers · 826 tracked models · Page refreshed May 7, 2026

Model pages expose the current registry snapshot and page stamp so stale deployments are visible without reading the code.
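As a hedged sketch only: the stamp check described above can be reduced to comparing the two dates in the build stamp. The field names and the 14-day tolerance below are assumptions for illustration, not UAB's actual logic.

```python
from datetime import date

# Hypothetical staleness check for the build / data stamp above.
# SNAPSHOT and REFRESHED are taken from the page stamp; MAX_LAG_DAYS
# is an assumed tolerance, not a documented UAB threshold.
SNAPSHOT = date(2026, 5, 1)    # "Data snapshot"
REFRESHED = date(2026, 5, 7)   # "Page refreshed"
MAX_LAG_DAYS = 14

lag_days = (REFRESHED - SNAPSHOT).days
status = "fresh" if lag_days <= MAX_LAG_DAYS else "stale"
print(f"snapshot lag: {lag_days} days -> {status}")  # snapshot lag: 6 days -> fresh
```

A deployment whose snapshot lags its refresh by more than the tolerance would surface as "stale" without anyone reading the code.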

Score passport by benchmark

Each row keeps the benchmark receipt, source family, raw metric, and percentile inside its exact comparable group.
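A minimal sketch of the "percentile inside its comparable group" figure, assuming it is the share of the other models in the group that a score strictly beats (the site's exact convention may differ, e.g. in how ties are handled):

```python
def group_percentile(score: float, group_scores: list[float]) -> float:
    """Percentile of `score` inside its comparable group, assumed to be
    the share of the *other* entries it strictly beats (top -> 100%,
    bottom -> 0%). `group_scores` includes the model's own score."""
    others = len(group_scores) - 1
    if others == 0:
        return 100.0  # a group of one has nothing to compare against
    beaten = sum(1 for s in group_scores if s < score)
    return 100.0 * beaten / others

# Illustrative group only; these are not real registry scores.
scores = [42.0, 55.0, 61.0, 64.7]
print(group_percentile(64.7, scores))  # 100.0 (beats every other entry)
print(group_percentile(42.0, scores))  # 0.0 (beats none)
```

Under this convention a 100% row means top of its comparable group and 0% means bottom, matching the extremes in the passport below.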

Thin verified coverage · Verified results currently cover only a small share of this model's resolved evidence surface.
Search Arena
AR · Search / tool use · Human
It matters when the model must browse, call tools, and recover useful answers from external systems.
85.2% · Percentile inside its comparable group
1,218 · Raw benchmark value
TutorBench
SL · Reasoning / math / science · Rubric
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
20% · Percentile inside its comparable group
53% · Raw benchmark value
VTB
SL · Vision understanding · Rubric
It is useful when the model must read charts, UI, screenshots, or visual scenes rather than text alone.
90.9% · Percentile inside its comparable group
29% · Raw benchmark value
PRBench Legal
SL · Professional reasoning · Rubric
It measures applied legal reasoning on professional-domain tasks.
41.7% · Percentile inside its comparable group
44% · Raw benchmark value
HiL-Bench
SL · Coding · Rubric
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
40% · Percentile inside its comparable group
20.3% · Raw benchmark value
MASK
SL · Safety · Rubric
It checks whether a model stays honest instead of covertly optimizing against the user.
0% · Percentile inside its comparable group
42.4% · Raw benchmark value
MultiNRC
SL · Reasoning / math / science · Rubric
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
100% · Percentile inside its comparable group
64.7% · Raw benchmark value
Humanity's Last Exam
OFF · Reasoning / math / science · Objective
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
85.7% · Percentile inside its comparable group
44.4% · Raw benchmark value
MRCR v2
OFF · Long context · Objective
It checks whether long-context claims survive contact with retrieval, memory, or long-document tasks.
100% · Percentile inside its comparable group
84.9% · Raw benchmark value
Terminal-Bench 2.0
OFF · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
50% · Percentile inside its comparable group
68.5% · Raw benchmark value
SWE-Bench Verified
OFF · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
60% · Percentile inside its comparable group
80.6% · Raw benchmark value
BrowseComp
OFF · Search / tool use · Objective
It matters when the model must browse, call tools, and recover useful answers from external systems.
100% · Percentile inside its comparable group
85.9% · Raw benchmark value
MMMU-Pro
OFF · Vision understanding · Objective
It is useful when the model must read charts, UI, screenshots, or visual scenes rather than text alone.
40% · Percentile inside its comparable group
80.5% · Raw benchmark value
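The thin-coverage flag on this passport can be sketched as a simple share of resolved benchmarks. This assumes coverage is resolved rows over an applicable-benchmark set; the 44-benchmark denominator below is an illustration that happens to reproduce the page's 29.5%, not a number taken from the registry.

```python
# Hedged sketch of a coverage metric like "visible coverage": the share
# of applicable benchmarks that have a resolved score on this page.
# The denominator is assumed for illustration only.
resolved_rows = 13     # benchmark rows in the passport above
applicable = 44        # assumed applicable-benchmark count
coverage_pct = round(100.0 * resolved_rows / applicable, 1)
print(f"visible coverage: {coverage_pct}%")  # visible coverage: 29.5%
```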

Receipts and registry checks

Official · Google Gemini models docs · May 1, 2026 · source →
Official · Arena · May 1, 2026 · source →