UAB · Unbiased AI Bench
Glass box for model evals.
Every leaderboard, with receipts.
Compare
Live · updated continuously
Compare · decision workspace

Compare finalists, not just two rows at a time.

This compare surface is built for shortlist decisions: shared evidence, obvious gaps, tie-heavy areas, and the current leader under a visible decision mode.
Selected models · 3
Shared benchmarks · 2
Current leader · Claude Opus 4.7
Workspace controls

Keep the decision frame explicit.

Mode and preset stay visible up top. Shortlist edits, saves, follows, and links stay available without rerunning the page.

Current frame · Best for this use case · Everyday chatbot · Combined public record
Decision mode
Preset

Keep the same finalists, change the weighting, and recompute the decision read without resetting the workspace.

Evidence mode
Shortlist models

Keep the compare set tight. Add one finalist at a time, then prune aggressively.

Workspace utilities

Save, follow, or copy this exact setup only after the shortlist looks right.

Save or share this workspace · follow it, copy it, or reopen it later
Decision read

Claude Opus 4.7 leads this compare set for everyday chatbot.

Read this page as a workspace for choosing between finalists, not as a universal crown. The current frame is Everyday chatbot under best for this use case.

Visible tradeoffs · The current evidence supports a shortlist, not a single winner.
Leader under this mode · Claude Opus 4.7
Closest counter-case · Gemini 3.1 Pro
Shared benchmark surface · 2 shared benchmarks · 0 tie-heavy
Evidence mode · Combined public record
Why the leader is ahead
Coding · Vision understanding
Where the evidence is thin
0 tie-heavy benchmarks and 22 missing leader rows keep this from reading as a settled public answer.
What to do next
Inspect decisive benchmarks first, then open the disagreement page or pairwise debates if the top line still feels too narrow.
2 of 40 benchmarks
HiL-Bench
SL · %
Code · Coding
27.7% · 80%
20.3% · 40%
29.1% · 100%
60% spread
Search Arena
AR · rating
Search · Search / tool use
1,233 · 92.6%
1,218 · 85.2%
1,235 · 96.3%
11.1% spread
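The spread badge under each benchmark appears to be the gap between the best and worst value in the right-hand percentile column (60% for HiL-Bench, 11.1% for Search Arena). A minimal sketch under that assumption (`spread` is a hypothetical name, not part of the site):

```python
def spread(percentiles: list[float]) -> float:
    """Spread = best visible percentile minus worst visible percentile."""
    return round(max(percentiles) - min(percentiles), 1)

hil_bench = spread([80.0, 40.0, 100.0])    # HiL-Bench percentile column -> 60.0
search_arena = spread([92.6, 85.2, 96.3])  # Search Arena percentile column -> 11.1
```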
Decisive benchmarks

What is doing the visible work

HiL-Bench
SL · spread 60.0 · %
Search Arena
AR · spread 11.1 · rating
Decision pressure

What changes the winner

Claude Opus 4.7
22 visible benchmark gaps still leave room for the result to move.
Gemini 3.1 Pro
33 visible benchmark gaps still leave room for the result to move.
GPT-5.5
25 visible benchmark gaps still leave room for the result to move.
Coverage gaps

Where evidence is missing

Claude Opus 4.7
Missing visible evidence on this compare surface
22
Gemini 3.1 Pro
Missing visible evidence on this compare surface
33
GPT-5.5
Missing visible evidence on this compare surface
25
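The per-model gap counts are consistent with a simple subtraction against the 40-benchmark surface mentioned above ("2 of 40 benchmarks"): a finalist with 18 visible rows shows 22 gaps. A sketch under that assumption (function name hypothetical):

```python
TOTAL_BENCHMARKS = 40  # the full visible surface, per "2 of 40 benchmarks"

def coverage_gaps(visible_rows: int, total: int = TOTAL_BENCHMARKS) -> int:
    """Benchmarks on this surface where a finalist has no visible row."""
    return total - visible_rows

# 18 visible rows -> 22 gaps, matching the Claude Opus 4.7 count above
```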
Preset interpretation

How this weighting reads the field

The current decision mode is grounded in the Everyday chatbot preset. This keeps the compare page connected to a visible use case instead of an unspoken “overall winner” claim.

score = 1.35 × chat text + 1.00 × reasoning math science + 0.85 × long context
coverage floor = 60% · recency window = 120 days
Claude Opus 4.7
Anthropic
97.6%
Gemini 3.1 Pro
Google
90.2%
GPT-5.5
OpenAI
81.9%
Go deeper

Open the disagreement surface that answers your next question

Share or publish this compare result · links, copy, and advanced framings
Share this artifact

Publish the claim after the evidence, not before it.

Keep the receipts page and card handy, then use the advanced framings only when you actually need them. Share stays available without becoming the page.

Compare artifact · Claude Opus 4.7 leads this compare set for everyday chatbot.

Runner-up: Gemini 3.1 Pro · 0 shared benchmarks are still tie-heavy, so the win stays conditional. This compare uses the combined public record, with hybrid receipts labeled separately.

Public links

Open or copy the stable surfaces

The receipts page is the canonical evidence surface. The card image is the compact preview for embeds, screenshots, and social cards.

Open evidence page · Open card preview
Copy-ready text

Use the exact public framing

Each copy action keeps the claim attached to receipts instead of forcing you into a blank composer.

Advanced framings and X composer · Neutral, contrarian, open-model, and skeptical variants
Compare artifact

Pick the voice before you post

Use the framing variants only when you need them. The artifact page and the public copy actions above should handle most cases.

Neutral analyst · Lead with the claim, then attach the reason and caveat.
Claude Opus 4.7 leads this compare set for everyday chatbot.
Contrarian · Push against the easy read and keep the counter-case live.
Contrarian take: Claude Opus 4.7 leads this compare set for everyday chatbot.
Open-model angle · Bias the framing toward the open-weight or transparent-evidence angle.
Open-model angle: Compare artifact · Claude Opus 4.7 vs Gemini 3.1 Pro vs GPT-5.5
Don't trust the headline · Lead with the caveat before you let the claim travel.
Don't trust the headline: Compare artifact · Claude Opus 4.7 vs Gemini 3.1 Pro vs GPT-5.5
X composer

Compose a post that keeps the caveat attached

The post shell always exposes the headline, why, caveat, receipts link, and an optional reply-bait angle.

Headline · Claude Opus 4.7 leads this compare set for everyday chatbot.
Why · SL · spread 60.0 · %
Caveat · 0 shared benchmarks are still tie-heavy, so the win stays conditional. This compare uses the combined public record, with hybrid receipts labeled separately.
Receipts link · /artifacts/compare?models=claude-opus-4-7,gemini-3-1-pro,gpt-5-5&mode=best-for-this-use-case&preset=everyday-chatbot
Reply-bait angle · If you still back Gemini 3.1 Pro, which benchmark or judge weighting should outrank this surface?
Preview · Over 280
Claude Opus 4.7 leads this compare set for everyday chatbot.
SL · spread 60.0 · %
Caveat: 0 shared benchmarks are still tie-heavy, so the win stays conditional. This compare uses the combined public record, with hybrid receipts labeled separately.
Receipts: /artifacts/compare?models=claude-opus-4-7,gemini-3-1-pro,gpt-5-5&mode=best-for-this-use-case&preset=everyday-chatbot
Reply bait: If you still back Gemini 3.1 Pro, which benchmark or judge weighting should outrank this surface?
Open in X · Open card preview
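The receipts link above is just three query parameters on a stable route, and the "Over 280" badge follows from the composed post's length. A sketch under those assumptions (`receipts_link` is a hypothetical helper; X's real counter shortens links to 23 characters, so this is only a rough over-limit check):

```python
def receipts_link(models: list[str], mode: str, preset: str) -> str:
    """Rebuild the stable compare URL from its three query parameters."""
    return (f"/artifacts/compare?models={','.join(models)}"
            f"&mode={mode}&preset={preset}")

link = receipts_link(["claude-opus-4-7", "gemini-3-1-pro", "gpt-5-5"],
                     "best-for-this-use-case", "everyday-chatbot")

# Even without the reply-bait line, the composed post exceeds 280 characters.
post = "\n".join([
    "Claude Opus 4.7 leads this compare set for everyday chatbot.",
    "SL · spread 60.0 · %",
    "Caveat: 0 shared benchmarks are still tie-heavy, so the win stays "
    "conditional. This compare uses the combined public record, with hybrid "
    "receipts labeled separately.",
    f"Receipts: {link}",
])
over_limit = len(post) > 280
```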
Reading guide

How to read this workspace

Who wins most often · Benchmarks with one clear percentile leader.
Missing evidence · Benchmarks where a finalist is absent from the visible surface.
Decision read · A shortlist claim tied to an explicit preset and mode.