UAB · Unbiased AI Bench · Glass box for model evals.
Every leaderboard, with receipts.
GPT-OSS 120B
Live · updated continuously

OpenAI · Open weights · mid · registry tag 2026 open reasoning
text · code · 4 aliases · 3 official receipts
Last verified · May 1, 2026
Visible coverage · 0%
Verified coverage · 0%
Benchmark fit · n/a
Benchmark spread · n/a
Build / data stamp

Read this before trusting a headline.

Data snapshot · May 1, 2026
Registry verification · passed
9 providers · 826 tracked models
Page refreshed · May 7, 2026

Model pages expose the current registry snapshot and page stamp so stale deployments are visible without reading the code.
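The staleness check described above can be sketched as a date comparison between the two stamps on this page. This is a minimal illustration, not the site's actual logic; the `max_lag_days` threshold is an assumption.

```python
from datetime import date, timedelta

def is_stale(data_snapshot: date, page_refreshed: date, max_lag_days: int = 14) -> bool:
    """Flag a deployment as stale when the rendered page lags the
    registry snapshot by more than the allowed number of days.
    The 14-day default is an assumed threshold, not the site's."""
    return (page_refreshed - data_snapshot) > timedelta(days=max_lag_days)

# With the stamps shown on this page (snapshot May 1, refresh May 7):
print(is_stale(date(2026, 5, 1), date(2026, 5, 7)))  # → False, a 6-day lag is within bounds
```

Because both stamps are rendered in the page itself, a reader can run this check by eye without inspecting the deployment.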

Score passport by benchmark

Each row keeps the benchmark receipt, source family, raw metric, and percentile inside its exact comparable group.
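The page does not publish its exact percentile formula; a minimal sketch, assuming "percent of peers beaten" within the comparable group, with a flag for metrics where lower raw values are better:

```python
def percentile_in_group(raw: float, group_scores: list[float],
                        higher_is_better: bool = True) -> float:
    """Percentile of `raw` among peer models in the same comparable group,
    computed as the share of peers this model beats. The rank definition
    is an assumption, not the registry's documented method."""
    if not higher_is_better:
        # Negate so that a lower raw value (e.g. latency) ranks higher.
        raw, group_scores = -raw, [-s for s in group_scores]
    beaten = sum(1 for s in group_scores if s < raw)
    return 100.0 * beaten / len(group_scores)

# Example with hypothetical peer scores: a raw value of 24 beating 2 of 4 peers.
print(percentile_in_group(24, [10, 20, 24, 30]))  # → 50.0
```

A speed metric like time to first token would pass `higher_is_better=False`, since a 0.90 s latency should outrank slower peers.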

Thin verified coverage · Verified receipts currently cover only a small share of the resolved evidence surface for this model.
Intelligence Index
AA · Chat / text · Composite
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
54.9% · percentile inside its comparable group
24 · Raw benchmark value
Time to first token
AA · Chat / text · Speed / cost
It tests how quickly the model starts streaming a response, which drives perceived latency in interactive use.
87.9% · percentile inside its comparable group
0.90 s · Raw benchmark value
Text Arena
AR · Chat / text · Human
It tests whether human raters prefer the model's answers in blind head-to-head conversational matchups.
65.2% · percentile inside its comparable group
1,365 · Raw benchmark value
PRBench Legal
SL · Professional reasoning · Rubric
It tests applied legal reasoning on professional-domain tasks.
8.3% · percentile inside its comparable group
40.2% · Raw benchmark value
MASK
SL · Safety · Rubric
It tests whether the model stays honest instead of covertly optimizing against the user.
92.3% · percentile inside its comparable group
92% · Raw benchmark value
Terminal-Bench 2.0
TERMINAL-BENCH · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
13% · percentile inside its comparable group
18.7% · Raw benchmark value

Receipts and registry checks

official · Artificial Analysis · May 1, 2026 · source →
official · Artificial Analysis · May 1, 2026 · source →
official · Arena · May 1, 2026 · source →