DeepSeek Reasoner
DeepSeek · Open weights · Budget tier · Registry tag: 2026 open reasoning
Text · Code · 6 aliases · 4 official receipts
Build / data stamp
Read this before trusting a headline.
Data snapshot: May 1, 2026 · Registry verification passed · 9 providers · 826 tracked models · Page refreshed: May 7, 2026
Model pages expose the current registry snapshot and page stamp so stale deployments are visible without reading the code.
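A minimal sketch of how such a staleness check could work, assuming the two dates shown in the stamp above; the 14-day limit and the field names are hypothetical, not taken from the registry:

```python
from datetime import date, timedelta

# Dates mirror the page stamp above; the staleness limit is an assumed policy.
SNAPSHOT = date(2026, 5, 1)
PAGE_REFRESH = date(2026, 5, 7)
MAX_STALENESS = timedelta(days=14)

def is_stale(snapshot: date, refreshed: date, limit: timedelta = MAX_STALENESS) -> bool:
    """A deployment reads as stale when the page refresh has drifted
    further past the data snapshot than the allowed limit."""
    return (refreshed - snapshot) > limit

print(is_stale(SNAPSHOT, PAGE_REFRESH))  # 6-day drift, within the assumed 14-day limit
```

With the dates on this page the drift is 6 days, so the check passes; a page refreshed long after its snapshot would flag as stale without anyone reading the code.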
Score passport by benchmark
Each row keeps the benchmark receipt, source family, raw metric, and percentile inside its exact comparable group.
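A sketch of one common way to compute "percentile inside its comparable group"; the registry's exact convention (strict vs. weak inequality, interpolation) is not specified on this page, so this is an assumption:

```python
def percentile_in_group(value: float, group: list[float]) -> float:
    """Percent of scores in the comparable group that fall strictly below `value`.

    This is one common percentile convention; the registry's actual
    formula is an assumption here.
    """
    below = sum(1 for score in group if score < value)
    return round(100 * below / len(group), 1)

# Example: a score of 3 sits above 2 of the 4 group members.
print(percentile_in_group(3, [1, 2, 3, 4]))  # 50.0
```

Under this convention a 0% percentile means no tracked comparable model scored lower, which is how the 0% rows below should be read.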
Thin verified coverage: verified benchmark evidence for this model is currently sparse across the resolved evidence surface.
Intelligence Index
AA · Chat / text · Composite
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside comparable group: 44.1%
Raw benchmark value: 19
Text Arena
AR · Chat / text · Human
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside comparable group: 68%
Raw benchmark value: 1,373
PRBench Legal
SL · Professional reasoning · Rubric
Applied legal reasoning on professional-domain tasks.
Percentile inside comparable group: 0%
Raw benchmark value: 36.6%
MASK
SL · Safety · Rubric
Whether a model stays honest instead of covertly optimizing against the user.
Percentile inside comparable group: 30.8%
Raw benchmark value: 53%
MultiNRC
SL · Reasoning / math / science · Rubric
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
Percentile inside comparable group: 0%
Raw benchmark value: 27.6%
EnigmaEval
SL · Reasoning / math / science · Rubric
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
Percentile inside comparable group: 93.3%
Raw benchmark value: 68%
Reasoning
LB · Reasoning / math / science · Objective
It is one of the cleaner reads on deliberate reasoning strength rather than style or popularity.
Percentile inside comparable group: 71%
Raw benchmark value: 59.2%
Language
LB · Chat / text · Objective
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside comparable group: 77.4%
Raw benchmark value: 52.1%
Coding
LB · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside comparable group: 87.1%
Raw benchmark value: 67.4%
Coding generation
LB · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside comparable group: 96.8%
Raw benchmark value: 80.8%
Coding completion
LB · Coding · Objective
It tells you whether the model can generate, repair, and reason over code under evaluator pressure rather than marketing examples.
Percentile inside comparable group: 64.5%
Raw benchmark value: 54%
Instruction following
LB · Chat / text · Objective
It tests whether the model is actually useful in normal conversational turns, not just on narrow correctness tasks.
Percentile inside comparable group: 71%
Raw benchmark value: 80.6%