UAB · Unbiased AI Bench · Glass box for model evals.
Every leaderboard, with receipts.
Live · updated continuously
Model vs model

Llama 4 Scout vs alpaca-13b

A debate-ready pair page: current winner, counter-case, decisive benchmarks, and the caveat that should travel with the claim.
Use case · Everyday chatbot
Winner · Llama 4 Scout
Evidence mode · Combined public record

Llama 4 Scout leads this compare set for everyday chatbot.

Thin verified coverage: 0 shared benchmarks are still tie-heavy, so the win stays conditional. This compare uses the combined public record, with hybrid receipts labeled separately.
Left case: Llama 4 Scout wins 0 visible benchmarks · Chat / text
Right case: alpaca-13b wins 0 visible benchmarks · Chat / text
Traveling caveat: 0 shared benchmarks are still tie-heavy, so the win stays conditional. This compare uses the combined public record, with hybrid receipts labeled separately.
Debate surface: 0 shared benchmarks still read as tie-heavy.

Llama 4 Scout case

  • Chat / text

alpaca-13b case

  • Chat / text

What changes the outcome

  • Llama 4 Scout: 38 visible benchmark gaps still leave room for the result to move.
  • alpaca-13b: 39 visible benchmark gaps still leave room for the result to move.

Why this result is surprising

  • The visible shared surface is thinner, not more decisive, than usual for this compare set.
  • Very few shared benchmarks decisively separate these models.

Why this is not a clean win

  • 0 shared benchmarks are still tie-heavy, so the win stays conditional. This compare uses the combined public record, with hybrid receipts labeled separately.
  • alpaca-13b remains the nearest counter-case once you change preset, mode, or missing-coverage assumptions.
Share this artifact

Publish the claim after the evidence, not before it.

Keep the receipts page and card handy, and use the advanced framings only when you actually need them. Sharing stays available without taking over the page.

Compare artifact: Llama 4 Scout leads this compare set for everyday chatbot.

Runner-up: alpaca-13b · 0 shared benchmarks are still tie-heavy, so the win stays conditional. This compare uses the combined public record, with hybrid receipts labeled separately.

Public links

Open or copy the stable surfaces

The receipts page is the canonical evidence surface. The card image is the compact preview for embeds, screenshots, and social cards.

Open evidence page · Open card preview
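As a rough sketch of how a stable evidence surface like this can be addressed, the snippet below builds a receipts URL from two model slugs plus the `preset` and `mode` query parameters that the receipts link on this page carries. The route shape and function name are assumptions for illustration, not this site's actual implementation.

```python
# Hypothetical sketch: compose a stable receipts URL for a versus page.
# The /versus/<left>/<right> route shape is assumed from the link text
# visible on this page; it is not a documented API.
from urllib.parse import urlencode

def receipts_url(left_slug: str, right_slug: str, preset: str, mode: str) -> str:
    # urlencode handles escaping; plain hyphenated slugs pass through unchanged.
    query = urlencode({"preset": preset, "mode": mode})
    return f"/versus/{left_slug}/{right_slug}?{query}"

url = receipts_url("llama-4-scout", "alpaca-13b",
                   "everyday-chatbot", "best-for-this-use-case")
print(url)
# -> /versus/llama-4-scout/alpaca-13b?preset=everyday-chatbot&mode=best-for-this-use-case
```

Keeping the preset and mode in the query string is what lets the same claim be re-checked later under the exact filters it was made with.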
Copy-ready text

Use the exact public framing

Each copy action keeps the claim attached to receipts instead of forcing you into a blank composer.

Advanced framings and X composer: neutral, contrarian, open-model, and skeptical variants
Compare artifact

Pick the voice before you post

Use the framing variants only when you need them. The artifact page and the public copy actions above should handle most cases.

Neutral analyst · Lead with the claim, then attach the reason and caveat. · "Llama 4 Scout leads this compare set for everyday chatbot."
Contrarian · Push against the easy read and keep the counter-case live. · "Contrarian take: Llama 4 Scout leads this compare set for everyday chatbot."
Open-model angle · Bias the framing toward the open-weight or transparent-evidence angle. · "Open-model angle: Compare artifact · Llama 4 Scout vs alpaca-13b"
Don't trust the headline · Lead with the caveat before you let the claim travel. · "Don't trust the headline: Compare artifact · Llama 4 Scout vs alpaca-13b"
X composer

Compose a post that keeps the caveat attached

The post shell always exposes the headline, why, caveat, receipts link, and an optional reply-bait angle.

Headline: Llama 4 Scout leads this compare set for everyday chatbot.
Why: The visible evidence surface moved in a way that changes the headline.
Caveat: 0 shared benchmarks are still tie-heavy, so the win stays conditional. This compare uses the combined public record, with hybrid receipts labeled separately.
Receipts link: /versus/llama-4-scout/alpaca-13b?preset=everyday-chatbot&mode=best-for-this-use-case
Reply-bait angle: If you still back alpaca-13b, which benchmark or judge weighting should outrank this surface?
Preview · over 280 characters
Llama 4 Scout leads this compare set for everyday chatbot.
The visible evidence surface moved in a way that changes the headline.
Caveat: 0 shared benchmarks are still tie-heavy, so the win stays conditional. This compare uses the combined public record, with hybrid receipts labeled separately.
Receipts: /versus/llama-4-scout/alpaca-13b?preset=everyday-chatbot&mode=best-for-this-use-case
Reply bait: If you still back alpaca-13b, which benchmark or judge weighting should outrank this surface?
Open in X · Open card preview
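The composer's post shell above (headline, why, caveat, receipts link, reply bait) can be sketched as a simple assembler that also explains the "over 280" preview flag. Function and field names here are assumptions for illustration; the length check uses a naive character count, whereas X's real counter weights URLs and some characters differently.

```python
# Hypothetical sketch of the post shell the composer describes.
X_CHAR_LIMIT = 280  # X's nominal per-post limit

def build_post(headline, why, caveat, receipts_path, reply_bait):
    # Join the five fixed fields in the order the preview shows them,
    # keeping the caveat attached to the claim.
    lines = [
        headline,
        why,
        f"Caveat: {caveat}",
        f"Receipts: {receipts_path}",
        f"Reply bait: {reply_bait}",
    ]
    post = "\n".join(lines)
    # Naive len() count; real X counting treats links as 23 characters.
    return post, len(post) > X_CHAR_LIMIT

post, over_limit = build_post(
    "Llama 4 Scout leads this compare set for everyday chatbot.",
    "The visible evidence surface moved in a way that changes the headline.",
    "0 shared benchmarks are still tie-heavy, so the win stays conditional.",
    "/versus/llama-4-scout/alpaca-13b?preset=everyday-chatbot&mode=best-for-this-use-case",
    "If you still back alpaca-13b, which benchmark or judge weighting should outrank this surface?",
)
print("Over 280" if over_limit else f"{len(post)} characters")
```

With the fields shown on this page, the assembled post is well past the limit, which is why the preview reads "over 280" and the caveat or reply-bait line would need trimming before posting.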

Decisive benchmarks

0 of 40 benchmarks
No benchmarks match the current compare filters.