Model Evaluation: Mistral Small 3 (24B) · May 10, 2026

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them — across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates Mistral Small 3 (24B) against the May 10, 2026 cohort of the ORCFLO Index: 32 models from Anthropic, Google, OpenAI, and Mistral.


Mistral Small 3 (24B)

The Bottom Line

Mistral Small 3 (24B) ranks #26 of 32 on overall quality (79.89), placing it in the lower-middle of the field, well behind leaders like GPT 5 (95.04) and Gemini 3 Pro (93.17). What distinguishes it is economics: at $0.0001 per case it is the cheapest model in the entire cohort (#1 of 32), and it is the 7th fastest at 4.2 seconds. The takeaway: a budget workhorse for high-volume, low-stakes tasks where unit cost dominates the decision.
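To make the unit economics concrete, the sketch below projects batch costs from the per-case price reported above. The call volumes are illustrative assumptions, and the roughly 50× o3-mini multiple is taken from the Key Findings below, so treat the numbers as order-of-magnitude estimates rather than quoted pricing.

```python
# Illustrative cost projection from the per-case figures in this report.
# The call volumes are hypothetical; the ~50x o3-mini multiple comes from the
# Key Findings below and is approximate.
MISTRAL_SMALL_PER_CASE = 0.0001                   # USD per case, as reported
O3_MINI_PER_CASE = MISTRAL_SMALL_PER_CASE * 50    # roughly 50x more expensive

for calls in (10_000, 1_000_000, 10_000_000):
    print(f"{calls:>10,} calls: "
          f"Mistral Small 3 ~ ${calls * MISTRAL_SMALL_PER_CASE:,.2f}, "
          f"o3-mini ~ ${calls * O3_MINI_PER_CASE:,.2f}")
```

At a million calls the gap is roughly $100 versus $5,000, which is the scale of difference that makes per-call cost the binding constraint in high-volume workloads.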

  • Quality: 79.9 · #26 of 32 · Trailing (-6.0 vs median, -15.1 from #1)
  • Cost: $0.0001 per case (0.0× median) · #1 of 32 · Budget (cheapest in field)
  • Speed: 4.2s per case (0.4× median) · #7 of 32 · Fast (1.7× the fastest in field)

Key Findings

  • Cheapest model in the cohort at $0.0001 per case (#1 of 32), roughly 50x cheaper than o3-mini and an order of magnitude below most quality peers.
  • Quality lands at #26 of 32 (79.89), trailing all OpenAI, Anthropic, and Google flagship models by 10–15 points.
  • Summarization is the relative bright spot at 89.2 (#17), while Analysis lags at 73.7 (#26) and Hallucination control is weak at 77.5 (#29).
  • Speed is competitive at 4.2s (#7 of 32), faster than most peers in its quality band.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

Figure 1. Quality vs. cost across all 32 models. Upper-left quadrant = highest value. Mistral Small 3 (24B) highlighted. P50 median lines shown on both axes.

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.

Figure 2. Quality vs. response time across all 32 models. Upper-left quadrant = best performance. Mistral Small 3 (24B) highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks — not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged across each category and ranked independently across all 32 models.

Mistral Small 3 (24B) Performance by Category
Each entry lists the category, what it measures, the score, the rank (of 32), and the tier.

Abilities · Core language tasks: what the model can produce when given a well-formed prompt.
  • Analysis (reasoning, strategic judgment, disqualifying-factor detection): 73.7 · #26 · Trailing
  • Extraction (field accuracy, null handling, format compliance, zero fabrication): 83.1 · #18 · Contender
  • Summarization (compression quality, key-point retention, length compliance): 89.2 · #17 · Contender
  • Writing (tone, structure, persuasion, audience adaptation): 83.0 · #18 · Contender

Behaviors · How the model acts under pressure: reliability, compliance, and restraint.
  • Hallucination (fabrication detection, factual grounding, source fidelity): 77.5 · #29 · Trailing
  • Instruction Following (constraint adherence, format compliance, multi-part directives): 82.2 · #27 · Trailing
  • Refusal Calibration (appropriate refusal vs. over-refusal on legitimate requests): 65.8 · #24 · Contender

Stability · Repeatability and predictability across identical inputs.
  • Output Consistency (run-to-run reproducibility, format stability, score variance): 84.6 · #24 · Contender

Strengths and Cautions

Strengths

  • Unbeatable unit economics — the #1 ranked cost ($0.0001/case) makes it viable for workloads where millions of calls would make pricier models prohibitive.
  • Solid summarization performance at 89.2 (#17 of 32), its strongest category and competitive with several mid-tier models.
  • Fast response times at 4.2 seconds (#7 of 32) support latency-sensitive throughput scenarios.

Cautions

  • Hallucination control ranks #29 of 32 at 77.5, a material risk for any factual or compliance-sensitive workflow.
  • Instruction Following at #27 (82.2) and Analysis at #26 (73.7) indicate the model struggles with structured, multi-step, or reasoning-heavy tasks.
  • Output Consistency at #24 (84.6) means repeat-run stability is below average, complicating use in automated pipelines that require predictability.

Head-to-Head: Frontier Models

Mistral Small 3 (24B) is Mistral’s budget value pick in the ORCFLO Index. The table below compares it against the models from other major providers that sit closest to it in quality. Tier assignments use quartiles (25% bands) across the full 32-model field.

Frontier Model Comparison
Model | Quality Avg | Quality Rank | Cost Rank | Speed Rank
Gemini 2.0 Flash | 83.1 | #23 | #5 | #4
Gemini 2.5 Flash-Lite | 80.9 | #24 | #4 | #5
GPT 4.1 Mini | 80.4 | #25 | #9 | #9
Mistral Small 3 (24B) | 79.9 | #26 | #1 | #7
Gemini 2.0 Flash-Lite | 78.0 | #27 | #3 | #2
o3-mini | 77.4 | #28 | #27 | #20
GPT 4o | 75.4 | #29 | #16 | #6

When to Use Mistral Small 3 (24B)

  • Best pick: High-volume document summarization where per-call cost is the binding constraint.
  • Best pick: Internal-tool prototyping and exploratory batch jobs where budget caps matter more than peak quality.
  • Consider: Lightweight extraction tasks (83.1, #18), acceptable when outputs are human-reviewed downstream.
  • Avoid: Factual research, regulated content, or any workflow where hallucination risk (#29) has business consequences.
  • Avoid: Complex analytical reasoning or multi-step instruction chains, where its #26 Analysis and #27 Instruction Following scores will surface as failures.

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each contestant is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions — quality, cost, and speed — using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — composites hide the tradeoffs that drive model-selection decisions.
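As a minimal sketch of the scoring mechanics described above (not the Index's actual code), the example below averages four judge scores per model and then ranks quality, cost, and speed independently. The model names and figures are hypothetical placeholders.

```python
# Minimal sketch of the scoring mechanics described above: average four judge
# scores per model, then rank quality, cost, and speed as separate dimensions.
# Model names and numbers are hypothetical placeholders, not Index data.
from statistics import mean

models = {
    "model_a": {"judges": [92.0, 91.5, 93.0, 90.5], "cost_usd": 0.0120, "seconds": 18.4},
    "model_b": {"judges": [80.0, 79.5, 81.0, 79.0], "cost_usd": 0.0001, "seconds": 4.2},
}

quality = {name: mean(data["judges"]) for name, data in models.items()}

quality_rank = sorted(models, key=lambda m: quality[m], reverse=True)  # higher is better
cost_rank    = sorted(models, key=lambda m: models[m]["cost_usd"])     # lower is better
speed_rank   = sorted(models, key=lambda m: models[m]["seconds"])      # faster is better

print(quality_rank, cost_rank, speed_rank)  # three separate rankings, no composite
```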
Tier Definitions
  • Leader: Quality ≥ 90.8 (Ranks 1–8)
  • Strong: Quality ≥ 85.9 (Ranks 9–16)
  • Contender: Quality ≥ 80.9 (Ranks 17–24)
  • Trailing: Quality < 80.9 (Ranks 25–32)
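As an illustration of how the rank-based quartile bands above map a quality rank to a tier in a 32-model field, the sketch below is an assumption about the mechanics, not the Index's implementation.

```python
# Illustrative mapping from quality rank to tier using the quartile bands above
# (32-model field, 8 ranks per tier). Not the Index's actual implementation.
def tier_for_rank(rank: int, field_size: int = 32) -> str:
    band = field_size // 4                # 8 ranks per tier for a 32-model field
    if rank <= band:
        return "Leader"
    if rank <= 2 * band:
        return "Strong"
    if rank <= 3 * band:
        return "Contender"
    return "Trailing"

print(tier_for_rank(26))  # Mistral Small 3 (24B) at #26 of 32 -> "Trailing"
```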