ORCFLO Index
Model Evaluation: o3-mini · May 10, 2026

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them: across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates o3-mini within the ORCFLO Index's May 10, 2026 cohort: 32 models in total from Anthropic, Google, OpenAI, and Mistral.

Provider: OpenAI

o3-mini

The Bottom Line

o3-mini lands at #28 of 32 in overall quality (77.4), placing it in the trailing tier well behind frontier options like GPT 5 (95.0), Gemini 3 Pro (93.2), and Claude Opus 4.6 (92.8). Compounding the quality gap, it ranks #27 on cost ($0.0154/case) and #20 on speed (12.1s), making it both slower and more expensive than nearly every model that outscores it. The takeaway: o3-mini is hard to justify in 2026 procurement when cheaper, faster, higher-quality peers are abundant.

Dimension   Result                           Rank        Tier       vs. Field
Quality     77.4                             #28 of 32   Trailing   -8.5 vs. median; -17.6 from #1
Cost        $0.0154 per case (2.3× median)   #27 of 32   Highest    179× the cost of the cheapest model
Speed       12.1s per case (1.2× median)     #20 of 32   Moderate   5.0× the time of the fastest model

Key Findings

  • Trailing-tier overall quality at #28 of 32 (77.4), lagging frontier leaders by 15+ points and underperforming even mid-tier peers like GPT 4.1 Mini (80.4, #25) at a fraction of the cost.
  • Cost-quality mismatch is severe: at $0.0154 per case (#27), o3-mini is roughly 150× more expensive than Mistral Small 3 ($0.0001) and roughly 75× more expensive than GPT 4.1 Nano ($0.0002), both of which deliver comparable or better quality.
  • Writing (#30) and Analysis (#28) are the weakest abilities, scoring 69.9 and 66.8 respectively — placing it near the bottom of the field on the two categories most business users prioritize.
  • Summarization (88.6, #18) and Extraction (82.9, #20) are the only contender-tier showings, suggesting narrow viability for structured text-processing tasks.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

Figure 1. Quality vs. cost across all 32 models. Upper-left quadrant = highest value. o3-mini highlighted. P50 median lines shown on both axes.
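For readers who want to reproduce this kind of value plot from their own measurements, the sketch below draws quality against per-case cost with P50 median lines on both axes. The data points and the cost figure for GPT 4.1 Mini are illustrative placeholders, not ORCFLO's published numbers.

    # Sketch: quality-vs-cost value plot with median reference lines.
    # All data values below are illustrative placeholders.
    from statistics import median
    import matplotlib.pyplot as plt

    models = {  # model: (cost per case in USD, quality score)
        "o3-mini": (0.0154, 77.4),
        "Mistral Small 3": (0.0001, 79.9),
        "GPT 4.1 Mini": (0.0020, 80.4),  # hypothetical cost per case
    }

    costs = [c for c, _ in models.values()]
    quals = [q for _, q in models.values()]

    fig, ax = plt.subplots()
    ax.scatter(costs, quals)
    for name, (c, q) in models.items():
        ax.annotate(name, (c, q))

    # P50 median reference lines on both axes
    ax.axvline(median(costs), linestyle="--")
    ax.axhline(median(quals), linestyle="--")

    ax.set_xscale("log")  # per-case costs span several orders of magnitude
    ax.set_xlabel("Cost per case (USD)")
    ax.set_ylabel("Quality score")
    ax.set_title("Quality vs. cost (upper-left = best value)")
    plt.show()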

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.

Figure 2. Quality vs. response time across all 32 models. Upper-left quadrant = best performance. o3-mini highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks — not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged across each category and ranked independently across all 32 models.

o3-mini Performance by Category
Abilities
Core language tasks: what the model can produce when given a well-formed prompt.
  • Analysis (reasoning, strategic judgment, disqualifying-factor detection): 66.8 · #28 · Trailing
  • Extraction (field accuracy, null handling, format compliance, zero fabrication): 82.9 · #20 · Contender
  • Summarization (compression quality, key-point retention, length compliance): 88.6 · #18 · Contender
  • Writing (tone, structure, persuasion, audience adaptation): 69.9 · #30 · Trailing

Behaviors
How the model acts under pressure: reliability, compliance, and restraint.
  • Hallucination (fabrication detection, factual grounding, source fidelity): 79.0 · #28 · Trailing
  • Instruction Following (constraint adherence, format compliance, multi-part directives): 86.2 · #24 · Contender
  • Refusal Calibration (appropriate refusal vs. over-refusal on legitimate requests): 65.8 · #25 · Trailing

Stability
Repeatability and predictability across identical inputs.
  • Output Consistency (run-to-run reproducibility, format stability, score variance): 80.0 · #28 · Trailing

Strengths and Cautions

Strengths

  • Summarization at 88.6 (#18 of 32) is the model's strongest category, landing in the contender tier and usable for document condensation workflows.
  • Instruction Following at 86.2 (#24) remains serviceable, indicating the model generally adheres to prompt structure even when output quality lags.
  • Extraction at 82.9 (#20) supports basic structured data pulls from text, making it a defensible — if not preferred — choice for routine parsing.

Cautions

  • Writing rank #30 of 32 (69.9) disqualifies o3-mini for any customer-facing copy, long-form drafting, or executive communication tasks.
  • Analysis rank #28 (66.8) and Hallucination rank #28 (79.0) together signal weak reasoning combined with elevated factual risk — a poor combination for decision-support workloads.
  • Cost ($0.0154, #27) and latency (12.1s, #20) place o3-mini in the expensive-and-slow quadrant; peers at similar quality (Mistral Small 3, Gemini 2.0 Flash-Lite) run 50-150× cheaper and 3× faster.

Head-to-Head: Nearest Peers

o3-mini sits in the ORCFLO Index's trailing tier. The table below compares it against its nearest-ranked peers in the field, rather than the frontier leaders it trails by 15+ points. Tier assignments use rank quartiles across the full 32-model field (eight models per tier).

Peer Model Comparison
Model                    Quality Avg   Quality Rank   Cost Rank   Speed Rank
GPT 4.1 Mini             80.4          #25            #9          #9
Mistral Small 3 (24B)    79.9          #26            #1          #7
Gemini 2.0 Flash-Lite    78.0          #27            #3          #2
o3-mini                  77.4          #28            #27         #20
GPT 4o                   75.4          #29            #16         #6
GPT 4o Mini              72.8          #30            #6          #8
GPT 4.1 Nano             71.3          #31            #2          #3

When to Use o3-mini

  • Best pick: internal summarization of meeting notes or reports, where its #18 Summarization rank (88.6) is adequate and latency is not critical.
  • Best pick: structured extraction tasks (entities, fields, tables), leveraging the #20 Extraction score of 82.9.
  • Consider: organizations already standardized on OpenAI tooling that need a reasoning-tier fallback, though GPT 5 Mini (#10, 89.6) is a stronger choice in the same family.
  • Avoid: writing-intensive workloads (marketing copy, drafting, narrative reports), given the #30 rank in Writing (69.9).
  • Avoid: cost-sensitive high-volume deployments; Mistral Small 3 and Gemini 2.0 Flash-Lite deliver comparable or better quality at 1-2% of the price.

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each model is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions — quality, cost, and speed — using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — composites hide the tradeoffs that drive model-selection decisions.
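A minimal sketch of this scheme, assuming a per-model record of judge scores, per-case cost, and per-case latency (the field names and all values here are illustrative, not ORCFLO's internal schema):

    # Sketch: average the four judge scores, then rank each dimension independently.
    from statistics import mean

    results = [
        {"model": "o3-mini", "judge_scores": [76.0, 78.5, 77.9, 77.2],
         "cost": 0.0154, "latency_s": 12.1},
        {"model": "Mistral Small 3", "judge_scores": [79.1, 80.2, 80.5, 79.8],
         "cost": 0.0001, "latency_s": 4.0},
    ]

    for r in results:
        r["quality"] = mean(r["judge_scores"])  # final score = judge average

    # Three independent rankings -- no composite score is computed.
    dims = {"quality": lambda r: -r["quality"],  # higher is better
            "cost": lambda r: r["cost"],         # lower is better
            "speed": lambda r: r["latency_s"]}   # faster is better
    for dim, key in dims.items():
        for rank, r in enumerate(sorted(results, key=key), start=1):
            r[f"{dim}_rank"] = rank

    for r in results:
        print(r["model"], r["quality_rank"], r["cost_rank"], r["speed_rank"])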
Tier Definitions
  • Leader: quality ≥ 90.8 (ranks 1–8)
  • Strong: quality ≥ 85.9 (ranks 9–16)
  • Contender: quality ≥ 80.9 (ranks 17–24)
  • Trailing: quality < 80.9 (ranks 25–32)
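With 32 models and four tiers, rank-based tier assignment reduces to a small lookup. The sketch below mirrors the rank bands above (the function name is mine, not ORCFLO's):

    # Sketch: map a 1-based quality rank to its ORCFLO tier by rank quartile.
    def tier_for_rank(rank: int, field_size: int = 32) -> str:
        tiers = ["Leader", "Strong", "Contender", "Trailing"]
        band = field_size // len(tiers)  # 8 ranks per tier in a 32-model field
        return tiers[min((rank - 1) // band, len(tiers) - 1)]

    assert tier_for_rank(1) == "Leader"
    assert tier_for_rank(18) == "Contender"  # e.g. o3-mini's Summarization rank
    assert tier_for_rank(28) == "Trailing"   # o3-mini's overall quality rank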