ORCFLO Index
Model Evaluation: GPT 5 Mini · May 10, 2026

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them — across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates GPT 5 Mini against the ORCFLO Index cohort of May 10, 2026: 32 models in total, drawn from Anthropic, Google, OpenAI, and Mistral.


GPT 5 Mini

The Bottom Line

GPT 5 Mini lands at #10 of 32 with a quality score of 89.62, trailing nine higher-quality models that include its larger GPT 5 siblings, Gemini 3 Pro, and Claude Opus 4.5. At $0.0041 per case (#14 on cost), it offers a defensible quality-to-cost ratio against pricier peers like Claude Opus 4.5 ($0.0156). However, a 28.5-second average response time ranks #31 of 32, making it the second-slowest model in the field. A capable mid-tier option where latency is not a constraint.

Quality: 89.6 · #10 of 32 · Strong · +3.7 vs median · -5.4 from #1
Cost: $0.0041 per case · 0.6× median · #14 of 32 · Standard · 48× the field's cheapest model
Speed: 28.5s per case · 2.8× median · #31 of 32 · Slow · 12× slower than the field's fastest

Key Findings

  • Behavioral discipline is the standout trait: Refusal Calibration ranks #6 (91.8), Instruction Following #8 (93.2), and Hallucination control #10 (94.2) — all in leader or strong tiers.
  • Summarization places at #7 of 32 with a 95.0 score, the model's single highest category result and competitive with top-tier output.
  • Extraction is the structural weakness: ranking #26 of 32 at 80.9, well below the model's other Abilities scores and a clear gap versus quality peers.
  • Speed is the binding constraint: at 28.5 seconds per response (#31 of 32), it is roughly 2.8x slower than Gemini 3 Flash (10.2s) at a comparable quality tier.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

Figure 1. Quality vs. cost across all 32 models. Upper-left quadrant = highest value. GPT 5 Mini highlighted. P50 median lines shown on both axes.
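
For readers who want to rebuild this view from their own measurements, here is a minimal matplotlib sketch of the same layout: quality on the y-axis, cost per case on the x-axis, P50 median lines on both. The three plotted points use figures quoted elsewhere in this report; any additional models would need their own data. Swapping cost for elapsed time yields the Figure 2 view.

```python
import statistics
import matplotlib.pyplot as plt

# (model, quality score, cost per case in USD), taken from this report.
models = [
    ("GPT 5 Mini", 89.6, 0.0041),
    ("GPT 5.4", 90.8, 0.0146),
    ("Claude Opus 4.5", 90.7, 0.0156),
]
quality = [q for _, q, _ in models]
cost = [c for _, _, c in models]

fig, ax = plt.subplots()
ax.scatter(cost, quality)
for name, q, c in models:
    ax.annotate(name, (c, q), textcoords="offset points", xytext=(5, 5))

# P50 median lines on both axes, as in Figure 1.
ax.axhline(statistics.median(quality), linestyle="--", linewidth=0.8)
ax.axvline(statistics.median(cost), linestyle="--", linewidth=0.8)

ax.set_xlabel("Cost per case (USD)")
ax.set_ylabel("Quality score")
ax.set_title("Quality vs. cost (upper-left = highest value)")
plt.show()
```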

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.

Figure 2. Quality vs. response time across all 32 models. Upper-left quadrant = best performance. GPT 5 Mini highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks — not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged across each category and ranked independently across all 32 models.

GPT 5 Mini Performance by Category
Each category below lists score · rank (of 32) · tier.

Abilities: core language tasks, i.e., what the model can produce when given a well-formed prompt.
  • Analysis (reasoning, strategic judgment, disqualifying-factor detection): 89.7 · #13 · Strong
  • Extraction (field accuracy, null handling, format compliance, zero fabrication): 80.9 · #26 · Trailing
  • Summarization (compression quality, key-point retention, length compliance): 95.0 · #7 · Leader
  • Writing (tone, structure, persuasion, audience adaptation): 86.7 · #16 · Strong

Behaviors: how the model acts under pressure, i.e., reliability, compliance, and restraint.
  • Hallucination (fabrication detection, factual grounding, source fidelity): 94.2 · #10 · Strong
  • Instruction Following (constraint adherence, format compliance, multi-part directives): 93.2 · #8 · Leader
  • Refusal Calibration (appropriate refusal vs. over-refusal on legitimate requests): 91.8 · #6 · Leader

Stability: repeatability and predictability across identical inputs.
  • Output Consistency (run-to-run reproducibility, format stability, score variance): 85.5 · #23 · Contender (see the sketch after this scorecard)
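
One plausible way to quantify what the Output Consistency category measures is to re-run an identical prompt several times and track both exact-output repeats and judge-score spread. The sketch below does exactly that; all outputs and scores in it are hypothetical, and ORCFLO's actual variance formula is not published in this report.

```python
import hashlib
from statistics import mean, pstdev

# Hypothetical outputs from re-running one identical prompt three times.
outputs = [
    '{"sentiment": "positive", "confidence": 0.92}',
    '{"sentiment": "positive", "confidence": 0.92}',
    '{"sentiment":"positive","confidence":0.9}',  # same content, drifting format
]
scores = [84.0, 88.0, 86.5]  # hypothetical judge scores, one per run

# Count distinct outputs (format stability) and score spread (score variance).
distinct = len({hashlib.sha256(o.encode()).hexdigest() for o in outputs})
print(f"distinct outputs: {distinct}/{len(outputs)}")
print(f"score mean={mean(scores):.1f}, stdev={pstdev(scores):.2f}")
# More distinct outputs and a wider score spread signal weaker consistency.
```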

Strengths and Cautions

Strengths

  • Refusal Calibration (#6, 91.8): leader-tier judgment on when to comply versus decline, ahead of most of the top 10.
  • Summarization (#7, 95.0): leader-tier condensation quality, the model's strongest category and a viable substitute for higher-cost models on this task.
  • Cost efficiency relative to quality peers: at $0.0041 per case, it costs roughly 25% of Claude Opus 4.5 ($0.0156) and 28% of GPT 5.4 ($0.0146) while sitting only ~1 point behind in quality.

Cautions

  • Extraction ranks #26 of 32 (80.9): a trailing-tier result that makes it a poor fit for structured data pulls, form parsing, or field-level retrieval workloads.
  • Speed rank #31 of 32 (28.5s): the second-slowest model in the cohort; unsuitable for interactive chat, real-time agents, or any latency-sensitive interface.
  • Output Consistency (#23, 85.5): contender-tier stability means repeated runs may produce more variation than higher-ranked alternatives in regulated or audited workflows.

Head-to-Head: Frontier Models

GPT 5 Mini is OpenAI’s mid-tier value contender in the ORCFLO Index. The table below compares it against the top-performing models from each major provider. Tier assignments follow quality-rank quartiles (eight models per tier) across the full 32-model field.

Frontier Model Comparison
Model                      Quality Avg  Quality Rank  Cost Rank  Speed Rank
GPT 5.2                    91.9         #7            #23        #19
GPT 5.4                    90.8         #8            #25        #23
Claude Opus 4.5            90.7         #9            #28        #15
GPT 5 Mini                 89.6         #10           #14        #31
Gemini 3 Flash (Preview)   88.7         #11           #12        #16
Claude Opus 4.7            88.6         #12           #30        #25
Gemini 2.5 Flash           88.4         #13           #7         #18

When to Use GPT 5 Mini

  • Best pick: Batch summarization of documents, transcripts, or reports where latency is not user-facing and the #7 Summarization score is fully utilized (a rough sizing sketch follows this list).
  • Best pick: Cost-sensitive workflows requiring strong instruction adherence and low hallucination risk, such as policy-bound content generation.
  • Consider: General-purpose analysis tasks where the #13 Analysis rank is acceptable and budget rules out Claude Opus 4.5 or GPT 5.2.
  • Avoid: Real-time chat, copilots, or any interactive surface; the 28.5s response time ranks second-slowest in the field.
  • Avoid: Structured extraction or data-parsing pipelines, given the #26 Extraction rank trails most of the cohort.
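
To make the batch-versus-interactive call concrete, the sketch below projects total cost and wall-clock time from the per-case figures in this report. The per-case cost and latency are GPT 5 Mini's measured numbers; the document count and concurrency level are hypothetical workload parameters.

```python
# Project cost and wall-clock time for a batch job from per-case figures.
COST_PER_CASE = 0.0041   # USD per case, from this report (cost rank #14)
SECONDS_PER_CASE = 28.5  # average latency, from this report (speed rank #31)

N_DOCS = 10_000          # hypothetical batch size
CONCURRENCY = 50         # hypothetical number of parallel requests

total_cost = N_DOCS * COST_PER_CASE
wall_clock_hours = (N_DOCS / CONCURRENCY) * SECONDS_PER_CASE / 3600

print(f"Estimated cost: ${total_cost:,.2f}")
print(f"Estimated time: {wall_clock_hours:.1f} h at {CONCURRENCY} concurrent requests")
# -> $41.00 and ~1.6 h: fine for overnight batches, unusable for live chat.
```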

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each contestant is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions — quality, cost, and speed — using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — composites hide the tradeoffs that drive model-selection decisions.
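
As a minimal sketch of that scoring flow, assuming judges score each case independently and per-case panel averages roll up into a category mean (one plausible reading of the description above; the raw scores here are made up):

```python
from statistics import mean

# The four-judge panel named in this report.
JUDGES = ["Gemini 2.5 Pro", "Claude Opus 4.7", "GPT 5.5", "Mistral Large"]

# Hypothetical raw scores for one model: {category: {case_id: {judge: score}}}
raw = {
    "Summarization": {
        "case_01": {"Gemini 2.5 Pro": 96.0, "Claude Opus 4.7": 94.0,
                    "GPT 5.5": 95.0, "Mistral Large": 95.0},
        "case_02": {"Gemini 2.5 Pro": 93.0, "Claude Opus 4.7": 97.0,
                    "GPT 5.5": 96.0, "Mistral Large": 94.0},
    },
}

def category_scores(raw_scores):
    """Average the four judges per case, then average cases per category."""
    out = {}
    for category, cases in raw_scores.items():
        case_means = [mean(judge_scores[j] for j in JUDGES)
                      for judge_scores in cases.values()]
        out[category] = mean(case_means)
    return out

print(category_scores(raw))  # {'Summarization': 95.0}
```
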
Tier Definitions
  • Leader: Quality ≥ 90.8 · Ranks 1–8
  • Strong: Quality ≥ 85.9 · Ranks 9–16
  • Contender: Quality ≥ 80.9 · Ranks 17–24
  • Trailing: Quality < 80.9 · Ranks 25–32
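
Since the rank bands above are exact quartiles of a 32-model field, tier assignment by rank reduces to integer division. A minimal sketch, assuming tiers are cut purely by rank:

```python
def tier_for_rank(rank: int, field_size: int = 32) -> str:
    """Map a quality rank to an ORCFLO tier by quartile (8 models per tier)."""
    quartile = (rank - 1) // (field_size // 4)  # 0..3
    return ["Leader", "Strong", "Contender", "Trailing"][quartile]

assert tier_for_rank(8) == "Leader"      # ranks 1-8
assert tier_for_rank(10) == "Strong"     # GPT 5 Mini's overall quality rank
assert tier_for_rank(23) == "Contender"  # its Output Consistency rank
assert tier_for_rank(26) == "Trailing"   # its Extraction rank
```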