ORCFLO Index
Model Evaluation: o3 · May 10, 2026

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them: across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates o3 against the May 10, 2026 cohort of the ORCFLO Index: 32 models in total from Anthropic, Google, OpenAI, and Mistral.

OpenAI

o3

The Bottom Line

o3 lands at #15 of 32 with a quality score of 86.15, placing it in the middle of the field, well behind frontier leaders GPT 5 (95.04), Gemini 3 Pro (93.17), and Claude Opus 4.6 (92.78). It delivers leader-tier summarization but is dragged down by weak extraction and output-consistency scores, while ranking #26 on cost and #22 on speed. A specialist tool with notable gaps, not a generalist workhorse.

Quality: 86.2 · #15 of 32 · Strong (tier 2) · +0.2 vs median · -8.9 from #1
Cost: $0.0153 per case · 2.3× median · #26 of 32 · Highest (tier 4) · 178× cheapest in field
Speed: 13.5s per case · 1.3× median · #22 of 32 · Moderate (tier 3) · 5.6× fastest in field
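
The deltas above are all derived from field-level statistics. A minimal Python sketch of that arithmetic follows, using placeholder arrays in place of the full 32-model field; it illustrates the calculations rather than reproducing ORCFLO's pipeline.

```python
# Placeholder field data for illustration only; the real index covers 32 models.
from statistics import median

quality_scores   = [95.04, 93.17, 92.78, 88.4, 86.2]   # quality averages across the field
cost_per_case    = [0.0004, 0.0066, 0.0153, 0.0210]    # USD per test case
seconds_per_case = [2.4, 9.0, 13.5, 21.0]              # latency per test case

o3_quality, o3_cost, o3_seconds = 86.2, 0.0153, 13.5

quality_vs_median = o3_quality - median(quality_scores)    # the "vs median" delta
quality_vs_leader = o3_quality - max(quality_scores)       # the "from #1" delta
cost_vs_median    = o3_cost / median(cost_per_case)        # the "x median" cost multiple
speed_vs_median   = o3_seconds / median(seconds_per_case)  # the "x median" speed multiple
```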

Key Findings

  • Summarization is the standout capability at 95.2 (#6 of 32), placing o3 in the leader tier alongside top frontier models for condensing and synthesizing content.
  • Extraction is a serious weakness at 77.7 (#30 of 32), trailing nearly the entire field and undermining use in structured data workflows.
  • Output Consistency ranks #30 of 32 at 77.0, meaning repeat runs of the same prompt can produce materially different results.
  • Cost-performance is unfavorable: at $0.0153 per case (#26) o3 is more expensive than 25 other models, including peers like Gemini 2.5 Flash ($0.0004) that score higher overall.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

Figure 1. Quality vs. cost across all 32 models. Upper-left quadrant = highest value. o3 highlighted. P50 median lines shown on both axes.
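
As a rough guide to reading the chart, each model's quadrant can be recovered from the two median lines. The short Python sketch below shows that classification on a three-model placeholder set; GPT 5's cost figure here is invented for illustration and is not an ORCFLO number.

```python
from statistics import median

# Illustrative subset; the real chart plots all 32 models.
models = {
    "o3":               {"quality": 86.2, "cost": 0.0153},
    "Gemini 2.5 Flash": {"quality": 88.4, "cost": 0.0004},
    "GPT 5":            {"quality": 95.0, "cost": 0.0300},   # cost is a hypothetical placeholder
}

q_med = median(m["quality"] for m in models.values())
c_med = median(m["cost"] for m in models.values())

for name, m in models.items():
    vertical   = "upper" if m["quality"] >= q_med else "lower"
    horizontal = "left"  if m["cost"]    <= c_med else "right"
    print(f"{name}: {vertical}-{horizontal} quadrant")
```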

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.

Figure 2. Quality vs. response time across all 32 models. Upper-left quadrant = best performance. o3 highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks, not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged within each category and ranked independently across all 32 models.

o3 Performance by Category (each row: score · rank of 32 · tier)

Abilities (core language tasks: what the model can produce when given a well-formed prompt)
  • Analysis (reasoning, strategic judgment, disqualifying-factor detection): 87.5 · #15 · Strong
  • Extraction (field accuracy, null handling, format compliance, zero fabrication): 77.7 · #30 · Trailing
  • Summarization (compression quality, key-point retention, length compliance): 95.2 · #6 · Leader
  • Writing (tone, structure, persuasion, audience adaptation): 90.0 · #12 · Strong

Behaviors (how the model acts under pressure: reliability, compliance, and restraint)
  • Hallucination (fabrication detection, factual grounding, source fidelity): 93.4 · #12 · Strong
  • Instruction Following (constraint adherence, format compliance, multi-part directives): 92.9 · #11 · Strong
  • Refusal Calibration (appropriate refusal vs. over-refusal on legitimate requests): 75.6 · #19 · Contender

Stability (repeatability and predictability across identical inputs)
  • Output Consistency (run-to-run reproducibility, format stability, score variance): 77.0 · #30 · Trailing
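
The scorecard numbers follow the aggregation rule described above: per-case scores are averaged within a category, and each category is ranked independently across the field. A minimal sketch of that aggregation is below, with invented per-case scores chosen only so the averages land near o3's published category figures.

```python
# Invented per-case scores, for illustration only.
case_scores = {  # category -> model -> per-case scores (already judge-averaged)
    "Summarization": {"o3": [96.0, 94.4], "Model B": [91.0, 92.0]},
    "Extraction":    {"o3": [78.0, 77.4], "Model B": [88.0, 90.0]},
}

for category, by_model in case_scores.items():
    averages = {m: sum(scores) / len(scores) for m, scores in by_model.items()}
    ranked = sorted(averages, key=averages.get, reverse=True)
    for rank, model in enumerate(ranked, start=1):
        print(f"{category}: {model} averages {averages[model]:.1f} (rank #{rank})")
```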

Strengths and Cautions

Strengths

  • Summarization leadership: 95.2 score (#6 of 32, leader tier) makes it credible for executive briefings, document condensation, and meeting synthesis.
  • Reliable factual grounding: Hallucination score of 93.4 (#12) and Instruction Following at 92.9 (#11) place it in the strong tier for both behaviors.
  • Solid long-form writing: 90.0 (#12 of 32) supports drafting tasks where prose quality matters more than structured output.

Cautions

  • Avoid for structured extraction work: 77.7 score at rank #30 of 32 means it underperforms on parsing fields, tables, and entities from documents.
  • Inconsistent outputs across runs: 77.0 on Output Consistency (#30) is a risk for automated pipelines that depend on stable, repeatable results.
  • Premium price without premium quality: at #26 on cost and #15 on quality, buyers can find better value in Gemini 2.5 Flash (#13 quality, #7 cost) or o4-mini (#17 quality, #21 cost).

Head-to-Head: Frontier Models

o3 is OpenAI’s mid-tier specialist in the ORCFLO Index. The table below compares it against the models ranked immediately around it in the field. Tier assignments use 25% quartiles across the full 32-model field.

Frontier Model Comparison

Model                Quality Avg   Quality Rank   Cost Rank   Speed Rank
Claude Opus 4.7      88.6          #12            #30         #25
Gemini 2.5 Flash     88.4          #13            #7          #18
Claude Sonnet 4.6    88.2          #14            #24         #24
o3                   86.2          #15            #26         #22
GPT 5 Nano           85.9          #16            #10         #30
o4-mini              85.9          #17            #21         #12
Claude Sonnet 4      85.5          #18            #17         #14

When to Use o3

  • Best pick: Executive summarization of long-form reports, transcripts, and research documents where #6-ranked synthesis quality matters.
  • Best pick: Drafting written analysis and narrative content where the combination of 90.0 writing and 93.4 hallucination control adds value.
  • Consider: Mixed analytical workflows where summarization is the primary task and extraction is handled by a different model.
  • Avoid: High-volume structured data extraction; rank #30 of 32 in this category makes it unfit for invoice parsing, form processing, or entity extraction at scale.
  • Avoid: Automated production pipelines requiring repeatable outputs; Output Consistency at #30 of 32 creates downstream reliability risk.

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each contestant is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions (quality, cost, and speed) using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — composites hide the tradeoffs that drive model-selection decisions.
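
Taken together, the judging and scoring rules are simple to state: every case score is the mean of four judge scores, and each of the three dimensions gets its own ranking with no blending. A minimal Python sketch of that pipeline is below; the second model's latency figure is a placeholder, not an ORCFLO measurement.

```python
# One case: four independent judge scores averaged into a single case score.
judge_scores = {"Gemini 2.5 Pro": 87.0, "Claude Opus 4.7": 85.0,
                "GPT 5.5": 88.0, "Mistral Large": 84.6}
case_quality = sum(judge_scores.values()) / len(judge_scores)

def rank_by(field, key, lower_is_better):
    """Map each model to its rank on one dimension (1 = best); no composite score."""
    ordered = sorted(field, key=lambda m: field[m][key], reverse=not lower_is_better)
    return {model: i for i, model in enumerate(ordered, start=1)}

field = {
    "o3":               {"quality": 86.2, "cost": 0.0153, "seconds": 13.5},
    "Gemini 2.5 Flash": {"quality": 88.4, "cost": 0.0004, "seconds": 9.0},  # latency is a placeholder
}
quality_rank = rank_by(field, "quality", lower_is_better=False)  # higher quality is better
cost_rank    = rank_by(field, "cost",    lower_is_better=True)   # lower cost is better
speed_rank   = rank_by(field, "seconds", lower_is_better=True)   # faster is better
```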
Tier Definitions
  • Leader: quality ≥ 90.8 (ranks 1–8)
  • Strong: quality ≥ 85.9 (ranks 9–16)
  • Contender: quality ≥ 80.9 (ranks 17–24)
  • Trailing: quality < 80.9 (ranks 25–32)
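
Because the field has exactly 32 models, the four tiers correspond to 25% quartiles of the quality ranking, eight models apiece. A small sketch of that mapping, assuming the quartile cut is the only rule, follows.

```python
# Tier assignment from quality rank, assuming a straight quartile split of the 32-model field.
def tier(quality_rank: int, field_size: int = 32) -> str:
    quartile = (quality_rank - 1) * 4 // field_size   # 0, 1, 2, or 3
    return ["Leader", "Strong", "Contender", "Trailing"][quartile]

print(tier(6))    # Leader   (ranks 1-8)
print(tier(15))   # Strong   (ranks 9-16); o3's quality rank
print(tier(30))   # Trailing (ranks 25-32)
```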