Model Evaluation: GPT 5 Nano · May 10, 2026

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them — across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates GPT 5 Nano against the ORCFLO Index May 10, 2026 cohort: 32 models tested across Anthropic, Google, OpenAI, and Mistral.


GPT 5 Nano

The Bottom Line

GPT 5 Nano lands at #16 of 32 overall with a quality score of 85.93, placing it in the middle of the field, well behind frontier siblings like GPT 5 (#1) and GPT 5 Mini (#10). Its appeal is economic: at $0.0014 per case it ranks #10 on cost, making it one of the cheaper OpenAI options. However, a 26.2-second average response time (#30 of 32) is a serious operational drawback. It is a budget workhorse that pays for its low cost with slow throughput.

Metric | Value | Rank | Tier | Relative to field
Quality | 85.9 | #16 of 32 | 2 (Strong) | +0.0 vs median · -9.1 from #1
Cost | $0.0014 per case (0.2× median) | #10 of 32 | 2 (Standard) | 17× the cheapest in field
Speed | 26.2s per case (2.5× median) | #30 of 32 | 4 (Slow) | 11× the fastest in field

Key Findings

  • Output Consistency is the standout strength at 94.8 (#8 of 32, leader tier), the only category where the model breaks into the top ten.
  • Cost-per-case of $0.0014 (#10) undercuts most quality-tier peers — Claude Sonnet 4.6 ($0.0118) and o3 ($0.0153) cost roughly 8–11× more for similar overall quality.
  • Speed is the structural weakness: 26.2s average response time ranks #30, slower than every quality-neighbor peer, including o4-mini at 7.2s and GPT 4.1 at 5.2s.
  • Extraction performance trails badly at 80.5 (#27 of 32), making the model a poor fit for structured data tasks despite strong summarization scores.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

Figure 1. Quality vs. cost across all 32 models. Upper-left quadrant = highest value. GPT 5 Nano highlighted. P50 median lines shown on both axes.
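For readers who want to reproduce this kind of view from their own benchmark data, the minimal Python sketch below builds a quality-versus-cost scatter with P50 median lines and one highlighted model. It is not the ORCFLO plotting code; the model subset is illustrative, and the GPT 4.1 cost figure is a made-up placeholder.

```python
# Minimal sketch of a quality-vs-cost chart in the style of Figure 1.
# GPT 5 Nano, Claude Sonnet 4.6, and o3 figures come from this report;
# the GPT 4.1 cost is a placeholder, not a measured value.
import matplotlib.pyplot as plt
import numpy as np

models = {
    # model: (cost per case in $, quality score)
    "GPT 5 Nano": (0.0014, 85.9),
    "Claude Sonnet 4.6": (0.0118, 88.2),
    "o3": (0.0153, 86.2),
    "GPT 4.1": (0.0060, 85.0),  # cost is a placeholder
}

costs = np.array([c for c, _ in models.values()])
quality = np.array([q for _, q in models.values()])

fig, ax = plt.subplots()
ax.scatter(costs, quality, color="grey")

# Highlight the model under review.
hx, hy = models["GPT 5 Nano"]
ax.scatter([hx], [hy], color="red", zorder=3, label="GPT 5 Nano")

# P50 median reference lines on both axes.
ax.axvline(np.median(costs), linestyle="--", linewidth=0.8)
ax.axhline(np.median(quality), linestyle="--", linewidth=0.8)

ax.set_xlabel("Cost per case ($)")
ax.set_ylabel("Quality score")
ax.legend()
plt.show()
```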

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.

Figure 2. Quality vs. response time across all 32 models. Upper-left quadrant = best performance. GPT 5 Nano highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks — not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged across each category and ranked independently across all 32 models.
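As a concrete illustration of that aggregation, the sketch below averages per-case scores within each category and then ranks models independently per category. The case scores and the second model are invented placeholders, not ORCFLO data.

```python
# Minimal sketch: average case scores within a category, then rank models
# independently per category (1 = best). All numbers are placeholders.
from collections import defaultdict

# case_scores[model][category] -> list of judge-averaged case scores
case_scores = {
    "GPT 5 Nano": {"Summarization": [93.0, 92.0], "Extraction": [81.0, 80.0]},
    "Model B":    {"Summarization": [90.0, 91.0], "Extraction": [88.0, 86.0]},
}

# 1. Average case scores within each category.
category_avg = {
    model: {cat: sum(scores) / len(scores) for cat, scores in cats.items()}
    for model, cats in case_scores.items()
}

# 2. Rank models independently within each category.
ranks = defaultdict(dict)
categories = {cat for cats in case_scores.values() for cat in cats}
for cat in categories:
    ordered = sorted(category_avg, key=lambda m: category_avg[m][cat], reverse=True)
    for position, model in enumerate(ordered, start=1):
        ranks[model][cat] = position

print(category_avg["GPT 5 Nano"], dict(ranks)["GPT 5 Nano"])
```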

GPT 5 Nano Performance by Category
Category | Score | Rank | Tier

Abilities (core language tasks: what the model can produce when given a well-formed prompt)
Analysis: reasoning, strategic judgment, disqualifying-factor detection | 82.6 | #18 | Contender
Extraction: field accuracy, null handling, format compliance, zero fabrication | 80.5 | #27 | Trailing
Summarization: compression quality, key-point retention, length compliance | 92.5 | #13 | Strong
Writing: tone, structure, persuasion, audience adaptation | 75.7 | #24 | Contender

Behaviors (how the model acts under pressure: reliability, compliance, and restraint)
Hallucination: fabrication detection, factual grounding, source fidelity | 87.3 | #23 | Contender
Instruction Following: constraint adherence, format compliance, multi-part directives | 89.9 | #16 | Strong
Refusal Calibration: appropriate refusal vs. over-refusal on legitimate requests | 84.1 | #14 | Strong

Stability (repeatability and predictability across identical inputs)
Output Consistency: run-to-run reproducibility, format stability, score variance | 94.8 | #8 | Leader

Strengths and Cautions

Strengths

  • Output Consistency at 94.8 (#8) — the model produces highly repeatable results across runs, valuable for workflows requiring predictable behavior.
  • Summarization at 92.5 (#13, strong tier) — the strongest of the ability categories, suitable for condensing reports, transcripts, and long-form content.
  • Refusal Calibration at 84.1 (#14) and Instruction Following at 89.9 (#16) — behavior controls land in the strong tier, indicating reliable compliance with prompt directives.

Cautions

  • Latency is unacceptable for interactive use: 26.2s puts it at #30 of 32, while peers o4-mini (7.2s) and GPT 4.1 (5.2s) deliver comparable quality at a fraction of the wait.
  • Extraction at 80.5 (#27) and Writing at 75.7 (#24) are clear weak spots — both fall into trailing or low-contender territory.
  • Hallucination score of 87.3 (#23) is mediocre, suggesting factual reliability lags the field for high-stakes content.

Head-to-Head: Frontier Models

GPT 5 Nano is OpenAI’s mid-tier budget option in the ORCFLO Index. The table below compares it against the top-performing models from each major provider. Tier assignments use quartiles across the full 32-model field.

Frontier Model Comparison
Model | Quality Avg | Quality Rank | Cost Rank | Speed Rank
Gemini 2.5 Flash | 88.4 | #13 | #7 | #18
Claude Sonnet 4.6 | 88.2 | #14 | #24 | #24
o3 | 86.2 | #15 | #26 | #22
GPT 5 Nano | 85.9 | #16 | #10 | #30
o4-mini | 85.9 | #17 | #21 | #12
Claude Sonnet 4 | 85.5 | #18 | #17 | #14
GPT 4.1 | 85.0 | #19 | #15 | #10

When to Use GPT 5 Nano

Best pick: High-volume batch summarization where cost per case matters more than turnaround time.
Best pick: Background or asynchronous pipelines that benefit from strong output consistency (94.8) and can tolerate response times measured in tens of seconds.
Consider: General-purpose analysis tasks if budget is the binding constraint — but o4-mini or Gemini 2.5 Flash offer similar quality with far better speed.
Avoid: Real-time chat, agent loops, or any user-facing application where 26-second response times will degrade the experience.
Avoid: Structured data extraction or precision writing tasks, where the model ranks #27 and #24 respectively.

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each contestant is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.
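A minimal sketch of that judge-panel averaging is shown below. The judge scores are invented for illustration, and the helper function is our own naming, not an ORCFLO API.

```python
# Minimal sketch: four judges score each case independently and the final
# case score is their mean. Scores below are invented for illustration.
JUDGES = ["Gemini 2.5 Pro", "Claude Opus 4.7", "GPT 5.5", "Mistral Large"]

def case_score(judge_scores: dict[str, float]) -> float:
    """Average the four independent judge scores for one test case."""
    missing = set(JUDGES) - judge_scores.keys()
    if missing:
        raise ValueError(f"missing judge scores: {missing}")
    return sum(judge_scores[j] for j in JUDGES) / len(JUDGES)

example = {
    "Gemini 2.5 Pro": 88.0,
    "Claude Opus 4.7": 84.5,
    "GPT 5.5": 86.0,
    "Mistral Large": 85.5,
}
print(case_score(example))  # 86.0
```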

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions — quality, cost, and speed — using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — composites hide the tradeoffs that drive model-selection decisions.
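The sketch below illustrates this three-rank scheme on placeholder measurements: each dimension is ranked on its own, and no composite score is ever formed. Only the GPT 5 Nano figures come from this report.

```python
# Minimal sketch: rank quality, cost, and speed separately, never combined.
measurements = {
    # model: (quality avg, cost per case in $, seconds per case)
    "Model A": (88.4, 0.0050, 9.0),      # placeholder values
    "GPT 5 Nano": (85.9, 0.0014, 26.2),  # values from this report
    "Model C": (86.2, 0.0153, 12.0),     # placeholder values
}

def rank_by(index, reverse):
    ordered = sorted(measurements, key=lambda m: measurements[m][index], reverse=reverse)
    return {model: i + 1 for i, model in enumerate(ordered)}

quality_rank = rank_by(0, reverse=True)   # higher quality is better
cost_rank    = rank_by(1, reverse=False)  # lower cost is better
speed_rank   = rank_by(2, reverse=False)  # lower latency is better

for m in measurements:
    print(m, quality_rank[m], cost_rank[m], speed_rank[m])
```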
Tier Definitions
Leader: quality ≥ 90.8 (ranks 1–8)
Strong: quality ≥ 85.9 (ranks 9–16)
Contender: quality ≥ 80.9 (ranks 17–24)
Trailing: quality < 80.9 (ranks 25–32)
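For completeness, the small helper below (our own illustration, not ORCFLO code) maps an overall quality score onto the published cutoffs above.

```python
# Minimal sketch of the quality-tier cutoffs listed above.
def quality_tier(score: float) -> str:
    if score >= 90.8:
        return "Leader"      # ranks 1-8 in this cohort
    if score >= 85.9:
        return "Strong"      # ranks 9-16
    if score >= 80.9:
        return "Contender"   # ranks 17-24
    return "Trailing"        # ranks 25-32

print(quality_tier(85.9))  # Strong (GPT 5 Nano's overall quality score)
```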