Model Evaluation: GPT 5.1
May 10, 2026

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them — across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates GPT 5.1 within the May 10, 2026 cohort of the ORCFLO Index: 32 models in total from Anthropic, Google, OpenAI, and Mistral.

OpenAI

GPT 5.1

The Bottom Line

GPT 5.1 lands at #5 of 32 with a quality score of 92.69, trailing GPT 5 (95.04), Gemini 3 Pro (93.17), GPT 5.5 (93.02), and Claude Opus 4.6 (92.78). Its distinguishing feature within this top tier is speed: at 7.6 seconds it is markedly faster than every model ranked above it, all of which exceed 18 seconds. The result is the fastest model in the top-five quality bracket, positioned for latency-sensitive deployments.

Quality: 92.7 · #5 of 32 (Leader) · +6.8 vs median · -2.3 from #1
Cost: $0.0100 per case · 1.5× median · #22 of 32 (Premium) · 116× the cheapest in the field
Speed: 7.6s per case · 0.7× median · #13 of 32 (Quick) · 3.2× the fastest in the field

Key Findings

  • Top-5 quality with top-tier speed: 92.69 overall (#5) delivered in 7.6 seconds (#13), versus 18.9–22.8 seconds for the four higher-ranked models.
  • Leader-tier on three abilities: Extraction 92.7 (#3), Summarization 95.7 (#3), and Writing 92.9 (#5) place it among the top output-generation models in the cohort.
  • Refusal Calibration at 94.8 (#3): handles sensitive or ambiguous prompts with appropriate judgment more reliably than 29 of 31 peers.
  • Instruction Following lags at 87.0 (#23): a clear weak spot inconsistent with the rest of the profile and a risk for rigidly templated workflows.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

Figure 1. Quality vs. cost across all 32 models. Upper-left quadrant = highest value. GPT 5.1 highlighted. P50 median lines shown on both axes.

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.

Figure 2. Quality vs. response time across all 32 models. Upper-left quadrant = best performance. GPT 5.1 highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks — not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged across each category and ranked independently across all 32 models.

GPT 5.1 Performance by Category
Category                   Score   Rank   Tier

Abilities (core language tasks: what the model can produce when given a well-formed prompt)
Analysis                   92.0    #10    Strong
  Reasoning, strategic judgment, disqualifying-factor detection
Extraction                 92.7    #3     Leader
  Field accuracy, null handling, format compliance, zero fabrication
Summarization              95.7    #3     Leader
  Compression quality, key-point retention, length compliance
Writing                    92.9    #5     Leader
  Tone, structure, persuasion, audience adaptation

Behaviors (how the model acts under pressure: reliability, compliance, and restraint)
Hallucination              94.8    #9     Strong
  Fabrication detection, factual grounding, source fidelity
Instruction Following      87.0    #23    Contender
  Constraint adherence, format compliance, multi-part directives
Refusal Calibration        94.8    #3     Leader
  Appropriate refusal vs. over-refusal on legitimate requests

Stability (repeatability and predictability across identical inputs)
Output Consistency         91.7    #14    Strong
  Run-to-run reproducibility, format stability, score variance

Strengths and Cautions

Strengths

  • Summarization leadership (95.7, #3): outperformed only by two models in the entire 32-model field, suitable for high-volume condensation workloads.
  • Extraction precision (92.7, #3): top-tier reliability for structured data pull from documents, forms, and transcripts.
  • Speed at the top tier (7.6s, #13): the fastest model in the top-five quality bracket by a wide margin — peers at #2–#4 all run 18.9s or longer.

Cautions

  • Instruction Following at #23 of 32 (87.0): the weakest dimension by rank; workflows with strict formatting, field-order, or step-sequence requirements may see drift.
  • Cost rank #22 ($0.0100/case): more than twice as expensive as some near-peers and not competitive on unit economics for high-volume use.
  • Analysis at 92.0 (#10): solid but the lowest-ranked of its ability categories, trailing nine other models on reasoning-heavy tasks.

Head-to-Head: Frontier Models

GPT 5.1 is OpenAI’s fast top-tier option in the ORCFLO Index. The table below compares it against the top-performing models from each major provider. Tier assignments use quartile bands (eight models per tier) across the full 32-model field.

Frontier Model Comparison
Model                    Quality Avg   Quality Rank   Cost Rank   Speed Rank
Gemini 3 Pro (Preview)   93.2          #2             #18         #28
GPT 5.5                  93.0          #3             #32         #29
Claude Opus 4.6          92.8          #4             #29         #26
GPT 5.1                  92.7          #5             #22         #13
Gemini 2.5 Pro           92.0          #6             #19         #27
GPT 5.2                  91.9          #7             #23         #19
GPT 5.4                  90.8          #8             #25         #23

When to Use GPT 5.1

  • Best pick: Latency-sensitive summarization and extraction pipelines where 18+ second responses from higher-ranked peers are unacceptable.
  • Best pick: Customer-facing content generation requiring both quality writing (#5) and well-calibrated refusal behavior (#3).
  • Consider: General-purpose analyst workflows — strong overall, but Gemini 3 Pro (#2) or Claude Opus 4.6 (#4) may serve better if latency is not critical.
  • Avoid: Rigidly templated outputs with strict formatting rules, given Instruction Following ranked #23 of 32.
  • Avoid: High-volume, cost-sensitive batch processing where the #22 cost rank materially compounds against cheaper peers like Gemini 2.5 Pro.

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each contestant is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions — quality, cost, and speed — using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — composites hide the tradeoffs that drive model-selection decisions.
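
As a rough illustration of this scoring scheme, the sketch below (Python, with placeholder model names and numbers that are not ORCFLO data) averages four judge scores per model and then ranks quality, cost, and speed independently rather than folding them into a composite.

    from statistics import mean

    # Hypothetical results: four judge scores plus measured cost and latency per model.
    # All values are illustrative placeholders, not actual ORCFLO measurements.
    results = {
        "model-a": {"judge_scores": [93.1, 92.2, 92.8, 93.0], "cost": 0.0100, "seconds": 7.6},
        "model-b": {"judge_scores": [95.3, 94.7, 95.1, 95.0], "cost": 0.0240, "seconds": 22.8},
        "model-c": {"judge_scores": [88.0, 86.5, 87.4, 86.9], "cost": 0.0018, "seconds": 4.1},
    }

    # Quality is the plain average of the four independent judge scores.
    quality = {m: mean(r["judge_scores"]) for m, r in results.items()}

    def rank(values, higher_is_better):
        """Return {model: rank}, with rank 1 for the best value on that dimension."""
        ordered = sorted(values, key=values.get, reverse=higher_is_better)
        return {m: i + 1 for i, m in enumerate(ordered)}

    # Three independent ranks; no composite score is computed.
    quality_rank = rank(quality, higher_is_better=True)
    cost_rank = rank({m: r["cost"] for m, r in results.items()}, higher_is_better=False)
    speed_rank = rank({m: r["seconds"] for m, r in results.items()}, higher_is_better=False)

    for m in results:
        print(m, round(quality[m], 2), quality_rank[m], cost_rank[m], speed_rank[m])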
Tier Definitions
Tier        Quality    Ranks
Leader      ≥ 90.8     1–8
Strong      ≥ 85.9     9–16
Contender   ≥ 80.9     17–24
Trailing    < 80.9     25–32
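
For completeness, here is a minimal sketch of how a rank maps onto these tiers, assuming the quartile bands above (eight models per band in a 32-model field); the helper name is illustrative, not part of the ORCFLO tooling.

    def quality_tier(rank: int, field_size: int = 32) -> str:
        """Map a rank to its tier under the quartile bands listed above:
        1-8 Leader, 9-16 Strong, 17-24 Contender, 25-32 Trailing."""
        band = field_size // 4
        if rank <= band:
            return "Leader"
        if rank <= 2 * band:
            return "Strong"
        if rank <= 3 * band:
            return "Contender"
        return "Trailing"

    # GPT 5.1's overall quality rank (#5) lands in the Leader band, while its
    # Instruction Following rank (#23) lands in the Contender band.
    print(quality_tier(5), quality_tier(23))  # -> Leader Contender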