Model Evaluation: GPT 5.2
May 10, 2026

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them: across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates GPT 5.2 against the 32 models in the ORCFLO Index May 10, 2026 cohort, drawn from Anthropic, Google, OpenAI, and Mistral.


GPT 5.2

The Bottom Line

GPT 5.2 lands at #7 of 32 with a quality score of 91.89, placing it just behind Gemini 2.5 Pro (#6) and GPT 5.1 (#5), and well behind sibling GPT 5 (#1, 95.04). It earns the #1 rank in Output Consistency (99.6) and leader-tier marks across Analysis, Writing, Instruction Following, and Refusal Calibration. However, Extraction (#21) is a clear soft spot, and cost rank #23 makes it pricier than several higher-scoring peers. A dependable generalist, but not the sharpest value pick in OpenAI's own lineup.

Quality: 91.9 (#7 of 32, Leader) · +6.0 vs. median · -3.1 from #1
Cost: $0.0105 per case (#23 of 32, Premium) · 1.6× median · 122× the cost of the cheapest model in the field
Speed: 10.9s per case (#19 of 32, Moderate) · 1.1× median · 4.6× the time of the fastest model in the field
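For readers who want to check the card arithmetic, the sketch below shows how the "× median" and "× cheapest" multiples can be derived from per-case costs. GPT 5.2's cost is taken from the card above; the other cohort values are hypothetical placeholders chosen so the multiples come out as shown, not actual ORCFLO data.

```python
# Minimal sketch: deriving the cost multiples shown on the card above.
# gpt52_cost is from the Cost card; the other values are hypothetical
# placeholders, not real ORCFLO cohort data.
from statistics import median

gpt52_cost = 0.0105  # $ per case (Cost card)
cohort_costs = [0.000086, 0.0041, 0.0066, 0.0100, 0.0105]  # illustrative

print(f"{gpt52_cost / median(cohort_costs):.1f}x median")   # 1.6x median
print(f"{gpt52_cost / min(cohort_costs):.0f}x cheapest")    # 122x cheapest
```

The speed multiples on the Speed card follow the same pattern, with seconds per case in place of dollars per case.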

Key Findings

  • Top-of-field output consistency: ranks #1 of 32 at 99.6, the strongest stability score in the cohort — repeatable results across runs.
  • Leader-tier behavior profile: Instruction Following #6 (93.5) and Refusal Calibration #5 (92.2) indicate disciplined, predictable responses.
  • Extraction is the weak link: ranks #21 of 32 at 82.5, roughly 10 points below its Analysis and Writing scores.
  • Outranked by cheaper siblings: GPT 5.1 (#5, $0.0100) beats GPT 5.2 on quality at a lower cost and faster 7.6s response time.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

Figure 1. Quality vs. cost across all 32 models. Upper-left quadrant = highest value. GPT 5.2 highlighted. P50 median lines shown on both axes.
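As a rough illustration, a Figure 1-style chart can be reproduced with a few lines of matplotlib. The points below are a small hypothetical subset of the cohort (values taken from the tables in this report), not the full 32-model dataset.

```python
# Sketch of a Figure 1-style chart: quality vs. cost with P50 guide lines.
# Only a small illustrative subset of models is plotted here.
import matplotlib.pyplot as plt
from statistics import median

models = {
    "GPT 5.2": (0.0105, 91.9),
    "GPT 5.1": (0.0100, 92.7),
    "Gemini 2.5 Pro": (0.0083, 92.0),
}
costs = [c for c, _ in models.values()]
quals = [q for _, q in models.values()]

fig, ax = plt.subplots()
ax.scatter(costs, quals)
ax.axvline(median(costs), linestyle="--")  # P50 cost line
ax.axhline(median(quals), linestyle="--")  # P50 quality line
for name, (c, q) in models.items():
    ax.annotate(name, (c, q))
ax.set_xlabel("Cost per case ($)")
ax.set_ylabel("Quality score")
ax.set_title("Quality vs. cost (upper-left = best value)")
plt.show()
```

The same recipe, with the x-axis swapped to total elapsed time, yields a Figure 2-style chart.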

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.

Figure 2. Quality vs. response time across all 32 models. Upper-left quadrant = best performance. GPT 5.2 highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks, not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged within each category and ranked independently across all 32 models; a sketch of this averaging-and-ranking step follows the scorecard.

GPT 5.2 Performance by Category

Abilities: core language tasks, i.e., what the model can produce when given a well-formed prompt.
  • Analysis (reasoning, strategic judgment, disqualifying-factor detection): 92.5 · #6 · Leader
  • Extraction (field accuracy, null handling, format compliance, zero fabrication): 82.5 · #21 · Contender
  • Summarization (compression quality, key-point retention, length compliance): 90.8 · #15 · Strong
  • Writing (tone, structure, persuasion, audience adaptation): 91.5 · #8 · Leader

Behaviors: how the model acts under pressure (reliability, compliance, and restraint).
  • Hallucination (fabrication detection, factual grounding, source fidelity): 92.4 · #13 · Strong
  • Instruction Following (constraint adherence, format compliance, multi-part directives): 93.5 · #6 · Leader
  • Refusal Calibration (appropriate refusal vs. over-refusal on legitimate requests): 92.2 · #5 · Leader

Stability: repeatability and predictability across identical inputs.
  • Output Consistency (run-to-run reproducibility, format stability, score variance): 99.6 · #1 · Leader
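As referenced above, here is a minimal sketch of the averaging-and-ranking step: each model's case scores within a category are averaged, then all models are ranked on that average. The case scores below are invented for illustration.

```python
# Sketch of per-category scoring: average a model's case scores within a
# category, then rank all models on that average. Scores are invented.
category_scores = {  # model -> scores on that category's test cases
    "GPT 5.2": [83.0, 82.0],
    "GPT 5.1": [90.0, 88.0],
    "GPT 5 Mini": [79.0, 80.0],
}

averages = {m: sum(s) / len(s) for m, s in category_scores.items()}
ranked = sorted(averages, key=averages.get, reverse=True)
for rank, model in enumerate(ranked, start=1):
    print(f"#{rank} {model}: {averages[model]:.1f}")
```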

Strengths and Cautions

Strengths

  • Best-in-cohort consistency (#1, 99.6): the most reliable output stability of any model tested, valuable for workflows requiring repeatable results.
  • Analysis and Writing in leader tier: Analysis #6 (92.5) and Writing #8 (91.5) support knowledge-work tasks like research synthesis and long-form drafting.
  • Disciplined refusal behavior (#5, 92.2): calibrates appropriately on sensitive prompts without over-refusing, suitable for customer-facing deployments.

Cautions

  • Extraction at #21 (82.5): underperforms on structured data pulls — not the right pick for invoice parsing, form extraction, or table-to-JSON pipelines.
  • Cost rank #23 ($0.0105): more expensive than GPT 5.1 (#5, $0.0100) and Gemini 2.5 Pro (#6, $0.0083), both of which score higher on quality.
  • Mid-pack speed (#19, 10.9s): slower than GPT 5.1 (7.6s) for similar workloads, which compounds in high-volume scenarios.

Head-to-Head: Frontier Models

GPT 5.2 is OpenAI's mid-tier leader in the ORCFLO Index. The table below compares it against the top-performing models from each major provider. Tier assignments use quartiles of the full 32-model field.

Frontier Model Comparison

Model             Quality Avg   Quality Rank   Cost Rank   Speed Rank
Claude Opus 4.6   92.8          #4             #29         #26
GPT 5.1           92.7          #5             #22         #13
Gemini 2.5 Pro    92.0          #6             #19         #27
GPT 5.2           91.9          #7             #23         #19
GPT 5.4           90.8          #8             #25         #23
Claude Opus 4.5   90.7          #9             #28         #15
GPT 5 Mini        89.6          #10            #14         #31

When to Use GPT 5.2

  • Best pick: Repeatable analytical workflows where output consistency across runs matters, such as reporting, scoring, and classification pipelines.
  • Best pick: Long-form writing and analysis tasks needing leader-tier quality with predictable instruction adherence.
  • Consider: General-purpose assistant deployments. A solid all-rounder, but verify GPT 5.1 isn't a better fit at lower cost and faster speed.
  • Avoid: Structured data extraction tasks. At rank #21, twenty models extract more reliably.
  • Avoid: Cost-sensitive high-volume batch jobs. Cost rank #23 makes cheaper performers like GPT 5 Mini or Gemini 2.5 Pro more economical.

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each contestant is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.
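As a concrete illustration of the panel averaging, here is a minimal Python sketch, assuming each judge produces a single numeric score per case. The per-case judge scores below are invented for the example.

```python
# Sketch of judge-panel aggregation: four independent scores per case,
# averaged into a final score. The judge scores below are made up.
judge_scores = {
    "Gemini 2.5 Pro": 93.0,
    "Claude Opus 4.7": 91.5,
    "GPT 5.5": 92.0,
    "Mistral Large": 90.5,
}
final_score = sum(judge_scores.values()) / len(judge_scores)
print(f"Final case score: {final_score:.2f}")  # 91.75
```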

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions (quality, cost, and speed) using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — composites hide the tradeoffs that drive model-selection decisions.
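A minimal sketch of these three independent rankings, assuming each model reduces to a (quality, cost, speed) triple. Gemini 2.5 Pro's 12.0s figure is a hypothetical placeholder; the other values come from this report's tables.

```python
# Sketch: three independent rankings, no composite. Quality ranks
# high-to-low; cost and speed rank low-to-high. Gemini's 12.0 s is a
# hypothetical placeholder; the other figures come from this report.
models = {
    # name: (quality, $ per case, seconds per case)
    "GPT 5.2": (91.9, 0.0105, 10.9),
    "GPT 5.1": (92.7, 0.0100, 7.6),
    "Gemini 2.5 Pro": (92.0, 0.0083, 12.0),
}

def rank(index, best_high=False):
    ordered = sorted(models, key=lambda m: models[m][index], reverse=best_high)
    return {m: i for i, m in enumerate(ordered, start=1)}

print("quality:", rank(0, best_high=True))
print("cost:   ", rank(1))
print("speed:  ", rank(2))
```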
Tier Definitions
  • Leader: quality ≥ 90.8 (ranks 1–8)
  • Strong: quality ≥ 85.9 (ranks 9–16)
  • Contender: quality ≥ 80.9 (ranks 17–24)
  • Trailing: quality < 80.9 (ranks 25–32)
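Because the four tiers are quartiles of a 32-model field, the rank-to-tier mapping reduces to bands of eight, as in this sketch:

```python
# Sketch of the tier bands above: quartiles of a 32-model field are
# bands of eight quality ranks.
def tier(rank: int) -> str:
    if rank <= 8:
        return "Leader"
    if rank <= 16:
        return "Strong"
    if rank <= 24:
        return "Contender"
    return "Trailing"

print(tier(7))   # GPT 5.2's overall quality rank -> Leader
print(tier(21))  # GPT 5.2's Extraction rank -> Contender
```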