ORCFLO Index
Model Evaluation: Claude Opus 4.6
April 13, 2026
Research Note: claude-opus-4-6-2026-04-13

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them — across 40 real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates Claude Opus 4.6 in the context of 39 models from Anthropic, OpenAI, Google, Mistral, and others.

Claude Opus 4.6

The Bottom Line

Claude Opus 4.6 ranks #2 overall in quality across 39 models. It's a top-tier performer, placing in the Leader quartile with particular strength in Output Consistency (99.2) and Hallucination (94.5). It's premium-priced (#31 in cost) and slower (#28 in speed).

Quality: 90.6 (#2 of 39)
Cost: 0.023 credits (#31 of 39)
Speed: 19.7s (#28 of 39)

Key Findings

  • #1 in Output Consistency. Claude Opus 4.6 scores 99.2 in this category, placing it in the Leader tier across all 39 models.
  • #5 in Hallucination. Scores 94.5, reinforcing strength across multiple Behaviors categories.
  • Cost: 0.023 credits for all 40 tests. Ranks #31 of 39 in cost efficiency. Average response time: 19.7s (rank #28).
  • vs. Claude Opus 4.5: 2.3 points ahead in overall quality. Slower by 8.0s on average.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 39 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across our full basket of 40 real-world business tasks, alongside the total cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

[Chart: quality score vs. cost to complete all 40 tests]
Figure 1. Quality vs. cost across all 39 models. Upper-left quadrant = highest value. Claude Opus 4.6 highlighted. P50 median lines shown on both axes.
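The same view can be rebuilt from the published numbers. The sketch below assumes a plain matplotlib scatter with P50 median lines on both axes; Claude Opus 4.6's quality and cost figures come from this report, while the other points are placeholders, not ORCFLO results.

```python
# Hypothetical sketch of a quality-vs-cost scatter with P50 median lines,
# in the spirit of Figure 1. Only the Claude Opus 4.6 row uses figures from
# this report; the other rows are illustrative placeholders.
import statistics
import matplotlib.pyplot as plt

# (model, average quality score, total cost in credits for all 40 tests)
models = [
    ("Claude Opus 4.6", 90.6, 0.023),
    ("Placeholder B", 88.0, 0.015),
    ("Placeholder C", 82.5, 0.006),
    ("Placeholder D", 75.0, 0.003),
]

quality = [q for _, q, _ in models]
cost = [c for _, _, c in models]

fig, ax = plt.subplots()
ax.scatter(cost, quality)
for name, q, c in models:
    ax.annotate(name, (c, q), textcoords="offset points", xytext=(5, 3))

# P50 median reference lines on both axes, as shown in the published chart.
ax.axvline(statistics.median(cost), linestyle="--", linewidth=0.8)
ax.axhline(statistics.median(quality), linestyle="--", linewidth=0.8)

ax.set_xlabel("Cost to complete all 40 tests (credits)")
ax.set_ylabel("Average quality score")
ax.set_title("Quality vs. cost (upper-left = highest value)")
plt.show()
```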

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete all 40 tests. Models in the upper-left deliver the best quality with the least delay.

[Chart: quality score vs. time to complete all 40 tests]
Figure 2. Quality vs. response time across all 39 models. Upper-left quadrant = best performance. Claude Opus 4.6 highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks — not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle tasks that professionals actually encounter: strategic analysis, document extraction, business writing, summarization, and behavioral reliability. Scores are averaged across five cases per category, and each category is ranked independently across all 39 models.
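As a rough illustration of that scoring scheme, the sketch below averages five per-case scores into a category score and then ranks each category independently across models. The model names and scores are placeholders, and the assumption that each case score has already been averaged across the judge panel is ours; this is not the ORCFLO scoring code.

```python
# Illustrative sketch of per-category scoring: each category score is the mean
# of five case scores (each assumed to be pre-averaged across the four judges),
# and categories are ranked independently across models. Data is made up.
from statistics import mean

# case_scores[model][category] -> list of five per-case scores (0-100)
case_scores = {
    "Model A": {"Analysis": [92, 95, 90, 94, 96], "Extraction": [80, 78, 77, 82, 79]},
    "Model B": {"Analysis": [85, 88, 84, 86, 87], "Extraction": [90, 91, 88, 92, 89]},
}

# 1) Average the five cases to get a category score per model.
category_scores = {
    model: {cat: mean(scores) for cat, scores in cats.items()}
    for model, cats in case_scores.items()
}

# 2) Rank each category independently (rank 1 = highest score).
categories = {cat for cats in case_scores.values() for cat in cats}
for cat in sorted(categories):
    ranked = sorted(category_scores, key=lambda m: category_scores[m][cat], reverse=True)
    for rank, model in enumerate(ranked, start=1):
        print(f"{cat}: #{rank} {model} ({category_scores[model][cat]:.1f})")
```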

Claude Opus 4.6 Performance by Category
Category | Score | Rank | Tier | vs. Claude Opus 4.5

Abilities (core language tasks: what the model can produce when given a well-formed prompt)
  Analysis (reasoning, strategic judgment, disqualifying-factor detection) | 93.4 | #3 | Leader | +5.5
  Extraction (field accuracy, null handling, format compliance, zero fabrication) | 78.9 | #18 | Strong | −9.8
  Summarization (compression quality, key-point retention, length compliance) | 88.0 | #7 | Leader | +19.2
  Writing (tone, structure, persuasion, audience adaptation) | 89.5 | #3 | Leader | 0.0

Behaviors (how the model acts under pressure: reliability, compliance, and restraint)
  Hallucination (fabrication detection, factual grounding, source fidelity) | 94.5 | #5 | Leader | +2.3
  Instruction Following (constraint adherence, format compliance, multi-part directives) | 88.8 | #4 | Leader | −2.2
  Refusal Calibration (appropriate refusal vs. over-refusal on legitimate requests) | 92.6 | #7 | Leader | +3.5

Stability (repeatability and predictability across identical inputs)
  Output Consistency (run-to-run reproducibility, format stability, score variance) | 99.2 | #1 | Leader | +0.2

Strengths and Weaknesses

Strengths

  • #1 in Output Consistency — scores 99.2, placing in the Leader tier (run-to-run reproducibility, format stability, score variance).
  • #5 in Hallucination — scores 94.5, placing in the Leader tier (fabrication detection, factual grounding, source fidelity).
  • 19.7s average response — rank #28 in speed, suitable for most non-real-time workflows.

Cautions

  • Premium pricing — 0.023 credits for all 40 tests (rank #31 of 39). Significantly more expensive than mid-tier alternatives, and cheaper models exist with competitive quality.

Head-to-Head: Frontier Models

Claude Opus 4.6 is Anthropic’s top-performing frontier model in the ORCFLO Index. The table below compares it against the top-performing models from each major provider. Tier assignments are based on quartile cutoffs across the full 39-model field (see Tier Definitions below).

Frontier Model Comparison
Model | Quality Avg | Quality Rank | Cost Rank | Speed Rank | Tier
Claude Opus 4.6 | 90.6 | #2 | #31 | #28 | Leader
GPT 5.2 Pro | 90.3 | #3 | #35 | #35 | Leader
Gemini 3 Pro (Preview) | 89.9 | #5 | #20 | #30 | Leader
GPT 5 | 89.7 | #6 | #32 | #34 | Leader
GPT 5.1 | 89.5 | #7 | #25 | #25 | Leader
GPT 5.4 | 88.7 | #8 | #28 | #24 | Leader
Claude Opus 4.5 | 88.3 | #9 | #29 | #18 | Leader
Claude Opus 4.7 | 87.0 | #11 | #33 | #22 | Strong
Gemini 2.5 Pro | 86.8 | #12 | #21 | #29 | Strong
Gemini 3 Flash (Preview) | 85.2 | #15 | #13 | #17 | Strong
Claude Sonnet 4.6 | 84.7 | #16 | #26 | #23 | Strong

When to Use Claude Opus 4.6

  • Best pick: Output Consistency tasks where quality is the primary criterion — rank #1 of 39.
  • Best pick: Hallucination tasks — also a Leader at rank #5.
  • Consider: General-purpose workflows where a balance of quality, cost, and speed matters — evaluate against peers ranked nearby.

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. All four judge models (Gemini 2.5 Pro, Claude Opus 4.6, GPT 5.2, and Mistral Large) are also contestants in the benchmark, which introduces potential scoring bias. We actively monitor for judge bias to ensure that no judge can systematically advantage its own provider's models. Cost and speed measurements reflect API pricing and latency as of the test date and will change as providers update their offerings.
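One simple form such bias monitoring could take is comparing the mean score each judge assigns to its own provider's models against the mean it assigns to everyone else. The sketch below illustrates that check with placeholder scores and only two of the four judges; it is not the ORCFLO monitoring pipeline.

```python
# Illustrative judge-bias check: for each judge, compare the mean score it
# gives to models from its own provider vs. the mean it gives to all other
# models. Scores and coverage are placeholders, not ORCFLO data.
from statistics import mean

JUDGE_PROVIDER = {"Claude Opus 4.6": "Anthropic", "GPT 5.2": "OpenAI"}
MODEL_PROVIDER = {"Claude Opus 4.6": "Anthropic", "GPT 5.2 Pro": "OpenAI", "Gemini 3 Pro": "Google"}

# scores[judge][model] -> mean score that judge assigned to that model
scores = {
    "Claude Opus 4.6": {"Claude Opus 4.6": 91.0, "GPT 5.2 Pro": 90.1, "Gemini 3 Pro": 89.5},
    "GPT 5.2":         {"Claude Opus 4.6": 90.2, "GPT 5.2 Pro": 90.8, "Gemini 3 Pro": 90.0},
}

for judge, per_model in scores.items():
    own   = [s for m, s in per_model.items() if MODEL_PROVIDER[m] == JUDGE_PROVIDER[judge]]
    other = [s for m, s in per_model.items() if MODEL_PROVIDER[m] != JUDGE_PROVIDER[judge]]
    gap = mean(own) - mean(other)
    print(f"{judge}: own-provider mean {mean(own):.1f}, others {mean(other):.1f}, gap {gap:+.1f}")
```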

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions — quality, cost, and speed — using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.6, GPT 5.2, Mistral Large. Each judge scores independently. Final score is the average across all four judges.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — because composite scores hide the tradeoffs that drive model selection decisions.
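A minimal sketch of that scheme is below: quality is averaged across the four judges, and quality, cost, and speed are then ranked independently with no composite. Only Claude Opus 4.6's headline figures are taken from this report; the other models and all per-judge scores are placeholders.

```python
# Sketch of the scoring scheme described above: quality = mean of four judge
# scores; quality, cost, and speed ranked independently; no composite score.
# Figures are placeholders except Claude Opus 4.6's quality/cost/speed.
from statistics import mean

results = {
    # model: (per-judge quality scores, total cost in credits, avg seconds per response)
    "Claude Opus 4.6": ([91.0, 90.2, 90.8, 90.4], 0.023, 19.7),
    "Placeholder B":   ([88.5, 87.9, 88.2, 88.6], 0.012, 9.4),
    "Placeholder C":   ([84.0, 83.2, 83.8, 84.4], 0.005, 6.1),
}

quality = {m: mean(judges) for m, (judges, _, _) in results.items()}
cost    = {m: c for m, (_, c, _) in results.items()}
speed   = {m: s for m, (_, _, s) in results.items()}

def rank(metric, higher_is_better):
    """Return a dict of model -> rank (1 = best) for one metric."""
    order = sorted(metric, key=metric.get, reverse=higher_is_better)
    return {m: i for i, m in enumerate(order, start=1)}

quality_rank = rank(quality, higher_is_better=True)   # higher quality is better
cost_rank    = rank(cost, higher_is_better=False)     # lower cost is better
speed_rank   = rank(speed, higher_is_better=False)    # faster (fewer seconds) is better

for m in results:
    print(f"{m}: quality #{quality_rank[m]}, cost #{cost_rank[m]}, speed #{speed_rank[m]}")
```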
Tier Definitions
Leader: top 25% (ranks 1–10)
Strong: 25–50% (ranks 11–20)
Contender: 50–75% (ranks 21–30)
Trailing: bottom 25% (ranks 31–39)
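Given those cutoffs, tier assignment reduces to a simple rank-to-band mapping. The function below is our own sketch (tier_for_rank is not an ORCFLO API); it hard-codes the bands listed above for the 39-model field.

```python
# Hypothetical helper mapping a quality rank to its ORCFLO tier band, using
# the cutoffs listed above for a 39-model field. Function name is ours.
def tier_for_rank(rank: int, field_size: int = 39) -> str:
    """Map a quality rank (1 = best) to a quartile-based tier."""
    if not 1 <= rank <= field_size:
        raise ValueError("rank out of range")
    if rank <= 10:        # top 25%
        return "Leader"
    if rank <= 20:        # 25-50%
        return "Strong"
    if rank <= 30:        # 50-75%
        return "Contender"
    return "Trailing"     # bottom 25%

print(tier_for_rank(2))   # Claude Opus 4.6 -> "Leader"
print(tier_for_rank(16))  # Claude Sonnet 4.6 -> "Strong"
```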

Full methodology, case descriptions, and historical results are available at orcflo.com/explore/models.