ORCFLO Index
Model Evaluation: Claude Haiku 4.5 · May 10, 2026

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them — across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates Claude Haiku 4.5 within the May 10, 2026 cohort of the ORCFLO Index: 32 models in total, spanning Anthropic, Google, OpenAI, and Mistral.

Anthropic

Claude Haiku 4.5

The Bottom Line

Claude Haiku 4.5 lands at #20 of 32 on overall quality (84.72), placing it well behind frontier leaders like GPT 5 (95.04), Gemini 3 Pro (93.17), and stablemate Claude Opus 4.6 (92.78). Its appeal is not raw quality but the bundle: a competitive cost rank (#13 at $0.0031/case) paired with respectable 5.2-second response times (#11). The takeaway: Haiku 4.5 is a workhorse for routine throughput, not a model to deploy where output quality is the binding constraint.
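
To put the per-case figures in concrete terms, the sketch below is illustrative arithmetic only: it assumes the reported $0.0031 per case and 5.2 seconds per case hold at volume, and that requests can be issued in parallel.

```python
# Illustrative batch arithmetic from the per-case figures reported above.
# Assumes $0.0031/case and 5.2 s/case hold at volume (an assumption, not an
# ORCFLO guarantee) and that requests can be parallelized.

COST_PER_CASE_USD = 0.0031
SECONDS_PER_CASE = 5.2

def batch_estimate(cases: int, concurrency: int = 1) -> dict:
    """Rough spend and wall-clock estimate for a batch of routine tasks."""
    return {
        "cases": cases,
        "cost_usd": round(cases * COST_PER_CASE_USD, 2),
        "wall_clock_hours": round(cases * SECONDS_PER_CASE / concurrency / 3600, 1),
    }

# 100,000 routine cases at 50 concurrent requests:
# roughly $310 and about 2.9 hours of wall-clock time.
print(batch_estimate(100_000, concurrency=50))
```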

Quality: 84.7 · #20 of 32 · Contender (-1.2 vs median · -10.3 from #1)
Cost: $0.0031 per case · 0.5× median · #13 of 32 · Standard (36× cheapest in field)
Speed: 5.2s per case · 0.5× median · #11 of 32 · Quick (2.2× fastest in field)

Key Findings

  • Mid-pack quality with low-cost positioning: overall score of 84.72 trails 19 other models, but cost per case of $0.0031 (rank #13) and 5.2s latency (rank #11) make it economically defensible for high-volume workloads.
  • Instruction Following is the weakest dimension at 81.7 (rank #29 of 32) — a material gap for workflows that depend on precise adherence to formatting or procedural rules.
  • Output Consistency is the standout metric at 92.7 (rank #12), suggesting predictable behavior across repeated runs even if peak quality lags.
  • Summarization underperforms the model’s other ability scores at 84.5 (rank #25, trailing tier), notably weaker than its Writing (89.0, #15) and Analysis (87.3, #16) results.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

Figure 1. Quality vs. cost across all 32 models. Upper-left quadrant = highest value. Claude Haiku 4.5 highlighted. P50 median lines shown on both axes.
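
For readers reproducing this view from the published numbers, a minimal plotting sketch follows. The `results` records, column names, and highlight choice are placeholders of our own, not part of the ORCFLO tooling.

```python
# Minimal sketch of a quality-vs-cost scatter with P50 median lines,
# in the spirit of Figure 1. Fill `results` with the published per-model
# quality scores and per-case costs; the record structure is assumed.
import statistics
import matplotlib.pyplot as plt

results = [
    {"model": "Claude Haiku 4.5", "quality": 84.7, "cost": 0.0031},
    # ... one record per model in the 32-model cohort ...
]

def plot_quality_vs_cost(results, highlight="Claude Haiku 4.5"):
    costs = [r["cost"] for r in results]
    quals = [r["quality"] for r in results]
    plt.scatter(costs, quals, alpha=0.6)

    # P50 median reference lines on both axes.
    plt.axvline(statistics.median(costs), linestyle="--", linewidth=1)
    plt.axhline(statistics.median(quals), linestyle="--", linewidth=1)

    # Mark the model under review.
    for r in results:
        if r["model"] == highlight:
            plt.scatter([r["cost"]], [r["quality"]], zorder=3)
            plt.annotate(highlight, (r["cost"], r["quality"]))

    plt.xlabel("Cost per case (USD)")
    plt.ylabel("Average quality score")
    plt.title("Quality vs. cost (upper-left = highest value)")
    plt.show()

plot_quality_vs_cost(results)
```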

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.

Figure 2. Quality vs. response time across all 32 models. Upper-left quadrant = best performance. Claude Haiku 4.5 highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks — not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged across each category and ranked independently across all 32 models.

Claude Haiku 4.5 Performance by Category (score · rank of 32 · tier)

Abilities (core language tasks: what the model can produce when given a well-formed prompt)
  • Analysis (reasoning, strategic judgment, disqualifying-factor detection): 87.3 · #16 · Strong
  • Extraction (field accuracy, null handling, format compliance, zero fabrication): 84.5 · #15 · Strong
  • Summarization (compression quality, key-point retention, length compliance): 84.5 · #25 · Trailing
  • Writing (tone, structure, persuasion, audience adaptation): 89.0 · #15 · Strong

Behaviors (how the model acts under pressure: reliability, compliance, and restraint)
  • Hallucination (fabrication detection, factual grounding, source fidelity): 89.6 · #18 · Contender
  • Instruction Following (constraint adherence, format compliance, multi-part directives): 81.7 · #29 · Trailing
  • Refusal Calibration (appropriate refusal vs. over-refusal on legitimate requests): 68.5 · #23 · Contender

Stability (repeatability and predictability across identical inputs)
  • Output Consistency (run-to-run reproducibility, format stability, score variance): 92.7 · #12 · Strong

Strengths and Cautions

Strengths

  • Consistency and predictability: Output Consistency of 92.7 (rank #12 of 32) places it in the strong tier, useful for production pipelines where variance is costly.
  • Writing and Analysis hold up: 89.0 in Writing (#15) and 87.3 in Analysis (#16) deliver competent results for drafting, structured reasoning, and exploratory tasks.
  • Cost-speed combination: $0.0031 per case (#13) and 5.2-second responses (#11) outperform its closest quality peers, Claude Sonnet 4.5 ($0.0088, 10.3s) and o4-mini ($0.0092, 7.2s), on both axes; the ratio check below makes the gap concrete.
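
A quick ratio check, computed only from the per-case figures quoted in this report:

```python
# Cost and latency of Claude Haiku 4.5's nearest quality peers, expressed
# as multiples of Haiku's own per-case figures (all numbers from this report).
haiku = {"cost": 0.0031, "seconds": 5.2}
peers = {
    "Claude Sonnet 4.5": {"cost": 0.0088, "seconds": 10.3},
    "o4-mini":           {"cost": 0.0092, "seconds": 7.2},
}

for name, p in peers.items():
    print(f"{name}: {p['cost'] / haiku['cost']:.1f}x the cost, "
          f"{p['seconds'] / haiku['seconds']:.1f}x the latency")
# Claude Sonnet 4.5: 2.8x the cost, 2.0x the latency
# o4-mini: 3.0x the cost, 1.4x the latency
```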

Cautions

  • Instruction Following at rank #29 (81.7) is a significant liability for compliance, structured-output, and agentic workflows where rule adherence is non-negotiable.
  • Summarization weakness (84.5, #25 of 32, trailing tier) makes it a poor fit for executive briefings, document distillation, or report-generation pipelines.
  • Refusal Calibration of 68.5 (#23) indicates inconsistent judgment on when to engage versus decline, which can frustrate end users in customer-facing deployments.

Head-to-Head: Frontier Models

Claude Haiku 4.5 is Anthropic’s cost-efficient mid-tier option in the ORCFLO Index. The table below compares it against the models ranked immediately around it on quality, drawn from each major provider. Tier assignments use quartiles (25% bands) across the full 32-model field.

Frontier Model Comparison

Model                     Quality Avg   Quality Rank   Cost Rank   Speed Rank
o4-mini                   85.9          #17            #21         #12
Claude Sonnet 4           85.5          #18            #17         #14
GPT 4.1                   85.0          #19            #15         #10
Claude Haiku 4.5          84.7          #20            #13         #11
Claude Sonnet 4.5         84.6          #21            #20         #17
Mistral Large 3 (2512)    83.2          #22            #11         #21
Gemini 2.0 Flash          83.1          #23            #5          #4

When to Use Claude Haiku 4.5

  • Best pick: High-volume drafting and content generation where Writing quality (89.0) and low per-case cost matter more than top-tier polish.
  • Best pick: Repeatable analytical tasks needing consistent output formatting, leveraging the rank #12 Output Consistency score.
  • Consider: Mid-complexity extraction pipelines (84.5, rank #15), acceptable if downstream validation catches the occasional miss.
  • Avoid: Strict instruction-following workflows; rank #29 of 32 (81.7) makes it unreliable for templated outputs or multi-step procedural prompts.
  • Avoid: Summarization-heavy use cases such as executive digests or meeting recaps, where its rank #25 score lags the field meaningfully.

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each contestant is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.
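
As a concrete reading of the averaging step, the sketch below shows how a single case’s final score could be computed from the four judge scores; the function and data layout are our own illustration, not ORCFLO’s pipeline.

```python
# One test case's final score is the plain mean of the four judges' scores.
# Function name and data layout are illustrative, not the ORCFLO implementation.
JUDGES = ["Gemini 2.5 Pro", "Claude Opus 4.7", "GPT 5.5", "Mistral Large"]

def final_score(judge_scores: dict[str, float]) -> float:
    """Average one case's scores across the full judge panel."""
    missing = set(JUDGES) - judge_scores.keys()
    if missing:
        raise ValueError(f"missing judge scores: {sorted(missing)}")
    return sum(judge_scores[j] for j in JUDGES) / len(JUDGES)

# Example with made-up judge scores for a single case:
print(final_score({
    "Gemini 2.5 Pro": 86.0,
    "Claude Opus 4.7": 84.0,
    "GPT 5.5": 85.0,
    "Mistral Large": 83.0,
}))  # -> 84.5
```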

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions — quality, cost, and speed — using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — composites hide the tradeoffs that drive model-selection decisions.
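
To illustrate what three independent ranks (and no composite) means in practice, here is a toy example with hypothetical models and numbers; only the ranking direction on each axis is taken from the description above.

```python
# Three independent rankings over a toy result set: quality ranks descending,
# cost and time rank ascending. Models and numbers are hypothetical.
results = [
    {"model": "A", "quality": 91.2, "cost": 0.0140, "seconds": 9.8},
    {"model": "B", "quality": 84.7, "cost": 0.0031, "seconds": 5.2},
    {"model": "C", "quality": 88.1, "cost": 0.0092, "seconds": 7.2},
]

def ranks(rows, key, reverse):
    ordered = sorted(rows, key=lambda r: r[key], reverse=reverse)
    return {r["model"]: i + 1 for i, r in enumerate(ordered)}

quality_rank = ranks(results, "quality", reverse=True)   # higher is better
cost_rank = ranks(results, "cost", reverse=False)        # lower is better
speed_rank = ranks(results, "seconds", reverse=False)    # faster is better

for r in results:
    m = r["model"]
    print(m, quality_rank[m], cost_rank[m], speed_rank[m])
# A 1 3 3
# B 3 1 1
# C 2 2 2
```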
Tier Definitions
  • Leader: quality ≥ 90.8 (ranks 1–8)
  • Strong: quality ≥ 85.9 (ranks 9–16)
  • Contender: quality ≥ 80.9 (ranks 17–24)
  • Trailing: quality < 80.9 (ranks 25–32)
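
Because the four tiers split the 32-model field into equal quartiles of eight, tier assignment reduces to a rank lookup. The sketch below is our own illustration of that mapping, not ORCFLO’s code.

```python
# Rank-based tier lookup: a 32-model field split into four quartiles of eight.
# Illustrative only, not the ORCFLO implementation.
def tier_for_rank(rank: int, field_size: int = 32) -> str:
    """Map a 1-indexed quality rank to its quartile tier."""
    if not 1 <= rank <= field_size:
        raise ValueError(f"rank must be in 1..{field_size}")
    tiers = ["Leader", "Strong", "Contender", "Trailing"]
    return tiers[(rank - 1) // (field_size // 4)]

print(tier_for_rank(20))  # Claude Haiku 4.5's quality rank -> "Contender"
```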