ORCFLO Index
Model Evaluation: Gemini 2.0 Flash · May 10, 2026

The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them — across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates Gemini 2.0 Flash against the ORCFLO Index cohort of May 10, 2026: 32 models in total, from Anthropic, Google, OpenAI, and Mistral.

Google

Gemini 2.0 Flash

The Bottom Line

Gemini 2.0 Flash lands at #23 of 32 on quality (83.08), well behind frontier leaders like GPT 5 (95.04) and Gemini 2.5 Pro (91.99), but its real story is economics: #5 on cost ($0.0003/case) and #4 on speed (4.0s). Extraction performance reaches the leader tier at #7, while analysis and writing trail in the bottom third. The model is a high-throughput utility pick, not a reasoning workhorse.

Quality: 83.1 · #23 of 32 (Contender) · -2.8 vs. median · -12.0 from #1
Cost: $0.0003 per case · #5 of 32 (Budget) · 0.0× the field median · 3.5× the cost of the field's cheapest model
Speed: 4.0s per case · #4 of 32 (Fast) · 0.4× the field median · 1.7× the latency of the field's fastest model

Key Findings

  • Extraction is the standout capability at 90.9 (rank #7 of 32, leader tier) — competitive with models costing 10–30× more per case.
  • Cost-speed profile is among the field's best: $0.0003/case (#5) paired with 4.0s response time (#4), making it suitable for volume workloads.
  • Analysis (74.7, #24) and writing (76.8, #23) lag materially, placing the model in the contender tier for tasks requiring synthesis or nuanced prose.
  • Output consistency ranks #27 of 32 at 81.5 (trailing tier), a meaningful concern for repeatable, production-grade pipelines.

Model Performance: Quality & Cost

The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.

Figure 1. Quality vs. cost across all 32 models. Upper-left quadrant = highest value. Gemini 2.0 Flash highlighted. P50 median lines shown on both axes.

Model Performance: Quality & Time Elapsed

Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.

Figure 2. Quality vs. response time across all 32 models. Upper-left quadrant = best performance. Gemini 2.0 Flash highlighted.

Category Scorecard

The ORCFLO Index evaluates models using real-world business tasks — not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged across each category and ranked independently across all 32 models.

Gemini 2.0 Flash Performance by Category
Category · Score · Rank · Tier

Abilities (core language tasks: what the model can produce when given a well-formed prompt)
  • Analysis (reasoning, strategic judgment, disqualifying-factor detection): 74.7 · #24 · Contender
  • Extraction (field accuracy, null handling, format compliance, zero fabrication): 90.9 · #7 · Leader
  • Summarization (compression quality, key-point retention, length compliance): 87.6 · #21 · Contender
  • Writing (tone, structure, persuasion, audience adaptation): 76.8 · #23 · Contender

Behaviors (how the model acts under pressure: reliability, compliance, and restraint)
  • Hallucination (fabrication detection, factual grounding, source fidelity): 90.8 · #16 · Strong
  • Instruction Following (constraint adherence, format compliance, multi-part directives): 88.3 · #19 · Contender
  • Refusal Calibration (appropriate refusal vs. over-refusal on legitimate requests): 74.2 · #20 · Contender

Stability (repeatability and predictability across identical inputs)
  • Output Consistency (run-to-run reproducibility, format stability, score variance): 81.5 · #27 · Trailing

Strengths and Cautions

Strengths

  • Extraction leader tier (90.9, #7) — strong fit for structured data pulls, form parsing, and field-level capture at scale.
  • Top-5 cost economics ($0.0003/case, #5) — roughly 29× cheaper than Claude Sonnet 4.5 ($0.0088) at comparable overall quality.
  • Top-5 latency (4.0s, #4) — beats neighboring peers like Mistral Large 3 (13.4s) and Claude Sonnet 4.5 (10.3s) by a wide margin.

Cautions

  • Output consistency at #27 of 32 (81.5) — among the weakest in the cohort for stability, requiring validation layers in production.
  • Analysis at #24 (74.7) and refusal calibration at #20 (74.2) — both contender tier, limiting fitness for advisory or judgment-heavy workflows.
  • Summarization at #21 (87.6) trails most premium peers despite being a common Flash-tier use case; verify quality on representative samples.

Head-to-Head: Frontier Models

Gemini 2.0 Flash is Google’s cost-efficient utility pick in the ORCFLO Index. The table below compares it against similarly ranked models from each major provider. Tier assignments use 25% quartiles across the full 32-model field.

Frontier Model Comparison
Model · Quality Avg · Quality Rank · Cost Rank · Speed Rank
  • Claude Haiku 4.5: 84.7 · quality #20 · cost #13 · speed #11
  • Claude Sonnet 4.5: 84.6 · quality #21 · cost #20 · speed #17
  • Mistral Large 3 (2512): 83.2 · quality #22 · cost #11 · speed #21
  • Gemini 2.0 Flash: 83.1 · quality #23 · cost #5 · speed #4
  • Gemini 2.5 Flash-Lite: 80.9 · quality #24 · cost #4 · speed #5
  • GPT 4.1 Mini: 80.4 · quality #25 · cost #9 · speed #9
  • Mistral Small 3 (24B): 79.9 · quality #26 · cost #1 · speed #7

When to Use Gemini 2.0 Flash

  • Best pick: High-volume extraction pipelines (invoices, forms, structured fields) where 90.9 extraction quality and $0.0003/case economics align.
  • Best pick: Latency-sensitive, customer-facing features needing sub-5-second responses at low marginal cost.
  • Consider: Routing or triage layers in multi-model systems where speed and price matter more than peak reasoning quality (see the sketch after this list).
  • Avoid: Strategic analysis, research synthesis, or executive-grade writing; quality ranks of #23–24 on those tasks are insufficient.
  • Avoid: Workflows requiring deterministic, repeatable outputs; a consistency rank of #27 introduces unacceptable variance.
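For the routing-layer scenario above, here is a hypothetical sketch; the model IDs, task labels, and the call_model helper are stand-ins, not part of the ORCFLO Index or any provider SDK. It simply sends extraction and triage traffic to Gemini 2.0 Flash and reserves judgment-heavy work for a stronger, slower model:

```python
# Hypothetical triage layer (illustrative only): route cheap, latency-sensitive task types
# to Gemini 2.0 Flash and judgment-heavy work to a stronger, slower model.

FAST_CHEAP_MODEL = "gemini-2.0-flash"
REASONING_MODEL = "gemini-2.5-pro"

# Task types this report rates Gemini 2.0 Flash as a fit for (extraction leader tier;
# routing/triage called out above). Everything else escalates to the stronger model.
FLASH_SAFE_TASKS = {"extraction", "triage"}

def pick_model(task_type: str) -> str:
    """Prefer the cheap, fast model unless the task needs peak reasoning quality."""
    return FAST_CHEAP_MODEL if task_type in FLASH_SAFE_TASKS else REASONING_MODEL

def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real provider SDK call; returns a placeholder string here.
    return f"[{model}] response to: {prompt[:40]}"

def handle(task_type: str, prompt: str) -> str:
    return call_model(pick_model(task_type), prompt)
```

The split mirrors the scorecard: keep the cheap, fast model on the task types where it ranks in the leader tier, and escalate anything that resembles analysis or nuanced writing.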

The ORCFLO Index

This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each contestant is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.

How We Test

The ORCFLO Index evaluates large language models across three independent dimensions — quality, cost, and speed — using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.

Test Cases
40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
Judge Panel
Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
Scoring
Three independent ranks: quality (higher is better), cost (lower is better), speed (faster is better). No composite score — composites hide the tradeoffs that drive model-selection decisions.
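As a minimal sketch of this scoring scheme, assuming nothing beyond what is described above (judge averaging plus three separate rankings); the per-judge scores, dictionary layout, and helper names below are illustrative placeholders, not ORCFLO data:

```python
from statistics import mean

# Illustrative inputs only: the per-judge scores are placeholders (ORCFLO does not publish
# them); the cost and latency figures are the per-case numbers quoted in this report.
results = {
    "Gemini 2.0 Flash": {"judge_scores": [84.0, 82.5, 83.5, 82.3], "cost_usd": 0.0003, "seconds": 4.0},
    "Claude Sonnet 4.5": {"judge_scores": [85.1, 84.0, 84.9, 84.4], "cost_usd": 0.0088, "seconds": 10.3},
}

def quality(entry: dict) -> float:
    # Final quality score is the plain average across the four independent judges.
    return mean(entry["judge_scores"])

def rank_by(key, lower_is_better: bool = False) -> dict:
    # Each dimension is ranked on its own; no composite score is ever formed.
    ordered = sorted(results, key=lambda name: key(results[name]), reverse=not lower_is_better)
    return {name: position + 1 for position, name in enumerate(ordered)}

quality_rank = rank_by(quality)                                        # higher is better
cost_rank = rank_by(lambda e: e["cost_usd"], lower_is_better=True)     # cheaper is better
speed_rank = rank_by(lambda e: e["seconds"], lower_is_better=True)     # faster is better
```

Keeping the three rankings separate is what lets a model like Gemini 2.0 Flash sit at #23 on quality while still ranking #5 on cost and #4 on speed.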
Tier Definitions
  • Leader: quality ≥ 90.8 · ranks 1–8
  • Strong: quality ≥ 85.9 · ranks 9–16
  • Contender: quality ≥ 80.9 · ranks 17–24
  • Trailing: quality < 80.9 · ranks 25–32
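A minimal sketch of the rank-quartile assignment implied by these definitions and by the 25%-quartile note in the head-to-head section; the function name is a placeholder, and the quality thresholds above are the observed boundary scores rather than inputs to it:

```python
def tier_for_rank(rank: int, field_size: int = 32) -> str:
    # 25% quartiles over the field: ranks 1-8 Leader, 9-16 Strong, 17-24 Contender, 25-32 Trailing.
    quartile = field_size // 4
    if rank <= quartile:
        return "Leader"
    if rank <= 2 * quartile:
        return "Strong"
    if rank <= 3 * quartile:
        return "Contender"
    return "Trailing"

# Spot checks against the scorecard above.
assert tier_for_rank(7) == "Leader"       # Extraction, #7
assert tier_for_rank(16) == "Strong"      # Hallucination, #16
assert tier_for_rank(23) == "Contender"   # Overall quality, #23
assert tier_for_rank(27) == "Trailing"    # Output consistency, #27
```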