The ORCFLO Index is an independent benchmark that evaluates large language models the way business professionals actually use them — across real-world tasks spanning analysis, writing, extraction, summarization, and behavioral reliability. Each model is scored on three dimensions (quality, cost, and speed) by a panel of four independent judges. This report evaluates Gemini 3 Flash (Preview) against the rest of the ORCFLO Index — May 10, 2026 cohort · 32 models tested · Anthropic, Google, OpenAI, Mistral.
Gemini 3 Flash (Preview)
Gemini 3 Flash (Preview) lands at #11 of 32 on overall quality (88.70), trailing the five OpenAI GPT-5 variants, both Gemini Pro entries, and Anthropic's Opus 4.5 and 4.6. What distinguishes it is the combination: #12 on cost ($0.0020 per case) alongside leader-tier marks in Summarization (#4) and Instruction Following (#4). It is the cheapest model in the top dozen by quality, with one significant caveat — Output Consistency ranks #26.
Key Findings
- Leader-tier on two dimensions: Summarization scores 95.3 (#4 of 32) and Instruction Following hits 94.1 (#4 of 32), placing Flash in the same band as GPT 5 and Gemini 3 Pro on these specific tasks.
- Strongest price-to-quality ratio in its rank neighborhood: At $0.0020 per case, Flash is roughly 7x cheaper than Claude Opus 4.5 (#9, $0.0156) and 13x cheaper than Claude Opus 4.7 (#12, $0.0262) while scoring within 2 points of either.
- Output Consistency is the clear outlier: 81.7 (#26 of 32, trailing tier) is the only score on the card below the Strong tier and the lowest Output Consistency mark among the top 15 models by quality.
- Speed is mid-pack, not a differentiator: 10.2 seconds (#16 of 32) is competitive but roughly matched by cheaper alternatives such as Gemini 2.5 Flash (10.7s, $0.0004).
Model Performance: Quality & Cost
The chart below plots quality against cost for all 32 models in the ORCFLO Index. Each dot represents the average quality score a model achieved across the full basket of real-world business tasks, alongside the cost in credits to complete the entire test suite. Models in the upper-left quadrant deliver the highest quality at the lowest cost.
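The chart itself isn't reproduced here, but the same view can be roughly rebuilt from figures quoted in this report. The sketch below (matplotlib assumed) plots the five models whose per-case costs appear in the text; note the published chart uses suite-level credits rather than per-case dollar costs.

```python
import matplotlib.pyplot as plt

# Per-case cost (USD) and average quality score, as quoted in this report.
# Only a subset of the 32-model field has both figures stated in the text.
models = {
    "Gemini 3 Flash (Preview)": (0.0020, 88.7),
    "Claude Opus 4.5": (0.0156, 90.7),
    "Claude Opus 4.7": (0.0262, 88.6),
    "GPT 5 Mini": (0.0041, 89.6),
    "Gemini 2.5 Flash": (0.0004, 88.4),
}

fig, ax = plt.subplots()
for name, (cost, quality) in models.items():
    ax.scatter(cost, quality)
    ax.annotate(name, (cost, quality), fontsize=8)

ax.set_xscale("log")  # per-case costs span two orders of magnitude
ax.set_xlabel("Cost per case (USD, log scale)")
ax.set_ylabel("Average quality score")
ax.set_title("Quality vs. cost (subset of the ORCFLO field)")
plt.show()
```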
Model Performance: Quality & Time Elapsed
Quality alone doesn’t tell the full story — response time determines whether a model is viable for time-sensitive workflows. The chart below plots each model’s quality score against the total time required to complete the test suite. Models in the upper-left deliver the best quality with the least delay.
Category Scorecard
The ORCFLO Index evaluates models using real-world business tasks — not academic puzzles or synthetic benchmarks. Each test case is designed to expose specific differences in how models handle the work professionals actually do. Scores are averaged across each category and ranked independently across all 32 models.
| Category | Score | Rank | Tier |
|---|---|---|---|
| Abilities — Core language tasks: what the model can produce when given a well-formed prompt. | | | |
| Analysis: reasoning, strategic judgment, disqualifying-factor detection | 91.9 | #12 | Strong |
| Extraction: field accuracy, null handling, format compliance, zero fabrication | 86.7 | #11 | Strong |
| Summarization: compression quality, key-point retention, length compliance | 95.3 | #4 | Leader |
| Writing: tone, structure, persuasion, audience adaptation | | #14 | Strong |
| Behaviors — How the model acts under pressure: reliability, compliance, and restraint. | | | |
| Hallucination: fabrication detection, factual grounding, source fidelity | | #15 | Strong |
| Instruction Following: constraint adherence, format compliance, multi-part directives | 94.1 | #4 | Leader |
| Refusal Calibration: appropriate refusal vs. over-refusal on legitimate requests | 79.0 | #16 | Strong |
| Stability — Repeatability and predictability across identical inputs. | | | |
| Output Consistency: run-to-run reproducibility, format stability, score variance | 81.7 | #26 | Trailing |
Strengths and Cautions
Strengths
- Summarization (95.3, #4 of 32, leader tier): Ranks ahead of GPT 5.1, GPT 5.2, and Claude Opus 4.5 in this category despite a much lower price point.
- Instruction Following (94.1, #4 of 32, leader tier): Top-five placement makes it a reliable choice for structured workflows where prompt adherence matters more than raw reasoning depth.
- Cost discipline at a top-15 quality level: $0.0020 per case (#12) is the lowest cost among all models ranked #1–#11 on quality, roughly half the cost of GPT 5 Mini ($0.0041, #14).
Cautions
- Output Consistency is trailing-tier (81.7, #26 of 32): Repeated calls on the same prompt can produce materially different responses — a problem for regression testing, audit trails, or templated pipelines (a lightweight repeat-run check is sketched after this list).
- Refusal Calibration is the weakest behavior score (79.0, #16): Mid-pack handling of sensitive or ambiguous requests; expect occasional over- or under-refusal in regulated or customer-facing settings.
- Not a substitute for top-tier reasoning models: Analysis (91.9, #12) and Extraction (86.7, #11) are strong but trail GPT 5, Gemini 3 Pro, and Claude Opus 4.6 by 3–6 points on the harder cognitive categories.
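For teams that care about the consistency caveat, one low-effort safeguard is to replay a representative prompt several times before deploying and measure how much the outputs drift. The sketch below is illustrative only: `call_model` is a placeholder for whatever client you actually use (nothing ORCFLO-specific), and the difflib similarity ratio is a simple stand-in for whatever equivalence check your pipeline needs.

```python
import difflib

def call_model(prompt: str) -> str:
    """Placeholder for your real model client (SDK or HTTP call)."""
    raise NotImplementedError

def consistency_score(prompt: str, runs: int = 5) -> float:
    """Send the same prompt `runs` times and return the worst pairwise
    similarity between outputs (1.0 means every run was identical)."""
    outputs = [call_model(prompt) for _ in range(runs)]
    worst = 1.0
    for i in range(len(outputs)):
        for j in range(i + 1, len(outputs)):
            ratio = difflib.SequenceMatcher(None, outputs[i], outputs[j]).ratio()
            worst = min(worst, ratio)
    return worst

# Example: flag prompts whose repeated outputs diverge too much for a templated pipeline.
# if consistency_score("Summarize the attached contract in five bullets.") < 0.9:
#     print("Output drift detected; pin the prompt or add a normalization step.")
```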
Head-to-Head: Frontier Models
Gemini 3 Flash (Preview) is Google’s cost-efficient mid-tier pick in the ORCFLO Index. The table below compares it against the frontier models ranked immediately around it. Tier assignments are based on rank quartiles across the full 32-model field.
| Model | Quality Avg | Quality Rank | Cost Rank | Speed Rank |
|---|---|---|---|---|
| GPT 5.4 | 90.8 | #8 | #25 | #23 |
| Claude Opus 4.5 | 90.7 | #9 | #28 | #15 |
| GPT 5 Mini | 89.6 | #10 | #14 | #31 |
| Gemini 3 Flash (Preview) | 88.7 | #11 | #12 | #16 |
| Claude Opus 4.7 | 88.6 | #12 | #30 | #25 |
| Gemini 2.5 Flash | 88.4 | #13 | #7 | #18 |
| Claude Sonnet 4.6 | 88.2 | #14 | #24 | #24 |
When to Use Gemini 3 Flash (Preview)
Flash fits high-volume summarization and tightly constrained, format-sensitive workflows where per-case cost matters more than reasoning depth: its leader-tier Summarization and Instruction Following scores come at the lowest price in the top dozen. It is a weaker fit for pipelines that require identical outputs run to run (regression suites, audit trails, templated generation) given its trailing Output Consistency rank, and teams with demanding analysis or extraction workloads should expect a 3–6 point gap to the top reasoning models.
The ORCFLO Index
This evaluation covers 40 cases across 8 categories. All tasks are text-only and English-only. Code generation, multi-turn conversation, multimodal tasks, and agentic workflows are not tested. Each contestant is scored by a panel of four independent judges — Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large — with final scores averaged across all four. Cost and speed measurements reflect API pricing and latency as of the test date (May 10, 2026) and will change as providers update their offerings.
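The scoring harness itself isn't published, so the snippet below is only a sketch of the aggregation described above: four judges score every case independently and the reported figures are averages of those scores. The judge and category names come from this report; the numbers and function names are placeholders, not ORCFLO data.

```python
from statistics import mean

# Hypothetical per-case scores (0-100): judge -> category -> list of case scores.
# Values are invented purely to illustrate the averaging step.
judge_scores = {
    "Gemini 2.5 Pro": {"Summarization": [96, 94], "Instruction Following": [95, 93]},
    "Claude Opus 4.7": {"Summarization": [95, 96], "Instruction Following": [94, 92]},
    "GPT 5.5": {"Summarization": [97, 93], "Instruction Following": [96, 94]},
    "Mistral Large": {"Summarization": [94, 95], "Instruction Following": [93, 95]},
}

def category_average(scores: dict, category: str) -> float:
    """Average one category across all four judges and all cases in that category."""
    return mean(mean(per_judge[category]) for per_judge in scores.values())

def overall_average(scores: dict) -> float:
    """One plausible overall rollup: the mean of the per-category averages.
    (The report does not specify whether the overall score weights by category or by case.)"""
    categories = next(iter(scores.values())).keys()
    return mean(category_average(scores, c) for c in categories)

print(round(category_average(judge_scores, "Summarization"), 1))
print(round(overall_average(judge_scores), 1))
```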
How We Test
The ORCFLO Index evaluates large language models across three independent dimensions — quality, cost, and speed — using real-world business tasks designed to expose the differences that matter for model selection. Each model is scored by a panel of four independent judges to reduce single-model bias.
- Test Cases: 40 cases across 8 categories spanning Abilities (Analysis, Extraction, Summarization, Writing), Behaviors (Hallucination, Instruction Following, Refusal Calibration), and Stability (Output Consistency).
- Judge Panel: Gemini 2.5 Pro, Claude Opus 4.7, GPT 5.5, and Mistral Large. Each judge scores independently. Final score is the average across all four.
- Scoring: quality (higher is better), cost (lower is better), and speed (faster is better) are ranked independently. No composite score — composites hide the tradeoffs that drive model-selection decisions.
- Tier Definitions: Leader (quality ≥ 90.8, ranks 1–8), Strong (≥ 85.9, ranks 9–16), Contender (≥ 80.9, ranks 17–24), Trailing (< 80.9, ranks 25–32). A minimal rank-to-tier sketch follows this list.
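Because the tiers are simple 25% quartiles of the 32-model field, the mapping from rank to tier can be written in a few lines. The helper below is my own illustration, not ORCFLO tooling:

```python
def tier_from_rank(rank: int, field_size: int = 32) -> str:
    """Map a quality rank (1 = best) to a tier by splitting the field into quartiles,
    matching the Leader/Strong/Contender/Trailing bands used in this report."""
    if not 1 <= rank <= field_size:
        raise ValueError("rank must be between 1 and field_size")
    band = field_size // 4  # 8 models per band for a 32-model field
    if rank <= band:
        return "Leader"
    if rank <= 2 * band:
        return "Strong"
    if rank <= 3 * band:
        return "Contender"
    return "Trailing"

# Spot checks against the scorecard above:
assert tier_from_rank(4) == "Leader"     # Summarization, Instruction Following
assert tier_from_rank(12) == "Strong"    # Analysis
assert tier_from_rank(26) == "Trailing"  # Output Consistency
```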