ORCFLO Stealth Series: 2 of 6

The Chat Model is Too Constraining

The chat interface is great for impromptu questions, but it just doesn't scale for multi-step problems.

February 4, 2026 - 3 min read - By ORCFLO


ORCFLO began as a passion project in 2025, consuming our nights and weekends, before becoming a real company in 2026. All of this happened in an unplanned 'stealth mode'. This series tells the back-story of our journey.

Once we committed to building something new, we needed to get specific about what was actually wrong with the chat model. It wasn't enough to say "it's limiting" - we had to articulate exactly where those limits were so we could design something better.

We started by cataloging our frustrations. Two stood out immediately.

The Transparency Problem

When you ask ChatGPT or Claude a complex question, something happens in the background. Tokens are processed, context is evaluated, reasoning occurs. But from where we sit? It's a black box. We type a question, watch a cursor blink, and eventually an answer appears.

It feels a bit like talking to the Wizard of Oz. There's a lot of drama and impressive output, but we have no idea what's actually happening behind the curtain.

For simple questions, that's fine. We don't need to understand the mechanics of how Claude knows the capital of France. But when we're asking big, consequential questions - should we invest in this company? Is this candidate right for the role? What's the risk in this contract? - the opacity becomes a problem. It's hard to trust an answer when you can't see how it was produced. Did the AI consider the factors we care about? Did it make assumptions we'd disagree with? We simply don't know.

What we wanted was transparency: the ability to define a series of specific steps for solving a problem, and a window into each of those steps. We needed to monitor and approve the intermediate steps to have confidence in the final output.

The Tedium Problem

Here's the thing about chat interfaces: they're interactive. That's the point. You ask, it answers, you follow up, it responds. It's a conversation. But conversations don't scale.

When we figured out a good approach to analyzing a company - a sequence of questions that reliably surfaced the information we needed - we wanted to just run that sequence automatically. Instead, we had to type step one, wait for the response, review it, type step two, wait again, review again. Over and over.

Let's be honest: this is a first-world problem. Having an intelligent agent complete complex analytical work, even if we have to babysit it, is remarkable. A few years ago, this wasn't possible at any price.

But still. These are computers. They do things by themselves all the time. Our email filters spam automatically (most of the time). Our calendars send reminders without being asked. Why should AI be different? Why can't we define a workflow once and run it whenever needed?

What we wanted was orchestration. The ability to string together a series of tasks, start them running, and come back to see the results. Not step-by-step hand-holding, but something closer to setting a process in motion and letting it work.

We had two clear limitations on our list. But we weren't done yet. We kept finding more, as we'll explore in the next post.

NEXT UP:
Step 3: The Chat Model is Too Constraining (continued)

Ready to automate your work?

Make your first workflow in minutes - no technical skill needed.

No credit card required
Free trial
Cancel anytime