Most AI Teams Ship Confidently Into the Void
Prototyping as Discovery: Treating Problem Understanding Like an Asset
#beyondAI
There’s a quiet assumption in most tech teams that feels so natural we rarely stop to question it:
Code is considered an asset. Problem understanding is not.
That single mental model — largely unspoken — silently shapes the way AI products are built. It influences what gets prioritized, who gets funded, how progress is measured, and what success looks like.
And more often than not, it’s also the reason why AI products miss the mark — not because the code was poorly written, or the model undertrained, but because we built something scalable and sophisticated… that nobody actually needed.
When Code Becomes the Hero — and Discovery Gets Forgotten
In most organizations, especially those driven by delivery milestones, code is treated as proof of progress. It’s visible. It’s documented. It’s reusable. It shows up in sprint reviews, gets archived in Git, and lives on in the roadmap. People get excited about it because it feels like something you can touch, something you can show — an asset that endures.
This perception is deeply ingrained in how product and engineering teams operate. Teams celebrate pull requests and production pushes. Roadmaps are mapped in epics and features. Burn-down charts show velocity, and demos show outcomes. We build. We deliver. We optimize.
But if we zoom out for a second, we have to ask:
Build what? Deliver why? Optimize toward what pain?
That’s where things start to unravel — and where discovery enters the picture.
The Invisible Work of Problem Understanding
Discovery is where everything starts.
But in most companies, it’s not treated like part of the real work.
It’s often a prelude, something done at the beginning of a project and then left behind. A few user interviews. A canvas workshop. Some sticky notes in Miro. Maybe even a thorough slide deck with pain points and opportunity spaces.
And then? It vanishes.
Even when discovery is done well, it rarely gets maintained, versioned, or reused. It doesn’t live inside the product backlog or inform quarterly OKRs. It’s not tracked like code, logged like bugs, or celebrated like a successful deployment.
That’s the real risk.
If discovery isn’t seen as an asset —
it doesn’t get time.
It doesn’t get attention.
It doesn’t get the company’s best minds.
And in AI Product Management, that’s not just unfortunate — it’s dangerous.
The Supposed Forgiveness of Software Development
People like to say that in classic software development, you can get away with unclear discovery.
You ship a feature, see what happens, tweak it, and eventually land on something usable.
The cycle is iterative. The stakes are lower. It’s all very forgiving.
It’s not forgiving. It’s just familiar failure.
We’ve normalized teams shipping into the dark and hoping that agile rituals will save them later.
And most of the time, they don't.
Unclear discovery in software leads to the same thing it does in AI:
Wasted resources, lost time, and user problems that remain unsolved.
The difference?
AI is more expensive.
• The cost of experimentation is higher.
• The time to validate is longer.
• And the risk of eroding trust — through wrong answers, hallucinations, or unfair behavior — is much harder to recover from.
So no, the point isn't simply that AI is worse software.
It's that the price of misunderstanding the problem is paid up front, and in full.
AI is more data-dependent, less predictable, and harder to course-correct once built.
You can’t always iterate your way out of a bad starting point.
You can’t just refactor your prompts and magically land in product-market fit.
And you definitely can’t backtest your way into solving a real human problem if you didn’t deeply understand the problem in the first place.
This is why so many AI teams ship confidently into the void.
They’re moving fast. They’re technically capable.
But their code is built on sand — assumptions that were never validated, pain points that were never fully understood, users that were never truly involved.
The result?
A beautiful solution to the wrong problem.
Why the Code Survives — Even When the Product Fails
The irony in most failed AI products is that the code survives.
It lives on in version control. It gets reused in other experiments. It becomes a library, a model, a reference.
But the discovery — the messy, human understanding that should’ve guided the build — is lost.
And that’s why I started asking myself:
What if we could treat discovery like we treat code?
What if problem understanding was also seen as an asset — not just a phase?
I didn’t want to fight the delivery mindset.
I wanted to work with it.
And that’s how I landed on a shift in thinking I now call:
Prototyping as Discovery
It's not a new idea, but it hasn't spread widely in the AI dev space.
Most people think of prototyping as a way to test a technical solution, not a user solution.
But what if the goal wasn’t to test your build — but to explore your understanding?
Prototyping as Discovery is a mindset shift.
It means building not to ship, but to learn. And yes — it goes by many names.
It’s about treating early product increments as strategic probes — ways to uncover real user behavior, real constraints, real patterns in data and usage — not just to validate assumptions, but to uncover the ones we didn’t know we had.
It’s a way of embedding discovery inside delivery.
Not as a box to tick before dev starts, but as an ongoing process that grows alongside the code.
You discover while you build.
You build while you discover.
And both outcomes — the code and the insight — become valuable assets.
What It Looks Like in Practice
You don’t need to restructure your team or get buy-in for a whole new methodology. You just need to start treating early cycles as insight engines.
Here’s one way I’ve framed it:
• 2 weeks of focused discovery: interviews, workshops, pain point mapping, data landscape review
• 6 weeks of dev: small prototype that targets the most promising problem with the lowest fidelity possible
• 2 weeks of follow-up discovery: observe what happened, run validation sessions, collect behavioral data
• 6 more weeks of dev: build on the insights and iterate toward a real solution
Each phase feeds the other.
Each step produces both code and context.
The understanding is just as valuable as the functionality.
Over time, this approach turns your discovery from a one-time effort into a continuously compounding asset.
Many will recognize this as Continuous Discovery, a practice popularized by Teresa Torres.
Making Discovery Look Like an Asset
In many organizations, the challenge isn’t that people don’t believe in discovery — it’s that they don’t see it.
They don’t see it in dashboards.
They don’t see it in team reviews.
They don’t see it in the OKRs or the sprint metrics.
So part of the work is making discovery visible.
That might mean:
• Keeping a living repository of insights and opportunity spaces
• Capturing key learnings from prototypes as artifacts, not just lessons
• Measuring insight velocity alongside code velocity
• Including discovery-driven pivots in stakeholder updates
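To make "insight velocity" concrete, here is one minimal sketch in Python. Everything in it is illustrative: the `Insight` fields, the example entries, and the counting rule are assumptions about what a living insight log could look like, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical insight log entry; field names are illustrative, not a standard.
@dataclass
class Insight:
    summary: str      # one-line learning, e.g. "users distrust auto-replies"
    source: str       # where it came from: prototype, interview, usage data...
    captured: date    # when the learning was logged
    validated: bool   # confirmed beyond a single signal?

def insight_velocity(log: list[Insight], start: date, end: date) -> int:
    """Count validated insights captured in a cycle,
    analogous to counting merged PRs for code velocity."""
    return sum(
        1 for i in log
        if i.validated and start <= i.captured <= end
    )

# Example log for one eight-week cycle (fabricated illustrative entries).
log = [
    Insight("Users distrust fully automated replies", "prototype", date(2024, 3, 4), True),
    Insight("Ops team re-checks every AI suggestion", "interview", date(2024, 3, 11), True),
    Insight("Usage drops sharply on weekends", "usage data", date(2024, 3, 16), False),
]

print(insight_velocity(log, date(2024, 3, 1), date(2024, 3, 31)))
```

The point of the sketch is not the code itself but the framing: once learnings are structured artifacts rather than workshop memories, they can sit on the same dashboards as sprint metrics.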
The more you surface these outcomes, the more credibility discovery builds — not as an idea, but as an investment.
Something worth time. Worth talent. Worth treating like code. And worth sharing across the entire company.
Why This Matters Now — Especially for AI
As AI continues to evolve, the space between technical capability and real-world impact is widening.
We have tools that can generate, classify, summarize, translate — almost instantly.
But what we often don’t have is clarity on what problems are truly worth solving.
When the tech gets easier to build, the temptation to skip discovery grows.
When everyone’s focused on prompts and models, understanding becomes the forgotten frontier.
That’s why this shift — from discovery as a phase to discovery as a product companion — is so critical.
Because in AI, it’s not enough to have working code.
We Need Working Insight
AI products don’t live or die by model performance alone.
They succeed when they solve something real.
They scale when they’re trusted.
They endure when they’re rooted in real understanding of a problem space, a user behavior, or a business gap.
And that understanding doesn’t happen by accident.
So maybe the real question isn’t:
“How do we get people to care about discovery?”
Maybe it’s:
“How do we make discovery look like an asset —
and deliver like one?”
That’s the shift I believe we need.
Not more discovery slides. Not more workshops.
But more problem understanding — embedded in the way we build.
And more building — designed to surface real insight.
So the next time someone asks you to ship quickly,
ask them what you’re shipping toward.
And if the answer is unclear?
Build something small.
Pointed.
Probing.
And let the discovery begin.
JBK 🕊️