
Execution Is Cheap. Judgment Isn't: AI Agents and the Collapse of the CTO/CPO Divide

Published: at 10:00 AM

Last week, my team shipped the first version of an internal AI agent framework we built in two weeks. The kind of infrastructure that would have taken us three to four months a year ago.

My first reaction wasn’t satisfaction. It was a quiet, uncomfortable question: if we can build that fast, what’s the bottleneck now?

The answer forced me to rethink something I thought was settled.


The CTO/CPO split was a solution to a different problem

For most of the last decade, building software was the scarce resource. Large teams, long cycles, significant capital. Architecture decisions were expensive to reverse. Product bets had feedback loops measured in quarters. In that world, separating engineering leadership from product leadership made sense. One role optimized how. The other optimized what and why. The separation wasn’t dysfunction. It was an honest response to the cost structure of software.

AI is dismantling that cost structure.

Agents can now scaffold production-ready systems in hours. Prototyping cycles that once took weeks take days. When execution becomes abundant, the bottleneck doesn’t disappear. It shifts upward. To prioritization. To problem definition. To judgment about what’s worth building at all.

Here’s the part that stings: AI doesn’t compress the cost of building the wrong thing. It amplifies it. When your team ships faster than ever, poor direction compounds faster than ever. Speed of execution exposes the weakness of judgment.

In that environment, the most important architecture in a company is not the system’s codebase. It is the system of decisions.


When the bottleneck shifts, the org chart should too

When we built that agent framework in two weeks, the immediate win was obvious. But what became clear almost immediately was that the hard problem had simply moved. Now the question wasn’t “can we build this?” It was “what should this framework actually do, for which workflows, under what constraints, and what does it mean if it gets something wrong?”

That last part matters especially in compliance work. In a GRC (governance, risk, and compliance) context, an agent with too much autonomy creates regulatory exposure. Too little autonomy and you’ve built expensive automation that still requires constant hand-holding, which destroys the value proposition entirely.
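To make that tradeoff concrete, here’s a minimal sketch of the kind of autonomy gate it implies. The names, thresholds, and the regulated flag are illustrative assumptions, not our production logic:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    AUTO_APPLY = "auto_apply"  # agent acts on its own
    SUGGEST = "suggest"        # agent proposes, a human confirms
    ESCALATE = "escalate"      # route to a reviewer

@dataclass
class AgentProposal:
    description: str
    confidence: float  # model's self-reported confidence, 0..1 (illustrative)
    regulated: bool    # touches audit-relevant data? (illustrative flag)

def autonomy_gate(p: AgentProposal) -> Action:
    """Decide how much autonomy the agent gets for one proposal."""
    if p.regulated:
        # Anything with regulatory exposure always keeps a human in the loop.
        return Action.SUGGEST if p.confidence >= 0.9 else Action.ESCALATE
    # Low-stakes work can be automated when the agent is confident,
    # which is what keeps the feature from becoming expensive hand-holding.
    return Action.AUTO_APPLY if p.confidence >= 0.8 else Action.SUGGEST
```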

I faced this directly when we were designing a suggestion feature that uses retrieval-augmented generation. The core technical decision was whether to optimize for recall or precision. Recall means the model surfaces more candidates, at the risk of including false positives. Precision means it surfaces fewer, but what it surfaces it’s confident about.

In most contexts, recall is the reasonable default. You can always filter downstream. But this feature was surfacing compliance suggestions to enterprise teams where a false positive isn’t just noise. It’s a risk signal that erodes trust, and in regulated environments, eroded trust is very hard to recover. In this context, precision wasn’t a preference. It was a strategic necessity.
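In code, the difference is often as small as a threshold and a cap. A minimal sketch of the precision-first choice, where the threshold, cap, and names are illustrative rather than our actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    suggestion_id: str
    score: float  # retrieval similarity, normalized to 0..1

def select_for_precision(candidates: list[Candidate],
                         threshold: float = 0.85,
                         max_results: int = 3) -> list[Candidate]:
    """Surface only candidates the system is confident about.

    A recall-oriented version would lower the threshold and raise the
    cap, accepting false positives in exchange for coverage.
    """
    confident = [c for c in candidates if c.score >= threshold]
    # Fewer, higher-confidence suggestions: each false positive shown to a
    # compliance team is a trust-eroding risk signal, so we cap the list.
    return sorted(confident, key=lambda c: c.score, reverse=True)[:max_results]
```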

That decision required technical knowledge to understand the retrieval tradeoff, and product instinct to understand what a false positive actually costs the user. Neither alone was enough. It required holding both mental models at the same time: the retrieval tradeoff, and the user’s perception of risk.

This is the pattern I keep running into. The decisions that matter most in an AI-native product aren’t cleanly engineering or cleanly product. They’re both, at the same time.


Build the car, not the faster horse

Here’s a different kind of example, one that has nothing to do with AI but everything to do with why the CPTO model matters.

For a long time, users of our platform were asking for the ability to add generic references between entities. The request came up repeatedly. It was consistent. It had strong signal.

It was also technically easy. A generic “add reference” link between any two entities in the system. Think Jira letting you link a Task to an Epic, or a Story to a Bug, with no enforced meaning. We could have shipped it in a week.

Something kept feeling wrong. I sat with it for a while and ran several discussions and brainstorms. The problem with generic references is that they carry no semantics. They tell you that two things are related, but not how or why. In a compliance platform, where the entire value lies in understanding the relationships between controls, risks, frameworks, and evidence, a relationship without meaning is almost worse than no relationship at all. Users would link things and then not know what to do with those links.

What we built instead was a system of typed relationships. Every connection between two entities has a defined meaning, a context, and business logic attached to it. It takes more thought to design and more effort to build. It also actually solves the problem.
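To show the difference in shape, here’s a minimal sketch of what “typed” means here. The relationship types and entity kinds are illustrative, not our actual schema. A generic reference is just two IDs; a typed relationship constrains what can connect to what and carries meaning the product can act on:

```python
from dataclasses import dataclass
from enum import Enum

class RelationType(Enum):
    MITIGATES = "mitigates"  # control -> risk
    EVIDENCES = "evidences"  # evidence -> control
    MAPS_TO = "maps_to"      # control -> framework requirement

# Which entity kinds each relationship type may connect (illustrative).
ALLOWED = {
    RelationType.MITIGATES: ("control", "risk"),
    RelationType.EVIDENCES: ("evidence", "control"),
    RelationType.MAPS_TO: ("control", "requirement"),
}

@dataclass
class Relationship:
    source_id: str
    source_kind: str
    target_id: str
    target_kind: str
    rel_type: RelationType

def link(rel: Relationship) -> Relationship:
    """Reject links that carry no valid meaning, unlike a generic reference."""
    expected = ALLOWED[rel.rel_type]
    if (rel.source_kind, rel.target_kind) != expected:
        raise ValueError(
            f"{rel.rel_type.value} must connect {expected[0]} -> {expected[1]}"
        )
    return rel
```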

Users were asking for a faster horse. They needed a car. The engineer in me knew we could build the horse quickly. The product side knew the horse was the wrong answer. The only reason we didn’t ship the wrong thing was that those two instincts were in the same person at the moment of the decision.


This was not a title transition for me

I want to be careful here, because “CPTO” can sound like a prestige move, two roles stacked into one title. That’s not what I mean, and it’s not how I got here.

When I was running meuingresso.com in Brazil, there was no product team and no engineering team. There was survival. Every onboarding decision affected conversion rates. Every infrastructure choice affected whether the platform held up during events with 15,000 attendees. Pricing, UX, performance, and architecture were the same problem from different angles. I didn’t separate them because I couldn’t afford to.

When I moved to Berlin to lead engineering teams, I went deep into technical leadership: zero-to-one builds, scaling teams, architectural decisions under real uncertainty. I loved it. But I was never satisfied with “can we build this?” as the terminal question. The questions I couldn’t let go of were: will this change user behavior? Does this create defensibility? What does this make impossible for us later?

When I eventually settled fully into a CPTO role, the title caught up to how I was already operating. The shift wasn’t mental. It was structural.

The real job became designing leverage. Deciding what not to build. Connecting user feedback to architectural constraints. Building AI capabilities that amplify teams rather than just automating their existing workflows.


What this means if you’re building now

The CTO/CPO divide made sense when software was primarily an execution challenge. That world is shrinking fast for companies building AI-native products.

For founders: if you’re early enough to shape the org, resist the instinct to separate these concerns too soon. The decisions AI forces you to make don’t respect clean organizational boundaries: how much autonomy to grant, what the system does when it’s uncertain, what a wrong answer costs the user.

For engineering leaders: the transition isn’t about learning to do roadmaps. It’s about internalizing that technical decisions are business decisions, and asking both questions before making either call. You’re probably already doing this informally. The question is whether you’re doing it loudly enough, and whether the organization is structured to let you.

For product leaders: understanding what agents can and can’t do reliably in production, what failure modes look like at scale, and what “autonomous” actually costs in infrastructure and oversight is no longer optional domain knowledge. It shapes what you can promise and what you should ship.

The companies that will move fastest aren’t the ones with the most capable agents. They’re the ones where the gap between judgment and execution is nearly zero.

Execution is increasingly automated. The judgment about what to execute is not.

That’s where leadership lives now.


I’m CPTO at Complyance, building AI-first compliance software. I also build autonomous agents on the side for personal automation and learning. Wrote about some of that here. Find me on LinkedIn or X.

