What happens when everyone is a builder? What happens when the lines between roles like marketing, design, product management and software development blur, and everyone is now pushing code?

Two ways to think about it

I keep landing on two poles. I’m not sure either is the right answer, but both are instructive.

The first pole is constraint-driven:

  • Define safe zones in the architecture.
  • Scope agent tools to specific directories.
  • Sandbox the “non-software-development” builders so they can’t touch the things that “could break the system”.
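A minimal sketch of the directory-scoping idea: a guard an agent tool wrapper might call before writing a file. All the names and paths here are illustrative, not any specific agent framework:

```python
from pathlib import Path

# Hypothetical "safe zones": directories a non-engineering builder's
# agent is allowed to modify. Everything else is off limits.
SAFE_ZONES = [Path("content"), Path("experiments"), Path("flags")]

def is_allowed_path(target: str, safe_zones=SAFE_ZONES) -> bool:
    """Return True only if target resolves inside a safe zone.

    Resolving first defends against path tricks like
    'experiments/../src/payments.py'.
    """
    resolved = Path(target).resolve()
    return any(resolved.is_relative_to(zone.resolve()) for zone in safe_zones)
```

The interesting part isn’t the check itself, it’s deciding which directories go in the list, which is exactly where the “safe zones must intersect with high-value work” problem shows up.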

This is the instinct most engineering leaders have, and it’s not wrong, it’s just incomplete. The safe zones have to intersect with areas where high-value work actually happens. If you’re sending people out to “play in the sandbox” (so to speak), you’ve created a very safe way to do busywork. That’s not the point.

The second pole is what I’ll call the YOLO approach, or “damn the torpedoes”. It sounds reckless, but I’ve been preoccupied with the idea that it isn’t.

  • You let everyone build.
  • You invest your engineering energy not in gates and reviews but in system resilience.
    • post-deploy detection
    • automated feedback loops
    • PR and style normalization
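To make “post-deploy detection” concrete, here’s a toy sketch of one resilience primitive: compare an error-rate window after a deploy against the window before it, and flag regressions for rollback. The thresholds and window sizes are illustrative, not recommendations:

```python
from statistics import mean

def should_roll_back(before: list[float], after: list[float],
                     max_ratio: float = 2.0, floor: float = 0.001) -> bool:
    """Flag a deploy if the error rate roughly doubles post-deploy.

    `floor` keeps a near-zero baseline from turning noise into a page.
    """
    baseline = max(mean(before), floor)
    return mean(after) / baseline > max_ratio
```

The point of the YOLO pole is that engineering effort goes into checks like this, running continuously in production, instead of into a human approving every change up front.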

You keep the money coming in and the system running, and you treat it all as a coordination problem, not a permissioning problem.

Most companies I talk to are already closer to this than they’d admit, and the systems aren’t falling over.

Review loops don’t scale

Here’s the thing nobody wants to hear: every review slows you down.

If the whole point of enabling non-technical builders is speed, and the point of speed is the possibility of more revenue, then routing everything through an engineer for review defeats the purpose. Your engineers are already reviewing each other’s work.

Add 10x more PRs from non-engineers, and they’re not building anymore, they’re reviewing.

Plus, the feedback they’re giving doesn’t even land the same way, because they’re talking to someone who owns the concept, not the code. The person who wrote the ticket can’t necessarily action a terse code review comment. They’ll just route it back through the agent, which may or may not converge.

So either review becomes fully automated (agents checking agents, standards as code, deterministic enforcement), or you accept that the fail-safe isn’t at the review stage. It’s in the system itself.

For example, monitoring. Alerting on behavioral changes. Catching stuff in staging (or production) instead of in a PR comment thread.
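“Standards as code” can be as simple as a deterministic rule runner gating every change with no human in the loop. A toy sketch, with purely illustrative rules:

```python
import re

# Machine-checkable rules. Each is a (name, pattern) pair; a change that
# matches any pattern is blocked automatically. Rules are illustrative.
RULES = [
    ("no TODOs left behind", re.compile(r"\bTODO\b")),
    ("no print-debugging", re.compile(r"^\s*print\(", re.MULTILINE)),
    ("no hardcoded secrets", re.compile(r"(api_key|password)\s*=\s*['\"]")),
]

def check_diff(diff_text: str) -> list[str]:
    """Return the names of every rule the change violates."""
    return [name for name, pattern in RULES if pattern.search(diff_text)]
```

Because the rules are deterministic, the feedback is the same whether the author is a staff engineer or a marketer driving an agent, which is exactly the property a terse human review comment lacks.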

Builder experience

I’ve been using this term for a while: “builder experience”. It’s developer experience, but the builders aren’t just developers.

What does a product manager need to know to build a feature end-to-end with an AI agent? Not how the backend works. Not the data model. Not the deployment pipeline.

They need to know the application and the market:

  • How it’s organized for users
  • What behaviors exist
  • What the hierarchy looks like
  • Their instinct for how to make money from this product with this audience

That’s their domain. That’s what makes them good at their job.

The agent handles the translation. The PM says “I want an A/B test on the onboarding flow.” The agent maps that to components, routes, feature flags, whatever the architecture requires. The PM doesn’t need a mental model of the architecture any more than I need to deeply understand the transmission to drive my car.
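What the agent’s translation step might emit for that request: a flag definition plus a deterministic bucketing function, so the PM never sees the routing or component layer. A hypothetical sketch (the flag name and rollout shape are made up):

```python
import hashlib

# Hypothetical flag the agent creates from "A/B test the onboarding flow".
FLAG = {"name": "onboarding_v2", "rollout_pct": 50}

def variant_for(user_id: str, flag: dict = FLAG) -> str:
    """Deterministically bucket a user into control or treatment.

    Hashing flag name + user id keeps assignment stable across sessions
    and independent across experiments.
    """
    digest = hashlib.sha256(f"{flag['name']}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "treatment" if bucket < flag["rollout_pct"] else "control"
```

The PM reasons about the flag (“50% of users see the new onboarding”); the hashing and wiring stay below their waterline, like the transmission.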

And here’s the harder part: forcing builders to establish that mental model is probably counterproductive.

As you go deeper into technical abstraction, non-technical people will form divergent, inconsistent interpretations. You’ll spend more time debugging the resulting miscommunication than you save by “aligning” everyone.

This implies that the engineering leadership job shifts. You’re not gatekeeping anymore. You’re building the platform that makes it easy for someone to stay in their domain and still ship something real. Clear boundaries, good tooling, agents that know the codebase well enough to translate between the product layer and the ugly underneath.

That’s builder experience. The companies that get it right are going to feel like great places to work for everyone, not just the engineers.

What breaks at 100x?

Here’s another question I don’t have an answer to.

What if this all works?

What if your team of 15 starts shipping at the rate of a team of 150? What does your codebase look like? What does your deployment pipeline handle? What happens when hundreds of agents are operating on the same source of truth, and that source of truth is changing constantly?

I’ve seen this in high-scale products with over 50M DAU: the trivial corner case you see once a quarter becomes a daily problem. Scale problems happen inside systems as well as outside them, in how (and how many) people interact with them.

The coordination problem (agents maintaining coherent context across a rapidly changing codebase) is unsolved. This is what Steve Yegge is pointing at when he talks about agent orchestration. It’s only somewhat about making agents smarter; it’s really about making a hundred of them coherent.

For a 5-million-dollar company, the question is: can you punch above your weight class? Can 5 people ship like 50?

For a 50-million-dollar company, the question is scarier: do you embrace these technologies and try to handle where it leads, or do you resist?

Both states are uncomfortable.

What are you optimizing for?

Everything upstream from distribution is heading toward commoditization. Speed to build. Quality of code. Depth of features. If everyone (hypothetically) has access to the same infinite pool of agents and tokens, then the only thing that differentiates you is how fast you can get in front of people and capture the market.

So the question isn’t whether to invest in constraints or resilience. It’s whether your engineering investment is accelerating distribution or decelerating it. If your review process, your architectural boundaries, or your permissions model is slowing you down more than it’s protecting you, it’s worth asking whether the protection is real or just comfortable.

I don’t have a clean answer, and I don’t think anyone does yet. But I know the companies that are asking the question honestly are going to adapt faster than the ones pushing performative certainty about a playbook that doesn’t exist.

We’re all figuring this out; the key is that we’re all honest about it.