Pablo Rivera Conde

Capability vs. Judgment: Use AI Right

Ian Malcolm had it right in Jurassic Park:

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

That’s the story of AI adoption in software engineering right now.

The 1000x Myth

Walk through any tech community and you’ll hear the claims: teams shipping 100x, even 1000x faster with AI. Three-month projects now done in weeks. It sounds incredible. And to be fair, it’s technically possible.

But that’s my hot take: most of those claims are incomplete.

What’s being shipped at that velocity is technical debt, security shortcuts, and architectural decisions optimized for speed, not longevity. The hard part of software engineering was never writing code; it’s knowing how code should be written: understanding design patterns, maintainability, scalability, and the trade-offs between velocity and durability. Those hard-won lessons are what separate senior engineers from the rest.

Where I’m Seeing AI Win

I’m currently architecting a project that would take 2-3 months of human-crafted development to implement properly. I haven’t started writing the code yet, but I know the development will be done in one week.

I’m using Claude to help me organize architectural concepts, validate ideas against different patterns, and generate comprehensive ticket lists. I could have accepted one of the first versions Claude provided, as it correctly encapsulated what the system needs to do, and it would have worked. But “works” isn’t the bar for projects built to scale.

This is where experience matters. I iterate: I expose flaws in the initial design, rewrite sections, and push back on the AI’s assumptions. I know what I want and how it should be done. I accept the AI’s ideas when they make sense, but every decision is put to the test.

Now I know I have a system designed to be maintained for years, and it will still be delivered fast. There is no dichotomy between fast and architecturally strong. If that rigor costs an extra 4-6 days, it’s entirely worth it.

The Real Multiplier

People claim: “We detect issues fast and fix them with AI in seconds.” Sure. But your customers aren’t paying for 80% uptime; they’re buying reliability. The reputational cost of constant firefighting compounds, even when each fire is put out quickly.

The actual multiplier isn’t speed. It’s speed with quality. And that only happens with judgment.

In my own work with Claude, I’ve noticed something: the code quality I’m producing now exceeds what I wrote before AI. Not because the AI writes better code than I do; I’ve simply eliminated the busywork.

I’m not writing boilerplate and then cleaning it up. I’m investing that reclaimed time into architecture, design decisions, and edge cases. I ask a thousand questions about how things should look. And once the technical decisions are clear, the AI-written code comes out at high quality, because all the rationale is already locked in from that earlier phase. The development itself becomes trivial.

That only works because I don’t trust the AI to make those decisions alone.

What This Means for Your Engineering

The next time you see someone claim 1000x productivity gains, ask yourself: at what cost?

Real leverage with AI comes from knowing when to use it and when not to; when to trust its output and when to replace it. That’s not a skill junior engineers have yet. It’s what principal engineers develop through years of shipping systems that have to keep working, not just work once.

The question isn’t: “How fast can AI make me?”

The question is: “How can I use AI to make better decisions, faster, while keeping ownership and maintaining the quality that separates engineering from chaos?”

That’s where the real 1000x is hiding.

#genai #productivity #technical-debt #software-quality #engineering-culture
