Pablo Rivera Conde

Waterfall Was Right About One Thing

Lately I’ve been thinking about how AI fits into the whole product development workflow: how it can be a valuable step while engineers remain accountable for every line of code, without turning humans into a bottleneck. And I found an interesting experiment to run.

I was assigned to break a new project into actionable tickets. In the previous project, I had extracted tickets the way we usually do — small, easily reviewable pieces. At the same time, the team was leveraging AI in development. The result was a code-review bottleneck and a feature that felt mostly done but was scattered across multiple PRs. For this new project, I decided to take a different approach: make tickets a bit bigger, split not by atomicity but by semantic area. I started iterating with Claude on the tickets and their descriptions, and within minutes I realized I was, in fact, elaborating full plans for each ticket.

During this phase I found myself spotting edge cases, fixing potential bugs before they could occur, and reworking the overall architecture while those changes were as cheap as possible (there was no code at all!). So I decided to keep going.

All of a sudden, I saw it. I was doing Waterfall planning. I was already creating extensive development plans for a team to execute; only in this case, the assignee would be Claude.

Waterfall

I extracted several tickets from the original spec. I let Claude give me an initial draft. Meanwhile, I reviewed the code and drafted my own high-level plan to compare against Claude’s. Once the plans were written in markdown, I started reviewing them file by file. Each decision was weighed against how I would approach it, and evaluated for its implications on overall architecture, maintainability, and future extensibility. I asked for alternative approaches, made updates to the plan, rinse and repeat.

Once I finished a file, I started the next one. In some cases a use case made me rethink a decision, so I went back to the previous tickets and re-reviewed everything. I left almost no room for the AI to hallucinate: vital implementation details were extensively specified, from edge cases to function signatures, to ensure development followed my expectations. Once it was done, I uploaded everything to the ticket management system. It came to about 1,200 lines of Markdown (with each paragraph on a single line).
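As an illustration of the shape such a plan might take — the ticket name, file paths, signatures, and checklist items below are invented, not from the actual project:

```markdown
## Ticket: Export service — report generation

### Context
Reports are built server-side and streamed to the client.
This ticket owns the whole "export" semantic area.

### Implementation details
- Add `build_report(filters: ReportFilters) -> ReportDocument`
  to `exports/service.py`.
- Edge case: an empty result set must still produce a valid
  document with a "no data" section, not an error.
- Stream rows instead of buffering to keep memory bounded.

### Verification checklist
- [ ] Export with no matching rows returns a valid document.
- [ ] Large exports complete without unbounded memory growth.
```

The point is that each ticket carries enough detail — signatures, edge cases, and its own checklist — that the assignee has little room to improvise.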

I’m not gonna lie, the process felt dense and tiring. I needed a couple of workdays to be sure about everything.

As a bonus win, all reasoning and decisions are now persisted in the company ticket management system. If in the future we need to revisit the rationale behind any decision, it’s all there. Documentation is complete and created organically.

Development

At this point, I took care of the whole project myself. I was interested in how the AI would tackle the development, and to what extent this approach is shareable across the team — especially with individuals who might not be as familiar with the codebase as I am.

I started working on the tickets one by one and gave Claude a single instruction: “Go check ticket #123. There you will find the plan to follow. Go ahead.” Afterwards, I checked the resulting code, reviewed it myself, and tested the full feature (this is where I sped up the testing phase with throwaway scripts to make sure everything was correct).

Along the way I made adjustments myself, then continued with the next ticket.
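Those throwaway scripts don’t need to be sophisticated. Here is a minimal sketch of one, where the response shape and the invariants checked are invented for illustration, not taken from the actual project:

```python
# Hypothetical throwaway verification script. The response shape and
# the invariants below are illustrative; a real one would hit the
# feature's actual endpoint and assert its actual contract.

def check(name, condition, results):
    """Record one pass/fail entry and print it for the summary."""
    results.append((name, bool(condition)))
    print(f"{'PASS' if condition else 'FAIL'}: {name}")

def run_checks(response):
    """Assert a handful of invariants on a feature's output."""
    results = []
    check("status is ok", response.get("status") == "ok", results)
    check("total matches item count",
          response.get("total") == len(response.get("items", [])), results)
    check("items are sorted",
          response.get("items", []) == sorted(response.get("items", [])),
          results)
    return results

# Run against a stubbed response; in practice this would be a real call.
sample = {"status": "ok", "items": [1, 2, 3], "total": 3}
summary = run_checks(sample)
```

The script is disposable by design: it exists only to make the manual test pass of a single ticket fast, then gets deleted.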

Total time: 4h.

QA

Once all features were done and I had integrated the code with a colleague’s frontend, QA began. Every ticket contained a verification checklist. I retrieved all of them, built a more extensive testing list with Claude’s help and my own considerations, and started the QA phase.

The checklist had about 500 checks to validate. The number of bugs was surprisingly low. I took note of the kinds of inconsistencies that appeared; most of them could be added as project rules to avoid repeating the same issues in the future.
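For example, recurring QA findings could be distilled into a project rules file that the AI reads on every run — the rules below are hypothetical, invented to show the shape:

```markdown
<!-- Hypothetical project rules derived from recurring QA findings -->
- API endpoints always return timestamps in UTC, ISO 8601 format.
- Empty collections serialize as `[]`, never `null`.
- Every new endpoint ships with its verification checklist in the ticket.
```

Each bug class caught in QA becomes one cheap rule, so the same inconsistency shouldn’t survive a second project.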

The QA phase took about four more days, counting some spec changes made by stakeholders.

Afterthoughts

This is not a process for all projects. In this case, I know the codebase well, so I could easily spot problems in the approaches Claude suggested. Working on all the features as a whole has the benefit that both Claude and I worked with the full context of the requirements.

Sometimes, in iterative workflows, one feature leads you to a solution that later turns out to be subpar, and by then it’s often too late to refactor to a stronger one. In this case, the planning occurs all at once, avoiding situations like these.

Conclusion

I ended up very happy with this approach. We took advantage of speed gains without adding AI slop, and kept ourselves accountable for every line. I want to test it on future projects to see where it breaks.

As for integrating this into a team workflow, it depends on the people. If developers want to do the planning themselves (which is rewarding work, and I respect that), handing them a pre-built plan would feel demeaning. For engineers less familiar with the codebase, it could be a great onboarding mechanism, especially if the plan explains the architecture decisions, not just what to build. So it won’t be a universal fit, but for the right context it removes a real barrier to entry.

This process let us move fast, with control, and deliver an output that was working and already documented. This is what all software development management workflows promise, but usually the time constraint undermines the result. Now, we can have all of it in less time than before.
