The Cost of Fixing Your Tooling Just Hit Zero
While showing our CTO my latest improvements in AI usage, I mentioned how I had tested the new features I'd built. I'm always concerned with tooling improvements and removing daily frictions. I showed him the kind of scripts I ask the AI to write, and it turns out he also uses AI to generate scripts, but never for use cases like these.
These scripts are mostly used once and thrown away, so I decided to take advantage of AI to generate them. They were never meant to be committed to the project, so I valued correctness over craftsmanship.
I want to share the ideas behind them, though I can't disclose specifics. The point is not the particular problems I needed to fix, but to give you ideas and let you explore what is valuable to you.
Exploring external libraries
Checking the documentation might give you a basic sense of the possibilities from the use cases it shows, but with complex or large APIs it can get confusing, and you may hit undocumented limits if development starts from incorrect assumptions.
In my case, I needed to ensure that what we planned was possible, and understand how the product behaves in those cases. It was straightforward (a couple of prompts) to get a script that allowed me to perform all operations interactively, checking different endpoints to see how the platform handled everything.
In 30 minutes I was able to confirm all our assumptions were correct and we needed no changes to our spec.
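I can't share the actual script, but a minimal sketch of this kind of interactive explorer might look like the following. Everything here is a placeholder: the base URL, the bearer-token environment variable, and the endpoints are hypothetical, not the real platform I was testing.

```python
import json
import os
import urllib.request

BASE_URL = "https://api.example.com"        # placeholder: the API under exploration
TOKEN = os.environ.get("API_TOKEN", "")     # placeholder: auth token from the env


def parse_command(line):
    """Turn a line like 'GET /users limit=5' into (method, path, params)."""
    parts = line.split()
    method, path = parts[0].upper(), parts[1]
    params = dict(p.split("=", 1) for p in parts[2:])
    return method, path, params


def call(method, path, params):
    """Send one request to the API and return the decoded JSON response."""
    url = BASE_URL + path
    data = None
    if method in ("POST", "PUT", "PATCH"):
        data = json.dumps(params).encode()
    elif params:
        url += "?" + "&".join(f"{k}={v}" for k, v in params.items())
    req = urllib.request.Request(
        url, data=data, method=method,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def repl():
    """Tiny loop: type e.g. `GET /projects` or `POST /projects name=test`."""
    while True:
        line = input("> ").strip()
        if line in ("", "quit", "exit"):
            break
        print(json.dumps(call(*parse_command(line)), indent=2))
```

Running `repl()` is enough to poke every endpoint interactively; nothing here needs to be robust, because the script is disposable by design.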
Fast API testing
I created CRUD endpoints for some new models. They had multiple fields, and the behavior changed depending on the inputs. The endpoints weren't connected to the frontend yet, and I wanted to be sure the implementation and behavior were correct before moving on. Testing them by hand would have been cumbersome, but one prompt got me a script to curl those endpoints. Claude Code gave me a fancy script with a simple interface to interact with all of them, adding small niceties like default values.
I could ensure all endpoints behaved as expected before moving on to further iterations.
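The "defaults" niceties are the part worth stealing. A rough sketch of the idea, with a hypothetical local API and made-up model fields (`status`, `visibility`): merge sensible defaults into every payload so you only type the fields you care about.

```python
import json
import subprocess

API = "http://localhost:8000/api"  # placeholder: local dev server

# Hypothetical model fields with sensible defaults, merged into every payload
DEFAULTS = {"status": "draft", "visibility": "private"}


def build_curl(method, path, overrides=None):
    """Build a curl invocation, merging DEFAULTS with the given field overrides."""
    payload = {**DEFAULTS, **(overrides or {})}
    cmd = ["curl", "-s", "-X", method, f"{API}{path}",
           "-H", "Content-Type: application/json"]
    if method in ("POST", "PUT", "PATCH"):
        cmd += ["-d", json.dumps(payload)]
    return cmd


def run(method, path, **overrides):
    """Execute the request and return curl's stdout."""
    return subprocess.run(build_curl(method, path, overrides),
                          capture_output=True, text=True).stdout
```

With that, `run("POST", "/items", name="test")` exercises one input combination per call, and trying a dozen variants takes a minute instead of an afternoon.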
Minimizing frictions
We have a platform that authenticates via OTP. This is an awful system for QA, since you need to wait for an email to arrive; and if the features under test revolve around login/logout, it becomes a painful process (the friction potentially making thorough QA unlikely).
Here, I went a step further. I wanted a solution that:
- Was shareable with the rest of the team.
- Did not change anything on the project to avoid any risk of compromise.
- Was only targeted at the local environment (so testing in staging/production still requires the full workflow to avoid hiding issues).
In about 15 minutes I was able to build a Chrome extension that detected logins and signups and checked the OTP code against our database. The OTP code can be seen from Chrome directly and instantly, and it fills the form with a single button click.
All of a sudden, the login workflow went from anywhere between 30 seconds and 2 minutes of waiting to instant, removing the friction without touching or compromising our project. I immediately shared it with the rest of the team.
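The extension itself is tied to our stack, but the idea generalizes: a tiny localhost-only helper that reads the latest OTP straight from the development database, which the extension then queries. A sketch in Python, assuming a local SQLite dev database; the `otp_codes` table and its columns are hypothetical stand-ins for whatever your app actually stores.

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "dev.sqlite3"  # placeholder: path to the local development database


def latest_otp(conn, email):
    """Return the most recent OTP issued for this email, or None."""
    row = conn.execute(
        "SELECT code FROM otp_codes WHERE email = ? "
        "ORDER BY created_at DESC LIMIT 1",  # hypothetical table/columns
        (email,),
    ).fetchone()
    return row[0] if row else None


class OTPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /otp?email=me@example.com
        email = self.path.split("email=", 1)[-1]
        conn = sqlite3.connect(DB_PATH)
        code = latest_otp(conn, email)
        conn.close()
        self.send_response(200 if code else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"code": code}).encode())


def serve(port=8765):
    # Bind to 127.0.0.1 only: this helper must never exist outside local dev.
    HTTPServer(("127.0.0.1", port), OTPHandler).serve_forever()
```

Calling `serve()` exposes the code at `http://127.0.0.1:8765/otp?email=...`, which a browser extension (or even plain curl) can hit. Binding to localhost and keeping the helper out of the project repo satisfies the three constraints above without touching production code.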
AI lowers the cost threshold for tooling, whether throwaway or reusable. Problems that weren’t worth solving before now are. The skill is developing the instinct to recognize when a script changes the shape of a problem, or when a friction is worth removing for the team. I wrote recently about this instinct.
I’ve been rearranging my tooling since I switched to terminal tools and Vim more than 10 years ago. Back then, fixing a plugin incompatibility could cost several afternoons. Today, AI handles it in minutes. The threshold for what’s worth fixing or building has shifted entirely.