AI Development: The Engineer's Playbook
You’ve used AI to autocomplete a function. Maybe you’ve asked it to write a unit test or explain some legacy code. That’s useful, but it’s like using a Formula 1 car to do the school run. You’re barely touching what’s possible.
This article is about building a complete, working system with AI. Not getting code suggestions, but using it to spec, design, build, and test from start to finish. And doing it properly.
Pick the right project
Start with something from the internal tools backlog. The configuration editor that currently requires manual SQL. The admin dashboard that would save the ops team half a day a week. Something genuine, something useful, something where the stakes are low enough that you can experiment without pressure.
If your CTO or engineering lead has read the companion article, they may already have something in mind.
Don’t just prompt. Engineer.
Here’s where most people go wrong. They open their IDE, fire off a quick prompt, get some code back, and start editing it by hand. That’s the old workflow with a thin AI layer on top. It misses the point entirely.
Instead, treat this the way you’d treat any proper engineering project. Because that’s what it is.
Start with a brief. Brainstorm the product requirements with the LLM. Describe the problem you’re solving, who it’s for, what they need. Let it ask clarifying questions. Iterate. You’ll end up with a better PRD than most human-only efforts, because the AI will probe edge cases and assumptions you hadn’t considered.
Get mockups. Ask it to sketch wireframes or describe the UI in detail. Iterate on those. “Make the filter panel collapsible.” “Add a status indicator to each row.” This is cheap exploration that saves expensive rework later.
Write an architecture decision document. Before a line of code is written, discuss the tech stack, the data model, the API structure. This isn’t over-engineering. It’s the minimum you’d expect from a competent developer before they start building.
Spend real time in plan mode. This is worth emphasising. You could throw the whole brief at the AI and let it one-shot everything. It might even produce something passable. But you wouldn’t do that with a junior engineer. You’d review the plan, discuss the approach, flag potential issues before they become expensive to fix. AI is capable, tireless, and it can juggle more context than most humans, but it’s still junior in its judgement. Treat it accordingly. Build up a proper PRD and an architectural decision record before you let it write a line of production code.
Use the right tools
Don’t do this in your IDE. Use a standalone tool: Claude Code’s desktop app, the command line, or something equivalent. Something that can interact with your git repo, create branches, run your test suite, and manage the full development workflow.
Why? Because you want the AI operating as an engineer, not as an autocomplete engine. It should be committing code, running tests, fixing failures, and iterating. The same loop a developer follows, just faster.
Work conversationally
This is the hardest habit to break. When you see code you don’t like, your instinct will be to open the file and fix it yourself. Resist.
Instead, say what you want changed and why. “I don’t like the caching pattern you’ve used; suggest something better” is far more effective than “Rip out lines 25-67 and replace with this.” The first approach lets the AI understand your reasoning and apply it consistently across the codebase. The second turns you into the bottleneck.
You’re the architect, not the bricklayer. Review everything, understand everything, but direct the work through conversation.
Let it surprise you
One of the genuinely satisfying parts of this process is that AI will suggest features you didn’t ask for. Not hallucinated nonsense; practical improvements that a thoughtful developer would recommend if they had the headspace.
I recently watched someone build a configuration editor. They asked for a paginated list of configuration entries with the ability to edit values. The AI came back with that, plus filtering by category, input validation based on the data type of each field, and an audit log showing who changed what and when. All sensible. All things that would have ended up in a version 2.0 specification, if anyone had ever written one.
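To give a flavour of the "input validation based on the data type of each field" idea, here is a minimal sketch. The function name and the type scheme are hypothetical illustrations, not the actual generated code:

```python
# Hypothetical sketch of type-aware validation for a config editor.
# Field types ("int", "bool", "string") are illustrative; a real tool
# would drive this from its own schema.

def validate(value: str, field_type: str) -> str:
    """Parse a raw input string against the field's declared type.

    Returns the normalised value, or raises ValueError with a
    readable message the UI can surface next to the field.
    """
    if field_type == "int":
        try:
            return str(int(value))
        except ValueError:
            raise ValueError(f"expected an integer, got {value!r}")
    if field_type == "bool":
        if value.lower() in ("true", "false"):
            return value.lower()
        raise ValueError(f"expected true or false, got {value!r}")
    if field_type == "string":
        return value
    raise ValueError(f"unknown field type {field_type!r}")
```

The point isn't the code itself, which is trivial; it's that nobody asked for it, and it's exactly the kind of guard rail a careful developer adds unprompted.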
This happens routinely. Give the AI a clear enough brief and it fills in the gaps that time pressure usually forces you to skip.
Test everything
Have the AI write comprehensive tests. Unit tests for the business logic. Integration tests for the API layer. And then go further: full browser-based end-to-end tests, automated and repeatable. This is usually the part teams skimp on because writing tests is time-consuming and tedious. AI doesn’t get bored and it doesn’t cut corners.
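As a concrete illustration of the unit-test layer, here is a pytest-style sketch for a hypothetical piece of config-editor business logic: applying an edit while recording an audit trail. Every name here is invented for illustration; a real suite would sit alongside integration tests for the API and browser-driven end-to-end tests.

```python
# Illustrative business logic plus its unit test. The function and the
# entry shape are hypothetical, chosen to show the testing style, not
# taken from any real codebase.

def apply_edit(entry: dict, key: str, new_value: str, user: str) -> dict:
    """Return an updated copy of the entry with an appended audit record.

    The original entry is left unmodified.
    """
    updated = dict(entry)
    old_value = updated.get(key)
    updated[key] = new_value
    updated["audit"] = list(entry.get("audit", [])) + [
        {"key": key, "old": old_value, "new": new_value, "by": user}
    ]
    return updated


def test_apply_edit_records_audit_trail():
    entry = {"timeout": "30"}
    updated = apply_edit(entry, "timeout", "60", "alice")
    assert updated["timeout"] == "60"
    assert entry == {"timeout": "30"}  # original untouched
    assert updated["audit"][-1] == {
        "key": "timeout", "old": "30", "new": "60", "by": "alice",
    }
```

Tests like this are cheap for the AI to produce in bulk, which is exactly why you should demand them.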
Insist on it. This is good engineering practice whether AI is involved or not. The difference is that AI makes it practical to actually do it thoroughly.
Proper engineering, not a party trick
The thread running through all of this is that AI development should follow the same discipline as any good engineering project. Specifications. Design discussions. Architectural decisions. Comprehensive testing. The fact that an AI is writing the code doesn’t mean you skip the rigour. If anything, it means you can afford more rigour, because the expensive part of the process has got dramatically cheaper.
You wouldn’t hand a junior developer a vague brief and walk away for eight hours. You’d check in, review progress, course-correct early. Do the same here. The AI is an unpaid, tireless, endlessly multitasking junior with an excellent memory. But it’s still a junior. Your experience and judgement are what turn its output into something production-worthy.
The challenge
Pick something from the backlog. Give yourself a day, a full day, not a stolen hour between meetings. Follow proper engineering discipline: specs, designs, architectural discussions, comprehensive testing. Don’t edit a single line of code. Every change goes through conversation.
At the end of the day, you’ll either have a deployed, useful tool and a new understanding of what AI development actually means, or you’ll have strong, informed opinions about what doesn’t work yet. Both are valuable.
The only wasted day is the one where you didn’t try.
This article is part of a broader series on AI-assisted development. For the executive perspective on why internal tools make the perfect first AI project, see the companion article.