AI coding: Getting started (or getting serious)
This is part 5 of 5. Start at part 1.
There’s a pretty high chance your developers are already using AI coding tools, whether you’ve sanctioned it or not. The tools are freely available. They make work easier and faster. Telling skilled professionals not to use something that makes them more productive is a policy that tends to be honoured in the breach.
This isn’t a criticism of your developers. It’s a recognition of reality. The question isn’t whether AI coding is happening in your organisation. The question is whether it’s happening intentionally, with appropriate process, or chaotically, in the gaps.

If you’re starting from zero
Some organisations have explicitly prohibited AI coding tools, often for understandable reasons: security concerns, IP worries, uncertainty about quality. If you’re in this position, the first step is accepting that prohibition is not a long-term strategy. Your competitors are using these tools. The productivity gap compounds. At some point, the risk of falling behind exceeds the risk of adoption.
Start with low-stakes projects. Internal tools are ideal: systems your own staff use, where the blast radius of failure is contained and the feedback loop is fast. A reporting dashboard. An internal workflow automation. Something useful but not mission-critical.
Use these projects to build institutional knowledge. What does good specification look like for AI-assisted development? What testing practices catch the problems that matter? How do you structure work so that verification doesn’t become a bottleneck? These are questions you need to answer for your context, and the only way to answer them is to do the work.
If you’re already using it informally
Many organisations are in a messier position: AI tools are in use, but inconsistently. Some developers use them heavily. Others don’t. There’s no shared understanding of what good practice looks like, no consistent approach to testing or verification, no way to know whether the productivity gains are real or whether you’re accumulating hidden problems.
The priority here is to move from informal to intentional. That means establishing standards: which tools are sanctioned, what testing is required, what review process applies. It means measuring what’s happening: are you actually seeing productivity gains, and at what quality level? It means creating forums for sharing what works and what doesn’t.
This isn’t about control for its own sake. It’s about capturing the benefits reliably rather than sporadically.
The process investments
The previous articles have made this point repeatedly, but it bears restating: the value of AI coding is constrained by your process maturity. Weak specifications produce wrong code faster. Inadequate testing lets bugs through at higher volume. Poor requirements traceability makes verification impossible.
If you’re serious about AI-assisted development, invest in these areas:
- Requirements discipline. Can your team write specifications clear enough that an AI can implement them correctly? If not, they probably weren’t clear enough for human developers either, but the feedback loop was slower and the problems were easier to hide. And yes, AI agents write exceptionally good user stories, product requirements documents and requirements specifications.
- Testing infrastructure. Do you have automated tests at multiple levels? Can you run them quickly and reliably? Do they actually validate requirements, or just exercise code paths? AI makes it easy to generate tests, but generating the right tests still requires thought. Again, AI can write not just good tests, but the infrastructure to help you run them.
- Verification workflow. How do you confirm that what was built matches what was needed? This isn’t code review in the traditional sense. It’s acceptance testing, requirements traceability, user validation. The question is whether the output is correct, not whether the code is pretty.
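One way to make requirements traceability concrete is to tag each acceptance test with the requirement it validates, then flag any requirement that has no test at all. The sketch below is a minimal, hypothetical illustration in Python: the requirement IDs, the `covers` decorator and the stand-in assertions are all invented for this example, and a real setup would use your own tracker and your test framework’s marker mechanism instead.

```python
# Hypothetical requirement IDs, e.g. exported from your spec or tracking tool.
REQUIREMENTS = {"REQ-101", "REQ-102", "REQ-103"}

def covers(*req_ids):
    """Tag a test function with the requirement IDs it validates."""
    def decorator(fn):
        fn.requirements = set(req_ids)
        return fn
    return decorator

@covers("REQ-101")
def test_report_totals_match_source():
    assert sum([2, 3]) == 5  # stand-in for a real acceptance check

@covers("REQ-102")
def test_export_includes_all_rows():
    assert len(["a", "b"]) == 2  # stand-in for a real acceptance check

def uncovered(tests):
    """Return requirement IDs with no tagged test: the traceability gap."""
    covered = set().union(*(t.requirements for t in tests))
    return REQUIREMENTS - covered

# Running uncovered() over the tagged tests reveals that REQ-103
# has no acceptance test yet, so verification cannot sign it off.
```

The point isn’t this particular mechanism; it’s that the question “which requirements have no test?” becomes something a machine can answer on every build, rather than something a reviewer has to reconstruct by hand.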
The team implications
As covered in part 3, the profile of a high-leverage AI-assisted developer is different from what you might have hired for previously. Breadth matters more. Specification and verification skills matter more. Raw coding speed matters less.
This has implications for hiring. You may need fewer people, but with different capabilities. The ability to decompose problems clearly, to think about architecture, to write meaningful tests, to understand users: these become more valuable relative to syntax fluency and typing speed.
It also has implications for developing your existing team. The developers who will thrive are those who build the judgment skills that AI can’t replace. Invest in training around requirements, testing, system design. Create opportunities for developers to work across the stack rather than specialising narrowly.
The competitive reality
This is not a technology you can wait out. The productivity differential is real and significant. Companies that figure out how to use AI coding effectively will build faster, experiment more, and adapt more quickly. Companies that delay will find themselves increasingly unable to compete on speed or cost. Every month matters. The gap compounds. The teams that are learning now are building institutional knowledge that will be hard to replicate later.
What this means for you
If you’re a CEO or CFO: ask what’s actually happening with AI coding in your organisation. If the answer is “nothing,” ask why, and whether that’s a deliberate strategy or an absence of strategy. If the answer is “informally,” ask what it would take to make it intentional.
If you’re on a board: this is worth a conversation at the next meeting. Not because AI is a magic solution, but because the productivity shift is significant enough to affect competitive position. Ask how the company is approaching it.
My next article will set out further steps that each type of reader might want to take.
If you’ve been sceptical: the scepticism was reasonable a year ago. The tools were rougher, the evidence thinner. That’s no longer the case. The question now is not whether this works, but whether you’re going to use it.
The door for early advantage is closing. The door for competitive necessity is opening. Better to walk through the first one than be pushed through the second.