
AI coding: Bonus. What you can do right now

6 min read 22 Jan 2026

If you've not seen it, you might want to read the series of articles that leads to these recommendations.

AI, agentic, software development, coding

The previous articles covered the what and the why. This is the now what: specific questions and next steps depending on your role. Click the section that applies to you.

If you’re a CEO

Your job isn’t to understand the technical details. It’s to ask the right questions and recognise whether the answers make sense. Questions to ask your CTO or most senior developer:

  • Are we using AI coding tools? If not, why not? If yes, how systematically?
  • What productivity impact have we seen, and how are we measuring it?
  • What would it take to double our current usage in the next quarter?
  • What process gaps would that expose?
  • Are we hiring for the right profile, given how these tools change what matters?
  • What are our competitors doing in this space, as far as we know?

What to listen for: vague resistance based on quality concerns is a yellow flag. Specific concerns about process maturity are a green flag, because they indicate someone who understands what’s actually required. Enthusiasm without any mention of verification or testing is a red flag.

If you’re a CFO

Your lens is economic. The promise of AI coding is significant cost reduction and faster delivery. Your job is to understand whether that’s actually materialising and how to account for it. Questions to ask:

  • What’s our current cost per feature or per sprint, and how has that changed in the last year?
  • If we’re seeing productivity gains, are they translating into either reduced headcount or increased output? Which do we want?
  • How should we be thinking about software capitalisation if the useful life of code is getting shorter?
  • What’s the investment required to adopt these tools properly: training, tooling, process development?
  • What’s the cost of not adopting, in terms of competitive position?

What to watch for: productivity claims without measurement. If someone tells you output has improved but can’t show you how they know, be sceptical. Also watch for hidden costs: if AI coding is generating technical problems that require cleanup later, the net benefit may be smaller than it appears.

If you’re a board member

You’re not going to direct the implementation. You’re going to ask whether the company has a coherent position and is executing on it. Questions for the next board meeting:

  • Does the company have a strategy for AI-assisted development, or is it happening ad hoc?
  • What’s the measured productivity impact so far?
  • How does this affect our competitive position relative to peers?
  • What risks are we managing, and how?
  • Is this topic getting appropriate executive attention, or is it being left to the engineering team to figure out?

What to look for: AI coding should be a strategic topic, not just a technical one. If it’s never been discussed at board level, that’s a gap. If it has been discussed but only in terms of risk and restriction, that’s also a gap.

If you’re a potential investor

During diligence, AI coding capability is increasingly a signal of operational maturity and future efficiency. Questions to ask:

  • How is the engineering team using AI coding tools?
  • What productivity improvements have been measured?
  • What’s the process around verification and quality control?
  • How is the team structured, and how has that changed in response to these tools?
  • What’s the company’s view on build versus buy, given the reduced cost of building?

What it tells you: a company that’s adopted AI coding thoughtfully, with measurement and process, is likely to have better engineering economics going forward. A company that’s banned it outright, or adopted it chaotically, is carrying either competitive risk or quality risk.

If you’re a CTO or senior developer

You’re the one who has to make this work. Here’s a practical starting point.

Important note: there are some product opinions below, which you can ignore if you choose. They are what I have seen work really well after a lot of experimentation, but the important thing here is to make a deliberate and purposeful choice, and run with it.

  • Pick a methodology for directing your AI agent. Don’t just prompt it ad hoc. The BMAD method is worth looking at. It provides structure for how you specify work, how you verify output, and how you iterate. Whatever methodology you choose, document it and use it consistently. The quality of your results depends extremely heavily on the quality of your direction.
  • Pick a stack. Choose technologies that are widely used, so the AI has a wealth of good code to learn from. This matters more than choosing the latest and greatest, particularly when you’re not writing the code directly yourself. I use Next.js. Whatever you choose, write down your architectural decisions and give this document to the AI agent before it starts planning. It needs context to make good choices.
  • Pick a style library and design system. This helps consistency and prevents the agent from inventing its own solutions for every UI problem. Your app will look decent from the start instead of requiring design cleanup later. I tend to use Tailwind and shadcn. You can add your own style guidelines on top.
  • Use a solid relational database and enforce type safety. I’ve had excellent results with Supabase for the database, Drizzle for the ORM, and Zod for validation. The type safety catches errors early and gives the AI clear contracts to work with.
  • Build a proper CI/CD pipeline. You’re going to be developing fast. Any friction in testing and deployment will slow you down and tempt you to skip verification steps. Go as standard as possible: GitHub and Docker are a solid foundation. Automate everything. If deployment isn’t a single command, fix that first (have AI help you).
  • Create a skeleton app generator. Have your AI agent build you a template project with all these components configured and working together. Every time you start something new, run the generator and you’re immediately productive with your full stack, your style system, your testing infrastructure, all ready to go.
  • Start with something real but bounded. Don’t try to rebuild your core platform as your first AI-assisted project. Pick an internal tool, a new feature with clear boundaries, something where you can learn the workflow without catastrophic downside.
  • Measure from the beginning. Track how long things take. Track rework. Track defects. You need data to know whether this is working and to make the case for further investment.
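
The CI/CD point above can be made concrete with a minimal GitHub Actions workflow. This is a starting sketch, not a prescription: it assumes a Node-based stack (such as the Next.js setup recommended here), and the job names, image tag, and scripts are illustrative, to be adapted to your own project.

```yaml
# .github/workflows/ci.yml — a minimal sketch: test and build on every push,
# so verification is never an optional step you're tempted to skip.
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test        # runs on every push — verification stays in the loop
      - run: npm run build
      - run: docker build -t myapp:${{ github.sha }} .  # one image per commit
```

From here, deployment can be a single command (or a single additional job gated on this one passing), which is the friction level you want when the AI agent is producing changes quickly.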
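
The type-safety point above can be sketched without any dependencies. In practice you'd reach for Zod for runtime validation and Drizzle for the table schema, as recommended; the `NewUser` type and `validateNewUser` function below are hypothetical names I'm using to illustrate the principle: one explicit contract that the compiler, the runtime, and the AI agent can all see, enforced at the boundary.

```typescript
// A dependency-free sketch of the "clear contracts" idea. With Zod this
// whole validator collapses to a z.object(...) schema, but the shape of
// the contract is the same: typed on the inside, checked at the edge.

type NewUser = {
  email: string;
  displayName: string;
  age: number;
};

function validateNewUser(input: unknown): NewUser {
  const o = input as Record<string, unknown> | null;
  if (typeof o?.email !== "string" || !o.email.includes("@")) {
    throw new Error("email must be a valid-looking string");
  }
  if (typeof o?.displayName !== "string" || o.displayName.length === 0) {
    throw new Error("displayName must be a non-empty string");
  }
  if (typeof o?.age !== "number" || !Number.isInteger(o.age) || o.age < 0) {
    throw new Error("age must be a non-negative integer");
  }
  // Past this point, the rest of the app works with a precise type.
  return { email: o.email, displayName: o.displayName, age: o.age };
}

// Valid input passes through with the narrow type...
const ok = validateNewUser({ email: "a@b.co", displayName: "Ada", age: 36 });
console.log(ok.displayName); // "Ada"

// ...and bad input fails loudly at the boundary, not deep inside the app.
try {
  validateNewUser({ email: "not-an-email", displayName: "", age: -1 });
} catch (e) {
  console.log((e as Error).message);
}
```

This is exactly the kind of contract that gives an AI agent something concrete to satisfy: if generated code produces the wrong shape, it fails at the validation boundary rather than corrupting data downstream.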

More

AI Coding Series Index
