Ways of Working Cheatsheet
Operating Principles
The non-obvious guidelines — 21 cards covering the principles people are most likely to get wrong or forget. If something here surprises you, read the full document.
Process proportionate to risk
A 2-day feature gets a ticket. A quarter-long initiative gets discovery, a PRD, a one-pager, and exec sign-off. Same principles, different rigour.
Not: Same process for everything. Instead: Three tiers, escalate as needed.
Every sprint produces a tangible output
An API in Postman, a schema running on test data, an architecture decision. Not necessarily a UI. But always something reviewable.
Not: Invisible sprints. Not: Artificial demos.
4+ sprints = off-ramps
Decompose into independently shippable increments of 1–4 sprints. Each has its own go/no-go. You can stop after any increment without wasting the previous ones.
Not: Monolithic plans. Not: Sequential commitment.
The middle is a conversation
Product owns the problem, engineering owns the solution. The space between is resolved through constant dialogue — not documents thrown over a wall.
Not: Handoffs. Not: PMs writing tech specs.
Engineering allocation is engineering’s
15–25% of capacity for tech debt, reliability, tooling. Head of Eng owns it. PMs have visibility but don’t approve or reject.
Not: Asking permission for tech debt. Instead: Transparency without control.
PM’s most important job is saying no
Head of Product provides air cover. Stakeholders submit requests; PMs assess them. Nobody goes directly to developers. “Urgent” usually isn’t.
Not: Devs fielding stakeholder requests. Instead: Single intake through PM.
No story points. No daily standups.
Size items as small/medium/large at refinement. Async updates by default — squads choose if they want a daily sync. Measure throughput and cycle time, not estimates.
Not: Planning poker. Not: Mandatory ceremonies.
No pre-merge gates by default
Code goes straight to main behind feature flags. CI does the gatekeeping. Post-merge review for learning, not approval. PRs only for shared APIs, migrations, security, new joiners.
Not: PRs as bottleneck. Instead: Automated gates + post-merge learning.
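The "straight to main behind feature flags" flow can be sketched as follows — a minimal illustration only; the flag lookup and function names are assumed, not from the document:

```python
# Illustrative sketch (not from the document): unfinished work lands on main
# but stays dark behind a feature flag, so merging and releasing are decoupled.
import os

def flag_enabled(name: str) -> bool:
    # Stand-in for a real flag service; here flags are read from the environment.
    return os.environ.get(f"FLAG_{name.upper()}", "off") == "on"

def new_pricing_total(cart) -> int:
    # Hypothetical new code path: merged early, dark until the flag flips.
    return sum(item["price"] for item in cart) + 1

def checkout(cart) -> int:
    if flag_enabled("new_pricing"):
        return new_pricing_total(cart)
    return sum(item["price"] for item in cart)  # current behaviour

print(checkout([{"price": 5}, {"price": 3}]))  # flag off: current path, 8
```

Because the new path is unreachable until the flag flips, CI can run against main continuously while the release decision stays a runtime toggle.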
QA has a voice, not a veto
QA’s quality concerns are heard and documented. If overruled, the risk is acknowledged and logged. QA works in parallel with devs, not sequentially after them.
Not: QA as gate at the end. Instead: QA embedded throughout.
Define rollback before release
Rollback criteria (“if error rate exceeds X”) are set before the flag flips, not during the incident. First line of defence is a flag toggle in seconds.
Not: Deciding thresholds under pressure. Instead: Pre-agreed, automated.
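Pre-agreed, automated rollback criteria can be sketched like this — the names and the 2% threshold are illustrative assumptions, not values from the document:

```python
# Illustrative sketch: rollback criteria agreed before the flag flips,
# then evaluated mechanically -- no judgement calls mid-incident.
from dataclasses import dataclass

@dataclass
class RollbackCriteria:
    max_error_rate: float   # e.g. agreed at release planning: "if error rate exceeds 2%"
    window_minutes: int     # how long the rate must be sustained before acting

def should_roll_back(observed_error_rate: float, criteria: RollbackCriteria) -> bool:
    """First line of defence is a flag toggle, triggered by a pre-agreed rule."""
    return observed_error_rate > criteria.max_error_rate

criteria = RollbackCriteria(max_error_rate=0.02, window_minutes=10)

if should_roll_back(observed_error_rate=0.05, criteria=criteria):
    # In a real system this would call the feature-flag service to toggle off.
    print("roll back: toggle flag off")
```

The point is that the comparison is decided once, in calm conditions, and the incident response reduces to executing it.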
Metrics are diagnostic, not performance
Metrics tell you where to look, not what to conclude. Derived from tooling, never manual. Never measure individual velocity, hours worked, or sprint commitment accuracy.
Not: League tables. Instead: Trends with context.
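"Derived from tooling, never manual" might look like the following — a hypothetical sketch in which ticket timestamps are assumed to come straight from the ticket system:

```python
# Illustrative sketch: cycle time derived from tooling timestamps and
# reported as a team-level trend -- never a per-developer league table.
from datetime import datetime
from statistics import median

tickets = [  # (started, finished) pulled from the ticket system, not entered by hand
    (datetime(2024, 5, 1), datetime(2024, 5, 3)),
    (datetime(2024, 5, 2), datetime(2024, 5, 9)),
    (datetime(2024, 5, 6), datetime(2024, 5, 8)),
]

cycle_times_days = [(done - start).days for start, done in tickets]
print(f"median cycle time: {median(cycle_times_days)} days")  # where to look, not what to conclude
```

A rising median prompts a question ("what changed?"), not a conclusion about any individual.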
Retro: max 2 actions. Same issue 3× = escalate.
Every retro action has a named owner and a date. Reviewed next retro. If the same issue persists across three retros, it goes to Head of Eng.
Not: 12 actions nobody does. Instead: 2 actions that actually happen.
IC and management are parallel tracks
Staff Engineer and Head of Eng are peers in seniority and compensation. Switching tracks is not a demotion. Don’t promote the best coder into management.
Not: Management is “up”. Instead: Two directions, equal standing.
Lead Dev success = squad output
Not their personal code. Growing developers is a first-class responsibility: fortnightly 1:1s, deliberate stretch assignments, feedback in the flow of work — not saved for reviews.
Not: Best coder who also manages. Instead: Multiplier through others.
Promote when already operating at the next level
Not on a fixed cycle. Not aspirational. “Is this person consistently doing the job already?” If yes, promote. If not, define what’s needed and revisit in 3–6 months.
Not: Annual promotion rounds. Instead: When earned.
AI raises the floor. Humans add judgment.
AI drafts, generates, and analyses. Humans review, refine, and own. Every role gets AI tooling as standard. No separate AI team — embedded in every squad.
Not: AI as a separate initiative. Instead: AI as how we work.
Clean up feature flags within 1 sprint
Fully rolled out or killed → remove the flag and dead code. Flags older than 60 days without a documented reason get escalated. Stale flags are tech debt that compounds.
Not: Flags accumulating forever. Instead: Monthly hygiene report.
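The monthly hygiene report could be as simple as this sketch — the flag-registry format and field names are assumptions for illustration:

```python
# Illustrative sketch: flag flags older than 60 days with no documented
# reason, so they can be escalated in the monthly hygiene report.
from datetime import date, timedelta

flags = [  # name, created, documented reason for still existing (or None)
    {"name": "new-checkout", "created": date(2024, 1, 10), "reason": None},
    {"name": "beta-search", "created": date(2024, 4, 1), "reason": "staged rollout, see ticket"},
]

def stale_flags(flags, today, max_age_days=60):
    cutoff = today - timedelta(days=max_age_days)
    return [f["name"] for f in flags if f["created"] < cutoff and not f["reason"]]

print(stale_flags(flags, today=date(2024, 5, 1)))  # these get escalated
```

Keeping the check mechanical is what stops stale flags compounding: the report runs monthly whether or not anyone remembers the flag.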
Prototypes are for validation, not build
Interactive prototypes test ideas with users. Wireframes are the build handoff. Engineers build from the design system, not by recreating a polished prototype pixel-for-pixel.
Not: Dev recreating a prototype. Instead: Dev applying the design system.
Tier 3 ideas go through the CTO first
Before any serious work — even framing — confirm strategic alignment. A quick conversation, not a presentation. Don’t spend 2 weeks on an idea the exec team already has other plans for.
Not: PM discovers misalignment at go/no-go. Instead: 5-minute CTO check first.
If a new joiner asks “how do I deploy?” — docs failed
Onboarding is the quality check. Zero to “can build and deploy” in under 2 hours. First real commit by day 2–3. Architecture overview is 1 page, not a wiki.
Not: Elaborate wiki nobody reads. Instead: Minimal docs that stay current.
Only PM injects work mid-sprint
After consulting Lead Dev on capacity. Nobody goes directly to a dev. If unplanned work consistently exceeds 20% of capacity, fix the systemic cause — don’t normalise it.
Not: CEO → dev. Instead: CEO → HoP → PM → sprint.
For the full details behind these principles, see the Engineering Process.