Part of The GreenBox Story – a standalone reference for the full series.
Over nineteen posts, the GreenBox team went from building the wrong thing fast to running a 12,000-subscriber operation across three cities. Along the way they learned twenty techniques for turning vague ideas into working software, understanding customers, scaling architecture, and keeping teams aligned. This is the one-page reference.
How to use this
The table is organised by the problem you’re facing, not the order the techniques appear in the series. If you’re wondering “which technique do I need?”, scan the left column. If you’re wondering “what does this technique actually produce?”, scan the right.
Every technique name links to the full post with the GreenBox team’s story of learning and applying it.
Discovery and domain understanding
| Context / Problem | Technique | Outcome / Output |
|---|---|---|
| The team doesn’t share a mental model of how the business works. Assumptions are invisible. | Event Storming | A shared timeline of domain events on a wall. Hotspots and unknowns made visible. Everyone sees the same picture. |
| Many possible features, no way to prioritise. Where does value flow and where are the bottlenecks? | Value Stream Mapping | A visual map showing which steps create value, which are waste, and where the team should focus first. |
| User stories are vague. “Subscribe to a produce box” could mean five different things to five people. | Example Mapping | Concrete, testable examples for each business rule, plus explicit unknowns (red cards) to resolve before building. |
| Cards on a table aren’t software. How do concrete examples become working, tested code? | BDD / Gherkin | Executable specifications that drive implementation. Examples become automated tests. LLMs accelerate the code. |
| Features are shipping but the business metric isn’t moving. No connection between work and goals. | Impact Mapping | A map connecting deliverables to impacts to actors to a measurable goal. Reveals which features actually matter. |
| Long backlog, no sense of the full user journey. Hard to plan a coherent release. | User Story Mapping | A visual timeline of the user’s journey, sliced into releasable increments that make sense end to end. |
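To make the BDD / Gherkin row concrete: here is a minimal sketch of one Example Mapping card turned into an executable specification, in plain Python test functions rather than a full Gherkin toolchain. The `subscribe()` function, the delivery-area rule, and the city names are all hypothetical, invented for illustration – they are not taken from the series.

```python
# Hypothetical service area and subscription logic, for illustration only.
DELIVERY_AREAS = {"Bristol", "Bath", "Cardiff"}

def subscribe(city: str, box_size: str) -> dict:
    """Create a subscription, enforcing one illustrative business rule."""
    if city not in DELIVERY_AREAS:
        raise ValueError("outside delivery area")
    return {"city": city, "box_size": box_size, "status": "active"}

# Each Example Mapping card becomes one test; Given/When/Then live in comments.
def test_subscriber_inside_delivery_area_gets_active_subscription():
    # Given a customer in a served city
    # When they subscribe to a medium box
    sub = subscribe("Bristol", "medium")
    # Then the subscription is active
    assert sub["status"] == "active"

def test_subscriber_outside_delivery_area_is_rejected():
    # Given a customer in an unserved city
    # When they try to subscribe, Then they are rejected
    try:
        subscribe("Glasgow", "medium")
        assert False, "expected rejection"
    except ValueError:
        pass
```

The point of the technique is the traceability: each red or green card from the Example Mapping session maps one-to-one onto a named test, so a failing test points straight back at the business rule it encodes.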
Customer and business understanding
| Context / Problem | Technique | Outcome / Output |
|---|---|---|
| Churn is high and nobody knows why customers leave – or why they stay. | Jobs to Be Done | The actual job customers hire the product for. Clarity on what’s core to retention versus what’s a nice-to-have. |
| A key assumption turned out to be wrong. What else does the team believe that hasn’t been validated? | Assumption Mapping | A prioritised grid of assumptions by risk and evidence. Cheap experiments designed for the riskiest beliefs. |
| The product works but the unit economics don’t. Does the business model sustain itself at scale? | Business Model Canvas | A one-page view of all nine building blocks of the business, exposing dependencies and second-order effects. |
Architecture and scaling
| Context / Problem | Technique | Outcome / Output |
|---|---|---|
| The codebase is a monolith. A simple feature touches dozens of files because everything is tangled. | Domain-Driven Design | Bounded contexts with explicit language, loose coupling via events, and the ability to work independently. |
| Critical business logic lives in one person’s head. It can’t be delegated, taught, or scaled. | Decision Tables | A formal table of every condition combination and its correct outcome. Domain expertise made explicit and testable. |
| Decisions made months ago are forgotten. New developers guess at intent and guess wrong. | Architecture Decision Records | Short documents capturing what was decided, why, what alternatives were considered, and what the consequences are. |
| LLMs make building cheap, so every capability looks worth building. But should you build or buy? | Wardley Mapping | A strategic map showing which capabilities differentiate you (build) and which are commodity (buy). |
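The Decision Tables row is the most directly codeable technique here. A minimal sketch, assuming a hypothetical delivery-fee rule – the conditions, thresholds, and fees below are invented for illustration, not from the series:

```python
from itertools import product

# Decision table as data. Each row maps one full combination of conditions
# (is_subscriber, order_over_threshold, rural_address) to a fee in pence.
DELIVERY_FEE_TABLE = {
    (True,  True,  False): 0,
    (True,  True,  True):  150,
    (True,  False, False): 150,
    (True,  False, True):  300,
    (False, True,  False): 150,
    (False, True,  True):  300,
    (False, False, False): 300,
    (False, False, True):  450,
}

def delivery_fee(is_subscriber: bool, over_threshold: bool, rural: bool) -> int:
    return DELIVERY_FEE_TABLE[(is_subscriber, over_threshold, rural)]

# Exhaustiveness check: every combination of the three conditions is covered,
# so there is no hidden "else" branch living in one person's head.
assert all(combo in DELIVERY_FEE_TABLE
           for combo in product([True, False], repeat=3))
```

The design choice is that the table is data, not nested `if` statements: a domain expert can review every row, and the exhaustiveness assertion fails loudly if a new condition is added without deciding all of its combinations.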
Team practices at scale
| Context / Problem | Technique | Outcome / Output |
|---|---|---|
| One developer uses an LLM solo. The code arrives complete but nobody else understands it. | Ensemble Programming | The whole team thinks while the LLM types. Problems caught in real time. Everyone understands the result. |
| The team applies every discovery technique to every story. Workshop fatigue is setting in. | Cynefin | A classification (Clear / Complicated / Complex / Chaotic) that determines the right level of discovery for each piece of work. |
| The LLM writes code that works and passes tests but doesn’t think about what could go wrong. | Threat Modelling (STRIDE) | A systematic checklist of security threats at system boundaries, with mitigations planned before code ships. |
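The six STRIDE categories in the row above are standard; everything else in this sketch – the trust boundaries, the mitigations, and the `unreviewed_threats` helper – is a hypothetical example of keeping a threat model as reviewable data for a produce-box system:

```python
# The six standard STRIDE threat categories.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# One entry per (trust boundary, threat) pair, with the planned mitigation.
# Boundaries and mitigations are invented for illustration.
threat_model = [
    {"boundary": "browser -> subscription API", "threat": "S",
     "mitigation": "session tokens with short expiry"},
    {"boundary": "browser -> subscription API", "threat": "T",
     "mitigation": "TLS everywhere, server-side validation"},
    {"boundary": "API -> payments provider", "threat": "I",
     "mitigation": "store provider tokens, never raw card data"},
]

def unreviewed_threats(model: list, boundary: str) -> list:
    """STRIDE categories with no recorded mitigation at this boundary."""
    covered = {entry["threat"] for entry in model
               if entry["boundary"] == boundary}
    return sorted(set(STRIDE) - covered)
```

Run against the browser boundary above, the helper reports that Repudiation, Information disclosure, Denial of service, and Elevation of privilege have not been reviewed yet – exactly the checklist-style gap the technique is meant to surface before code ships.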
Planning and delivery
| Context / Problem | Technique | Outcome / Output |
|---|---|---|
| Discovery is done but there’s no delivery rhythm. Work starts and finishes whenever. Nobody tracks progress. | Sprint Planning | Two-week iterations with sprint goals connected to the Impact Map. Daily standups. Sprint reviews. Progress visibility. |
| Too many insights, not enough prioritisation. The team knows what’s wrong but can’t decide what to fix first. | Now/Next/Later Roadmapping | A prioritised plan with quarterly themes and measurable outcomes. A story the board can understand. |
| Multiple squads keep surprising each other with dependencies. No coordination between sprint plans. | Quarterly Planning Day | Aligned themes per squad, mapped dependencies, sprint coordination between teams. |
| Great weekly habits but no shared strategy connecting them. Squads are locally excellent, globally confused. | The Planning Onion | Vision/Year/Quarter/Sprint/Day layers connected. Every sprint traces back to a strategic goal. |
For the full planning framework, see The Planning Onion: Every Layer in One Place.
Choosing the right technique
Not sure where to start? Two rules of thumb from the series:
If you adopt one thing, make it Example Mapping. It’s the shortest (twenty-five minutes), the most structured, and it produces the most immediately useful output. Do it before every story.
If you’re overwhelmed, use Cynefin to classify your work first. If three people agree in five minutes, it’s Clear – just build it. If an expert can figure it out with analysis, it’s Complicated – Example Map it. If nobody knows the answer, it’s Complex – experiment. If the building is on fire, it’s Chaotic – act.
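That rule of thumb is small enough to encode directly. A sketch, with the three yes/no questions paraphrased as boolean parameters for illustration (this is the series' heuristic, not a formal Cynefin assessment):

```python
def classify(quick_consensus: bool, expert_analysable: bool,
             crisis: bool) -> str:
    """Apply the rule of thumb: fire first, then consensus, then analysis."""
    if crisis:
        return "Chaotic: act"
    if quick_consensus:
        return "Clear: just build it"
    if expert_analysable:
        return "Complicated: Example Map it"
    return "Complex: experiment"
```

The ordering matters: a crisis trumps everything, and only when the team cannot agree quickly and no expert can analyse their way to an answer does the work default to Complex.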
The techniques compose. Event Storming maps the domain. Value Stream Mapping finds the bottlenecks. Example Mapping makes stories concrete. Impact Mapping connects features to goals. They’re not alternatives – they’re layers. Start with the one that matches your most pressing problem and add the others as you need them.
The full series
The GreenBox Story follows a produce-box startup from first idea to 12,000 subscribers across five series:
- From Chaos to Clarity – understanding the problem before building the solution
- Shipping What Matters – building the right things in the right order
- Finding the Fit – knowing what customers actually want
- Scaling the Machine – growing without breaking
- Working Together at Scale – keeping multiple teams aligned and effective
Or start from the beginning with The GreenBox Story.