When Workshops Go Wrong

November 03, 2026 · 29 min read

Part of The GreenBox Story — a standalone reference for the full series.

Over the last few posts, I’ve walked through Event Storming, Value Stream Mapping, Example Mapping, Impact Mapping, and User Story Mapping using the GreenBox team as a running example. Each post showed the technique working. The team asked good questions, surfaced unknowns, and came out the other side with shared understanding.

Real life isn’t always that clean.

Discovery workshops involve putting people in a room and asking them to think out loud together about hard problems. Sometimes that goes brilliantly. Sometimes it goes sideways. The techniques are sound, but they involve humans, and humans bring politics, egos, fatigue, and assumptions into every room they enter.

I’ve facilitated dozens of these sessions across different organisations – from five-person startups to teams of fifty inside enterprise companies. The failure modes are remarkably consistent. The same things go wrong for the same reasons, regardless of industry, team size, or how many agile certifications people have on their LinkedIn profiles.

Here are the failure modes I see most often – illustrated with GreenBox scenarios that are entirely plausible.

None of these are hypothetical. I’ve changed the names and details, but every scenario below is something I’ve seen happen. Some of them I’ve caused myself, by facilitating badly or reading the room wrong. The best way to learn facilitation is to get it wrong and notice what happened.

The Event Storm where nobody argues

Picture this. The GreenBox team has booked the meeting room. The sticky notes are out. The markers are uncapped. Everyone’s ready to storm.

The team gathers around the wall. Maya starts writing sticky notes. She’s the founder. She knows the domain. She’s confident, articulate, and moves fast.

Tom picks up a marker and writes a note. It says roughly what Maya’s notes say. Priya does the same. Jas glances at Maya’s notes before writing her own. Sam contributes a few operational events that nobody disagrees with.

Forty-five minutes in, the wall looks great. A clean timeline, consistent language, no contradictions. Lee steps back and counts the pink hotspot stickies.

Zero.

That’s a problem.

No hotspots means nobody disagreed. Nobody disagreed because nobody challenged Maya. And nobody challenged Maya because she’s the founder, she clearly knows this stuff, and it feels rude to question her in her own company’s workshop.

The result is a beautiful wall of sticky notes that documents Maya’s mental model. Which is exactly what the team had before the workshop – the only difference is that it’s now on a wall instead of in Maya’s head. They haven’t built shared understanding. They’ve built a transcription.

This is the single most common failure mode in Event Storming, and it happens in nearly every organisation where there’s a clear power imbalance. It’s not malice. It’s human nature. People defer to authority, especially when the authority figure is passionate and knowledgeable.

You can spot it early if you watch the room. Are people writing their own notes independently, or waiting to see what the senior person writes first? Are sticky notes appearing all over the wall, or clustering around the same area? Is there a five-second pause after each note goes up where everyone checks whether the boss looks happy before continuing?

If you see these patterns, intervene immediately. Don’t wait until the end to notice there are no hotspots.

How to fix it:

The facilitator has to actively provoke disagreement. Not rudely – surgically.

“Tom, when you built the subscription system in week one, you assumed box contents were fixed. Walk me through that assumption. What did you think the flow looked like?”

“Priya, you had questions about farm commitment deadlines that nobody could answer. Put those up as hotspots right now.”

“Sam, you deal with customers. What’s the most common thing Maya tells you that surprises you?”

Pair people up and give them five minutes to find contradictions in the timeline. “Your job is to find something that doesn’t make sense. If you can’t find anything, look harder.”

If the wall still has no pink notes after this, something is wrong with the room dynamics, and no amount of sticky notes will fix that. You might need to run smaller sessions without the most senior person present, then bring the findings together.

There’s a subtler version of this problem too. Sometimes people do write their own notes, and the notes are genuinely different from the founder’s – but nobody notices the differences because everybody assumes they mean the same thing. Maya writes “Box Packed.” Tom writes “Order Fulfilled.” Are those the same event? Maybe. Maybe not. In Tom’s mind, fulfilment includes dispatch. In Maya’s, packing and dispatch are separate steps with different people responsible.

The facilitator’s job is to spot these near-misses and pull on the thread. “Maya, you said ‘Box Packed.’ Tom, you said ‘Order Fulfilled.’ Are those the same thing? Talk to each other.” That conversation might take thirty seconds. It might take fifteen minutes. Either way, it’s cheaper than discovering the mismatch in a code review three weeks later.

The Value Stream Map that reveals a political problem

This one’s more nuanced, and it’s the scenario that’s most likely to make a facilitator sweat.

The team maps the supply chain from farm to customer doorstep. They track each step, who does it, how long it takes, and where the delays live.

The biggest bottleneck jumps off the wall. Every Tuesday evening, Maya manually matches supply to demand. She reviews what each farm has available, checks subscriber counts and box sizes, works out the maths, makes substitution decisions, and produces a packing list. It takes her three to four hours. If she’s ill or travelling, it doesn’t happen. The whole operation stalls.

The obvious solution is automation. Tom sees it immediately. “We could build a matching algorithm. Farms submit availability by Monday, the system runs the match overnight, Maya reviews exceptions in the morning. Thirty minutes instead of four hours.”

Maya goes quiet. Then: “I don’t think you understand how nuanced the matching is. It’s not just maths. I know which farms have the best zucchini this time of year. I know that Mrs Patterson hates beetroot even though she hasn’t told us. I know when Dave’s optimistic about his yield and I should order less than he offers.”

She’s right – the matching involves genuine domain expertise. But she’s also protecting something. This is her favourite part of the job. It’s where she feels most useful, most connected to the business. Automating it feels like automating herself out of the equation.

And the team can see it. There’s an awkward pause. Tom glances at Priya. Jas studies her notebook. Nobody wants to be the person who says “Maya, you’re the bottleneck and you’re resisting the fix.”

This is political, and the Value Stream Map didn’t cause the politics – it revealed them. Which is exactly what it’s supposed to do.

How to fix it:

First, acknowledge the human side honestly. Maya’s resistance isn’t irrational. She’s protecting something meaningful to her. Steamrolling her with efficiency arguments will damage trust and probably fail anyway.

Second, reframe the automation. It doesn’t have to mean replacement. The tool could surface recommendations that Maya reviews and adjusts. It handles the arithmetic – aggregating supply, calculating shortfalls, flagging mismatches – and Maya makes the judgement calls. She keeps the parts that require expertise and taste. The system handles the parts that are just tedious maths.

“What if the system did the boring bit – the adding up and cross-referencing – and gave you a draft to work from? You’d still make the decisions. You’d just make them in thirty minutes instead of four hours.”

That’s a different conversation from “let’s automate your job.”
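To make the reframe concrete, here’s a minimal sketch of what “the boring bit” might look like – aggregating farm availability and flagging shortfalls for a human to resolve. Everything in it (the function name, the tuple shapes, the produce names) is invented for illustration; it isn’t GreenBox code, and the real judgement calls stay with Maya.

```python
# Illustrative sketch only -- the genuinely hard part of GreenBox's
# matching (farm quality, customer quirks, optimistic yield estimates)
# stays with Maya. This handles just the adding up and cross-referencing.

from collections import defaultdict

def draft_packing_plan(availability, demand):
    """Aggregate supply per produce type and flag shortfalls.

    availability: list of (farm, produce, kg) tuples
    demand: dict mapping produce -> kg required this week
    Returns (supply_totals, shortfalls) -- the draft a human reviews.
    """
    supply = defaultdict(float)
    for farm, produce, kg in availability:
        supply[produce] += kg

    # Anything under-supplied is an exception for Maya to decide on:
    # substitute, short the boxes, or ring the farm.
    shortfalls = {
        produce: required - supply.get(produce, 0.0)
        for produce, required in demand.items()
        if supply.get(produce, 0.0) < required
    }
    return dict(supply), shortfalls
```

The design choice matters more than the code: the function produces a draft and a list of exceptions, not a final decision. That’s the difference between a tool Maya reviews in thirty minutes and a system that automates her out of the loop.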

The map reveals the problem. The team navigates the solution with empathy. If you treat the Value Stream Map as a weapon – “look, the data says you’re the bottleneck” – you’ll win the argument and lose the person.

I’ve seen this pattern in larger organisations too. A Value Stream Map reveals that a particular team or individual is the bottleneck, and someone uses it to justify restructuring or headcount changes. The next time you try to run a Value Stream Mapping session, nobody wants to participate. They’ve learned that honesty gets punished.

Value Stream Maps work because people are willing to be honest about how things actually work, including the messy, inefficient, human parts. If you use that honesty against them, you’ll never get it again.

The Example Map with fifteen red cards

This is the scenario that looks like a disaster but is actually a success in disguise.

The team sits down to Example Map the story “Farm lists available produce.” It should be straightforward. They write the story on a yellow card, agree on the first rule – “farms submit availability weekly” – and start writing examples.

Then the questions start.

“How far in advance does a farm need to submit?” Nobody knows. Red card.

“What units? Kilograms? Items? Bunches? Does it vary by produce type?” Nobody’s sure. Red card.

“What happens if a farm submits availability and then changes their mind? Can they update? Until when?” Maya thinks they can update until Monday evening, but she hasn’t told anyone this. Red card.

“What if a farm lists something they’ve never listed before? Do we need to approve new produce types?” Nobody’s considered this. Red card.

“What about minimum quantities? Can a farm list two carrots?” Good question. Red card.

“What format do farms submit in? A web form? An email? A phone call to Maya?” This one sparks a side conversation about whether farms will actually use a web portal. Sam remembers Rachel’s satellite broadband from the Event Storm – “Took me twenty minutes to load the map to get here.” If the portal requires a stable internet connection to submit availability, Rachel can’t use it. The farm interface needs to work on unreliable connections: save progress, handle dropouts, maybe even work offline and sync when the signal comes back. Red card – and a design constraint that shapes the entire farm experience.

“What about organic certification? Does a farm need to prove it? Who checks?” Nobody’s thought about this at all. Red card.

Fifteen minutes in, the team has three green examples and fifteen red cards. The red cards aren’t being answered – they’re generating more red cards. Every question reveals two more questions behind it.

Jas looks at the board and says, “This story isn’t ready.”

She’s right. And this isn’t a failure of the technique. This is the technique working exactly as designed. Example Mapping exists to find out whether a story is understood well enough to build. This one isn’t. The red cards are telling the team precisely what they don’t know, which is enormously valuable information.

The real danger isn’t the red cards. It’s what happens next.

If the team says “we need to answer these questions before we can build this story,” they’re doing it right. Someone – probably Maya, with help from Dave and Rachel – goes away, makes decisions about availability deadlines and units and update policies, and the team reconvenes.

If the team says “we need to start this sprint, so let’s just build it and figure the details out later,” they’re back to week one. Building on guesses. Each red card becomes an assumption baked into the code. When the answers eventually come – and they will – the code has to change. Or worse, the code’s assumptions become the business rules by default, because nobody realises they were guesses.

Fifteen red cards is not a problem. Ignoring fifteen red cards is a catastrophe.

There’s a useful heuristic here. If an Example Mapping session produces more red cards than green examples, the story needs to go back to discovery. If it produces roughly equal numbers, you’re close – one more conversation might crack it. If green examples outnumber red cards three to one or better, you’re ready to build.
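If you want to encode that rule of thumb, it fits in one small function. The thresholds below are exactly the heuristic as stated above – a rule of thumb, not a standard – so treat them as a starting point and tune them to your team.

```python
# The red-card heuristic from the text, as code. Thresholds are the
# article's rule of thumb, not an industry standard.

def story_readiness(green_examples: int, red_cards: int) -> str:
    """Classify an Example Mapping outcome by the green:red ratio."""
    if red_cards == 0:
        return "ready"  # no open questions at all -- rare, but fine
    if red_cards > green_examples:
        return "back to discovery"
    if green_examples >= 3 * red_cards:
        return "ready"
    return "one more conversation"
```

By this measure, the GreenBox session above – three green examples, fifteen red cards – goes straight back to discovery.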

The team that respects the red cards will feel slower in the short term. They’ll have sprints where stories get sent back to discovery instead of picked up for development. Product managers will get twitchy. Stakeholders will ask why the velocity chart looks anaemic.

But the stories that do make it to development will be clean. Developers will know what they’re building. Testers will know what to test. The rework rate will drop. There will be fewer “oh wait, we didn’t think about that” moments halfway through implementation. Over a couple of months, the “slow” team will be delivering more working software than the team that pushed ahead on guesses.

The red cards are a gift. They’re the technique telling you “you’re not ready.” Listen to it.

The Impact Map that challenges the roadmap

Every team has a sacred cow. Impact Mapping has a habit of finding it.

The team builds an Impact Map starting from their current goal: reach two hundred paying subscribers within three months. They map the actors, the impacts they need from those actors, and the deliverables that might produce those impacts.

Tom has a feature he’s been excited about for weeks: a farm analytics dashboard. Farms would see data about their sales, seasonal trends, popular items. He’s convinced it’ll strengthen farm relationships and make GreenBox more attractive to suppliers. It’s also technically interesting – the kind of problem he enjoys solving.

The team maps it. The actor is the farm. The impact is… what, exactly? More farms sign up? Existing farms list more produce? Neither of those impacts connects directly to getting two hundred paying subscribers. Subscribers don’t see the farm dashboard. It doesn’t affect their signup decision or retention.

The map says the farm analytics dashboard doesn’t serve the current goal.

Tom is frustrated. “It’s important. Farms are the foundation of this business. If we don’t invest in farm tools, we’ll lose them.”

There’s an uncomfortable silence. Someone suggests moving on to the next feature. The tension sits in the room like a bad smell.

This is a pivotal moment. If the team handles it well, they’ll have an honest conversation about priorities and come out stronger. If they handle it badly, Tom will feel shut down and stop contributing ideas.

How to fix it:

The Impact Map is a conversation tool, not a verdict. It doesn’t say “never build the farm dashboard.” It says “this doesn’t serve the goal of reaching two hundred subscribers in three months.”

If Tom’s concern is farm retention, that might be a different goal entirely. The team could build a second Impact Map with “retain all current farm partners” as the goal, and the farm dashboard would likely connect beautifully.

The point is to make the trade-off visible. If the team builds the farm dashboard now, they’re investing time in farm retention instead of subscriber growth. That might be the right call – but it should be a conscious choice, not an accident.

What you must not do is use the Impact Map as a weapon. “The map says your feature doesn’t matter” is a terrible thing to say to a founder. “The map says this serves a different goal – shall we prioritise that goal?” is a useful conversation.

Maps don’t make decisions. People make decisions. The map makes sure they’re making them with their eyes open.

There’s a broader lesson here about pet features. Every organisation has them. A feature that someone senior wants built because they’re personally invested in it, even though it doesn’t obviously serve the current strategy. Impact Mapping doesn’t eliminate pet features – it makes the cost of building them visible. Sometimes, once the cost is visible, the senior person backs down. Sometimes they don’t, and the team builds it anyway. At least they know what they’re trading away.

The worst outcome is building a pet feature without realising it’s a pet feature. That’s how teams end up six months into a project wondering why their metrics haven’t moved.

The senior person who won’t participate

You’ll meet this person on almost every team. They’re usually experienced, usually good at their job, and usually right that most meetings are pointless.

Tom has been a developer for twelve years. He’s seen a lot of workshops. Brainstorming sessions that went nowhere. Planning poker that felt like theatre. Retrospectives where the same problems came up every fortnight and nothing changed. At his last company, the “agile transformation” involved two months of workshops that produced a Jira board identical to the one they already had. He left shortly afterwards.

When Jas suggests an Example Mapping session, Tom pushes back. “Just write the acceptance criteria in the ticket. I don’t need a workshop to understand a user story. Tell me what to build and I’ll build it.”

He’s not being difficult. He’s being efficient – or at least, he thinks he is. He’s learned through experience that most workshops are a waste of time. The problem is that he’s pattern-matching on the word “workshop” and rejecting the whole category.

And honestly? He’s earned that scepticism. Most organisations have put Tom through enough pointless meetings to justify a lifetime of resistance. The burden of proof is on the workshop, not on Tom.

How to fix it:

Don’t fight it. Don’t force attendance. Don’t make it a management issue.

I’ve watched managers drag reluctant developers into workshops. The developer sits in the corner, arms crossed, contributing nothing, radiating hostility. The rest of the team picks up on the energy and becomes cautious. The workshop is worse than if the sceptic hadn’t been there at all.

Forced attendance creates resentment and guarantees that even if Tom is physically in the room, he’s mentally checked out.

Instead, start small. Run one Example Mapping session without Tom. Make it twenty-five minutes. Produce a concrete output: a set of examples that Tom can review at his desk. When he reads them, he’ll almost certainly have questions or corrections. That’s the hook.

“Tom, we mapped out the subscription renewal story and found eight edge cases. Can you take a look at this? We think two of them might affect the payment integration.”

Now Tom is engaging with the output of the session. He’s contributing his expertise. He’s seeing the value without sitting through a workshop.

After a few rounds of this, Tom might start showing up. Or he might not – but he’ll be engaged with the results, which is better than nothing. It’s not as good as having him in the room – his experience and understanding don’t get contributed to the group’s thinking, and the team misses the challenges he’d raise in the moment. But it might be the best you can get without creating hostility, and sometimes you need to accept that.

The goal is shared understanding. Having Tom review the output rather than attend the session doesn’t fully achieve that – he’s consuming the understanding rather than helping build it, and the rest of the team doesn’t benefit from his perspective. But it’s a step in the right direction, and pushing harder risks losing him entirely.

Here’s what actually happened with Tom at GreenBox. One Thursday, he skipped an Example Mapping session to pick up Ava from school because Sarah had a work commitment. It was a real obligation – but it was also an excuse. He could have rearranged; he chose not to. When he read the session output later, he found an edge case in the payment retry logic that the team had missed and that would have been a bug in production. If he’d been in the room, he’d have caught it in the moment. He didn’t say anything about it. He just fixed it in his next PR and moved on. But the knowledge that his absence had a cost sat with him.

One more thing about sceptics: they’re often right about the problem, just wrong about the solution. Tom is right that most meetings are a waste of time. He’s right that a lot of “agile ceremonies” are theatre. His mistake is assuming that all collaborative techniques are the same. Your job isn’t to convince him he’s wrong about meetings. Your job is to show him that this particular technique produces something he values – clarity about what to build.

If you can’t show that in twenty-five minutes, maybe the technique isn’t as useful as you think it is.

The remote workshop

This is the one that affects almost every team in 2026. Very few teams are fully co-located any more.

The team is distributed. Tom works from his home office in Fremantle, Priya from her apartment in Melbourne, Maya from the GreenBox warehouse in Margaret River. Getting everyone in the same room costs a day of travel and a hotel bill.

They try running an Event Storm on Miro. It works, in the sense that events end up on a board and a timeline takes shape. But something is missing.

The side conversations don’t happen. In a physical room, Tom would lean over to Priya and say “that doesn’t sound right” while Maya is explaining something else. That quiet challenge would surface a hotspot. On Miro, everyone’s in a single audio channel. Side conversations mean talking over people. So the challenges don’t happen.

The spatial memory is gone. In a physical room, people remember that the tricky bit is “over there on the left, near the pink cluster.” On a Miro board, everything looks the same. You zoom in, you zoom out, you lose your place. The wall becomes a map you have to navigate rather than a landscape you inhabit.

The energy drops after an hour. Screens are tiring in a way that physical rooms aren’t. People start checking email. Cameras go off. The facilitator can feel the room drifting but can’t make eye contact with the person who’s disengaged.

And then there’s Rachel. She tried to join a remote session from her farm, and her satellite connection dropped three times in the first twenty minutes. Sam ended up calling her on the phone and relaying her contributions manually – a workaround that technically worked but meant Rachel’s voice was filtered through someone else. The person with the most important farming perspective was the hardest to include digitally. “Digital first” has a cost, and the cost falls hardest on the voices you need most.

There’s also the tooling friction. In a physical room, you grab a sticky note and a marker. On Miro, you double-click to create a note, resize it, change the colour, type the text, realise you’ve accidentally moved someone else’s note, undo, try again. The tool gets between the person and the thought. By the time they’ve figured out the interface, they’ve forgotten what they were going to write.

This isn’t Miro’s fault specifically. Any digital tool adds friction that physical stickies don’t have. The best digital tools minimise that friction, but they can’t eliminate it.

How to handle it:

Accept that remote is about seventy percent as effective as in-person for these workshops. Plan accordingly.

Keep sessions to ninety minutes maximum. Shorter is better. Two sixty-minute sessions beat one two-hour session.

Take real breaks. Not “let’s pause for five minutes” while everyone checks Slack. Actual breaks where people leave their desks.

Have a dedicated facilitator who does nothing except keep energy up, call on quiet people, and manage the board. In person, a facilitator can do this casually. Remotely, it’s a full-time job.

Insist on cameras on. This sounds minor. It isn’t. When cameras are off, people disengage. And it’s not just the people whose cameras are off who suffer – talking to a grid of initials and hoping someone’s listening is extremely dispiriting for whoever’s presenting.

Use the biggest screen people have. A thirteen-inch laptop is miserable for collaborative workshops. If someone has an external monitor, now is the time to use it.

And plan for a follow-up. Whatever you think you agreed in the remote session, confirm it asynchronously afterwards. Things get lost on video calls that wouldn’t get lost in a room.

Some teams I’ve worked with do a hybrid approach: one or two in-person sessions for the big-picture stuff – Event Storming, Value Stream Mapping – and remote sessions for the more focused work like Example Mapping. That’s a reasonable trade-off if travel is expensive.

One thing that does work well remotely is Example Mapping, because it’s short, focused, and produces a concrete artefact. The session is twenty-five minutes. The output is a photograph of the cards (or a shared document with the same structure). It doesn’t rely on spatial memory or side conversations in the same way an Event Storm does.

If you’re a fully remote team, lean into the techniques that suit the medium. Don’t try to force a three-hour Event Storm over Zoom. You’ll exhaust everyone and produce mediocre results. Fly people in once a quarter for the big collaborative sessions, and use the remote time for the focused, structured ones.

And if your organisation won’t fund travel for workshops, point out how much it costs when three developers spend two weeks building the wrong thing. Three senior developers for two weeks is easily thirty to forty thousand dollars in salary alone, before you count the opportunity cost of what they could have been building instead. A couple of flights and a meeting room start to look like a bargain.
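The back-of-envelope maths is worth making explicit when you’re arguing for the travel budget. The figures below are illustrative assumptions – substitute your own salary and travel numbers.

```python
# Back-of-envelope maths from the paragraph above. All figures are
# assumptions for illustration -- plug in your organisation's numbers.

def wasted_build_cost(devs=3, weeks=2, weekly_salary=6000):
    """Salary cost of building the wrong thing (opportunity cost excluded)."""
    return devs * weeks * weekly_salary

def workshop_travel_cost(flights=5, flight_cost=400, room_per_day=300, days=2):
    """Flights plus a meeting room for a couple of days."""
    return flights * flight_cost + room_per_day * days

# 3 devs x 2 weeks x $6,000/week = $36,000 of salary spent on the
# wrong thing, versus roughly $2,600 of flights and a room.
```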

General advice

I’ve been describing specific failure modes, but there are patterns underneath them.

The technique is not the point. Shared understanding is the point. Event Storming, Example Mapping, Impact Mapping – these are tools. If a tool isn’t producing shared understanding in your context, with your people, change how you’re using it. Shorten the session. Change the facilitator. Try a different technique. The worst thing you can do is run a workshop by the book while everyone in the room is miserable and disengaged.

Silence is more dangerous than arguments. A loud, messy workshop where people disagree is almost always more productive than a quiet, polite one where everyone nods. Arguments surface misunderstandings. Silence hides them.

If your workshop is calm and everyone agrees, be suspicious. Either you’ve genuinely got alignment – which is rare and wonderful – or people are deferring to the loudest voice in the room. The facilitator’s most important job might be learning to tell the difference.

Not every problem needs a workshop. If three people can have a ten-minute conversation at a desk and walk away in agreement, do that. Workshops are for situations where the domain is complex, multiple perspectives matter, and you need a structured way to surface disagreements. Don’t workshop a problem that a conversation can solve. I’ve seen teams schedule a two-hour Event Storm for a feature that two people could have sketched on a napkin in ten minutes. That’s not rigour – it’s process addiction.

The facilitator matters more than the technique. A good facilitator running a mediocre technique will produce better results than a bad facilitator running a brilliant one. The facilitator’s job is to create safety, surface disagreements, keep things moving, and make sure the quiet people get heard. If you don’t have a good facilitator, invest in developing one. It’s the highest-leverage skill a team can have.

Workshops have a shelf life. The shared understanding you build in a workshop degrades over time. New team members join who weren’t in the room. Decisions get forgotten. Context gets lost. If a workshop produced important insights, write them down. Not a forty-page document – a one-page summary of the key decisions and open questions. Pin it where people can see it. Take photographs of the wall before you tear it down. Those photographs are worth more than any Confluence page.

Watch for workshop fatigue. If you run discovery workshops every week for two months, people stop taking them seriously. The first Event Storm is exciting. The eighth Example Map is a chore. Be deliberate about when you use these techniques and when a quick conversation will do. Discovery is a means, not a ritual.

Debrief after workshops, especially the bad ones. “What worked? What didn’t? What would we do differently?” Five minutes at the end of a session, while everything is fresh. The teams that get good at workshops are the ones that reflect on how they run them, not just on the outcomes. Your facilitation should evolve as your team evolves.

The biggest failure mode isn’t a bad workshop. It’s skipping discovery entirely. A messy, imperfect, slightly awkward Event Storm where half the team is disengaged still produces more shared understanding than opening Jira and typing stories based on a two-sentence conversation with the founder.

Even a bad workshop beats building on assumptions.

One more thing. Workshops surface problems – they don’t solve all of them. If the fundamental issue is that customers don’t want your product, no amount of Event Storming will fix that. If the team doesn’t trust each other, an Example Mapping session will be awkward and shallow. If you don’t have access to domain expertise, your Event Storm will map assumptions, not reality. These techniques are powerful, but they assume a minimum of goodwill, domain access, and a product worth building. When those prerequisites are missing, the workshops will reveal it – which is valuable – but the fix lives outside the workshop.

I’ll say that again, because it’s the most important thing in this post. The biggest risk in software development isn’t a bad workshop. It isn’t an awkward conversation about a pet feature. It isn’t a sceptical developer who doesn’t want to attend. It’s a team that skips discovery entirely because they think the problem is obvious and they should just start building.

None of these problems are reasons to stop doing discovery. They’re reasons to get better at it.

The GreenBox team’s discovery sessions weren’t perfect. Maya dominated some conversations. Tom was sceptical at first. The remote sessions lost energy. Some Example Maps produced more questions than answers.

But by the end of the process, five people who started with five different mental models of the business were working from the same picture. They knew what they didn’t know. They’d made conscious decisions about what to build first and what to defer. They’d surfaced political tensions before those tensions turned into code that couldn’t be changed.

Compare that to week three in the first post, when Tom had built a subscription system on wrong assumptions, Jas had designed a feature that didn’t match the business model, and Priya was blocked on questions nobody had thought to answer. The messy, imperfect workshops were still vastly better than no workshops at all.

That’s what discovery buys you. Not perfection – alignment.

If you’re facilitating a workshop for the first time, start with Example Mapping. It’s the shortest (twenty-five minutes), the most structured (four colours of card, one story, timer running), and it produces the most immediately useful output (concrete examples you can build from). You’ll build confidence quickly because the format constrains the chaos.

Don’t start with Event Storming. It’s the longest, the most open-ended, and the hardest to facilitate well. The room needs energy management, surgical provocation, and the confidence to redirect conversations that are going nowhere. Watch someone experienced run one first if you possibly can. Your first Event Storm will be messy regardless – but watching one first means you’ll know what “good messy” looks like versus “this is going off the rails”.

Discovery doesn’t stop

The biggest misconception about the techniques in this series is that they’re a project kickoff activity. You Event Storm at the start, Value Stream Map once, and then you’re done with discovery forever.

That’s not how it works. Discovery is ongoing. The cadence changes, but the work never stops.

Example Mapping: before every story. This is the one you do most. It’s short, it’s focused, and it catches the assumptions that would otherwise become bugs. If your team adopts nothing else from this series, adopt this.

Impact Mapping: when the metrics plateau. The GreenBox team’s first Impact Map was built around reaching 200 subscribers. When they hit 150 and growth stalls, they’ll need another one – maybe with a different goal, like reducing churn below 5% or expanding to a second delivery area. The map is a living hypothesis, not a one-time plan.

Value Stream Mapping: when something feels slow. If the team notices that stories are taking longer to ship, or that a particular part of the process is consistently painful, map it again. The bottleneck has probably moved. What was fast three months ago might be the constraint now.

Event Storming: when you’re entering new territory. The GreenBox team Event Stormed the subscription and supply domains. When they eventually tackle wholesale customers, or meal kits, or a second geographic region, they’ll need to Event Storm again – because those are new domains with new assumptions.

User Story Mapping: when you’re planning a release. Revisit the story map before each major release to check: are we shipping something coherent? Are there gaps? Has the user journey changed since we last looked?

The rhythm isn’t fixed. Some weeks you’ll do nothing but Example Mapping and coding. Some quarters you’ll step back and Impact Map, Value Stream Map, and re-plan the release. The techniques are tools, not rituals. Reach for them when you need them.

How this fits with what you already do

If your team already runs sprints, standups, and refinement sessions, you don’t need to add more ceremonies. You’re replacing the ones that aren’t working.

Sprint refinement → Example Mapping. Most refinement sessions involve someone reading a story aloud and the team vaguely nodding. Replace that with a twenty-five-minute Example Mapping session. You’ll come out with concrete examples instead of vague acceptance criteria.

Quarterly planning → Impact Mapping. Instead of arguing about which features to prioritise based on gut feel, start from the business goal and work backwards. The Impact Map gives you a defensible answer to “why are we building this?”

“Let’s sort the backlog” → User Story Mapping. Instead of dragging cards up and down a Jira board, lay out the user journey and slice it into releases. You’ll see gaps that no amount of backlog grooming would surface.

Project kickoff → Event Storming. Instead of a two-hour meeting where a product manager presents slides and everyone nods, cover a wall in sticky notes and find out what you don’t agree on. Two hours of Event Storming replaces weeks of discovering misunderstandings in code review.

You’re not adding process. You’re upgrading the process you already have. The standup stays. The sprint stays. The Jira board stays. But the conversations that feed into them get sharper.

What’s next

The GreenBox team went from building the wrong thing fast to building the right thing deliberately. They’ve got Event Storming for shared understanding, Value Stream Mapping for focus, Example Mapping for concrete stories, Impact Mapping for connecting work to goals, User Story Mapping for seeing the whole journey, and a sprint cadence that turned sticky notes into working software.

The discovery toolkit is working. The team is shipping. They hit 214 subscribers by the deadline.

  • The GreenBox Cheat Sheet — every technique in one place
  • The Planning Onion — every planning layer in one place
  • The GreenBox Story — the full series
Questions or thoughts? Get in touch.