Cynefin: Not Everything Needs a Workshop

September 22, 2026 · 26 min read

GreenBox delivers weekly produce boxes to 5,000+ subscribers across two cities, with three squads and twenty-five people. The discovery techniques that saved them early on are now being applied to everything – including stories where everyone already knows the answer. Workshop fatigue is setting in.

Anika sends Charlotte a message on a Tuesday morning. It’s long, by Anika’s standards. She’s usually three sentences.

“The Melbourne squad just spent 25 minutes Example Mapping ‘update customer email address.’ No red cards. Two examples. One rule: the email has to be valid. I watched five adults sit in a room with coloured cards and a timer to collectively arrive at the conclusion that an email address has to have an @ sign. Everyone knew the answer before we started. I knew the answer before we started. Liam knew the answer before we started. The intern knew the answer before we started.”

Charlotte replies: “How’s the team feeling about workshops generally?”

“Honestly? Burned out. We’ve been running Example Mapping for every story for three months. The discipline is good when the story is complex. But most of our stories aren’t complex any more. We’ve mapped this domain to death. Half the time we’re going through the motions. People are polite about it. They show up, they do the cards, they leave. But the energy is gone. It’s like watching people who used to love cooking go through the motions of making dinner because the recipe says they have to.”

Charlotte has heard the same thing from the Perth squad. Tom mentioned last week that the Wednesday Example Mapping slot feels like a ritual rather than a tool. “We map stories where everyone already knows the answer,” he said. “Then we skip discovery for the stuff that actually needs it because nobody has the energy left.”

Meanwhile, GreenBox is about to expand to Brisbane. New city, new farms, new logistics partners, different climate zones, different customer demographics. Nobody on the team has done this before. It’s genuinely complex work that needs deep discovery – JTBD interviews, assumption mapping, maybe even a fresh Event Storm for the Brisbane supply chain.

But nobody’s doing it. The team’s discovery budget – both calendar time and mental energy – is being spent on 25-minute Example Mapping sessions for stories that don’t need them.

The wrong tool for the job

This is a pattern Charlotte has seen before. Before a team discovers workshop techniques, it under-uses discovery – skipping it and building on assumptions. Then comes a correction period where the team workshops everything. That over-correction feels responsible. It feels disciplined. But it leads to the same place: wasted effort. Just a different kind of waste.

The GreenBox teams aren’t wasting effort by building the wrong thing any more. They’re wasting effort by over-analysing the right thing. The cost isn’t as dramatic – nobody’s throwing away a month of code. But the cumulative drain on energy and engagement is real. Workshop fatigue leads to going through the motions, which leads to missing the workshops that actually matter.

Charlotte needs a way for the teams to quickly assess how much discovery a piece of work actually needs. Not everything deserves a workshop. Not everything deserves a five-minute conversation either. The question is: how do you tell the difference?

Lee is less surprised than Charlotte when she describes the problem. “Every team I’ve coached goes through this phase,” he says. “They learn the techniques, they get excited, they apply them everywhere. Then they burn out. The solution isn’t to stop doing discovery – it’s to get better at choosing when to do it. There’s a framework for that.”

Dave Snowden’s framework

Charlotte introduces the Cynefin framework at the next all-hands. Cynefin (pronounced ku-NEV-in – it’s Welsh for “habitat”) was created by Dave Snowden. It’s a sense-making framework – a way to look at a problem and determine what kind of problem it is, which then tells you what kind of approach to use.

There are four domains.

Clear (sometimes called Obvious). The relationship between cause and effect is obvious to everyone. Best practice exists. You don’t need to analyse or experiment – you just need to do the known right thing.

Updating a customer’s email address is Clear. There’s one right way to do it. The rules are known. The implementation is straightforward. If three people can agree on the answer in a five-minute conversation, it’s probably Clear.

Complicated. The relationship between cause and effect exists, but it requires expertise to see it. The answer is knowable, but you need to do some analysis to find it. Good practice exists, but you need to choose the right one for your context.

Building the allergen substitution filter is Complicated. The rules are discoverable – Maya knows them, the food safety standards define them – but there are enough interacting conditions that you need structured analysis to get it right. Example Mapping is designed for this domain. You bring the right people together, surface the rules and edge cases, and come out with concrete examples.

Complex. The relationship between cause and effect can only be seen in retrospect. You can’t analyse your way to the answer because the answer doesn’t exist yet – it emerges from trying things. The right approach is to probe, sense, and respond: run safe-to-fail experiments and see what happens.

Expanding to Brisbane is Complex. Nobody knows whether Brisbane customers want the same box sizes as Perth customers. Nobody knows which local farms will be reliable. Nobody knows whether the delivery logistics that work in Perth will work in Brisbane’s geography. You can’t Example Map your way to answers because the questions haven’t been asked yet. You need to run experiments: a pilot programme, customer interviews in the Brisbane market, conversations with local farms.

Chaotic. There’s no relationship between cause and effect that anyone can perceive. The priority is to act first, then sense what happened, then respond. You stabilise the situation before you analyse it.

A production incident where the payment system is down and customers are being double-charged is Chaotic. You don’t schedule an Example Mapping session. You don’t run a discovery workshop. You fix the immediate problem, then you figure out what happened.

Complex – probe, sense, respond
  • Brisbane expansion
Complicated – sense, analyse, respond
  • Allergen filter
  • GDPR compliance
Chaotic – act, sense, respond
  • Payment outage
Clear – sense, categorise, respond
  • Update email address

Mapping the backlog

Charlotte asks each squad to take their current backlog and sort every item into one of the four domains. Not precisely – roughly. A five-minute exercise per squad.

The results are eye-opening.

The Perth squad has 34 stories in their backlog. When they sort them:

  • Clear: 21 stories (62%)
  • Complicated: 8 stories (23%)
  • Complex: 4 stories (12%)
  • Chaotic: 1 story (3%) – a lingering production issue they haven’t resolved

Sixty-two per cent of their stories are Clear. Stories like “add a phone number field to the customer profile,” “update the farm payment report to include GST,” “fix the timezone display on the delivery tracker.” Each one had been getting a 25-minute Example Mapping session. Each session confirmed what everyone already knew.

They’ve been running Example Mapping sessions for all of them. At 25 minutes a session, those 21 Clear stories add up to nearly nine hours of workshops for work that could have been handled with a five-minute chat each.
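
The arithmetic is easy to check. A minimal sketch in Python – the domain tags are hypothetical, and the counts mirror the Perth sort above:

```python
from collections import Counter

# Hypothetical domain tags mirroring the Perth squad's backlog sort.
backlog = ["clear"] * 21 + ["complicated"] * 8 + ["complex"] * 4 + ["chaotic"] * 1

SESSION_MINUTES = 25  # one Example Mapping session per story

counts = Counter(backlog)
for domain, n in counts.most_common():
    print(f"{domain:12} {n:3} stories ({n / len(backlog):.1%})")

# Workshop time spent on stories everyone already understood.
print(f"Clear-story workshops: {counts['clear'] * SESSION_MINUTES / 60:.1f} hours")
```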

Melbourne’s numbers are similar. The remote squad, which handles more greenfield work, has a higher proportion of Complex stories – but even they have a majority of Clear work.

The pattern is obvious once you see it. GreenBox has been operating for over a year. The core domain is well-understood. Most of the day-to-day work is maintenance, incremental improvements, and features in well-mapped territory. The teams have internalised the domain knowledge through months of Event Storms, Example Maps, and delivery. The discovery techniques that were essential in month one are overkill for most of the work in month fourteen.

This is actually a sign of success. If most of your work is Clear, it means your team understands the domain deeply. The Event Storms, Example Maps, and decision tables did their job – they transferred knowledge from the domain experts to the whole team. The irony is that the better your discovery process works, the less often you need to use it for routine work. The team has graduated from needing structured discovery for everything to needing it selectively, for the genuinely hard stuff.

Charlotte’s rule of thumb

Charlotte boils it down to four sentences that she prints and pins on the wall of each squad’s area:

If three people agree in a five-minute conversation, it’s Clear. Don’t workshop it.

If an expert can figure it out with some research, it’s Complicated. Example Map it.

If nobody knows the answer and you need to try something to learn, it’s Complex. Experiment.

If the building is on fire, it’s Chaotic. Act.

The adjustment

The teams adjust their process. The workshop calendar gets lighter immediately.

Clear stories get a quick conversation during planning. The developer picks it up, builds it, gets it reviewed. No workshop. No formal discovery. If questions arise during implementation, the developer asks someone – but that’s a conversation, not a ceremony.

This is a bigger deal than it sounds. The GreenBox teams had been treating every story the same way: plan it, Example Map it, build it, ship it. The Example Mapping step was adding 25 minutes per story, and with each squad handling four to six stories a week, that’s up to two and a half hours of workshops every week. For Clear stories – the ones where everyone already knows the answer – those hours produced nothing new. The team went in, confirmed what they already knew, and came out with exactly the understanding they started with. That’s not discovery. That’s ceremony.

Complicated stories keep the Example Mapping treatment. These are the stories where the rules are knowable but non-obvious, where multiple conditions interact, where domain expertise matters. The allergen filter. The seasonal substitution logic. The payment retry policy. These benefit from getting the right people in a room with coloured cards and a timer.

The team notices something: the Complicated sessions get better when they’re not diluted by Clear ones. When every session matters, people show up with more energy and more preparation. The cards are sharper. The questions are better. The red cards that surface are genuinely surprising, not just confirming what everyone suspected.

Complex challenges get the full discovery treatment – but the right discovery treatment. JTBD interviews for understanding customer behaviour. Assumption mapping for identifying what the team believes but hasn’t validated. Safe-to-fail experiments for testing hypotheses. The Brisbane expansion gets a dedicated discovery track with its own cadence, separate from the sprint rhythm.

Chaotic situations get incident response, not workshops. Fix the immediate problem, then schedule a post-incident review to understand what happened and prevent recurrence. The post-incident review often reclassifies the problem: what was Chaotic in the moment (the payment system is down, act now) becomes Complicated in retrospect (why did it go down, and what do we change to prevent it?).

Charlotte is careful to note what Cynefin doesn’t say. It doesn’t say “Clear stories don’t matter.” It doesn’t say “skip quality for routine work.” The developer still writes tests. The code still gets reviewed. The deployment still follows the standard pipeline. What changes is the front-end: how much structured discovery happens before the developer starts building. For Clear work, the answer is “a five-minute conversation.” For Complicated work, “a 25-minute Example Mapping session.” For Complex work, “a series of experiments over multiple weeks.” The downstream quality processes stay the same regardless.

This distinction matters because some team members initially interpret Cynefin as permission to cut corners. Ravi, during the first week, picks up a Clear story and ships it without any code review. “You said don’t over-engineer it,” he tells Charlotte. She corrects him: “Cynefin is about discovery process, not delivery process. Discovery for Clear work is minimal. Delivery standards don’t change.”

The result: the workshop calendar drops by roughly 60%. The Example Mapping sessions that remain are for stories that genuinely need them. The energy the teams were spending on going-through-the-motions workshops is redirected to the Complex work that actually needs deep thinking.

Anika messages Charlotte after the first week. “The Melbourne squad ran one Example Mapping session this week instead of four. It was the session for the farm contract renewal flow. It was a good one – we found six edge cases nobody had thought about. The other three stories were Clear and we just built them. The team’s energy is completely different.”

Tom puts it more bluntly at the Perth retro. “For the first time in months, I don’t dread Wednesdays. The sessions we run now are actually useful. Before, I was sitting through 25 minutes of people agreeing with each other so that we could feel responsible about process. Now I sit through 25 minutes of genuine discovery and come out knowing something I didn’t know before.”

The Brisbane experiment

The most immediate payoff is the Brisbane expansion. Now that the teams aren’t spending all their discovery energy on Clear work, Charlotte and Lee carve out a proper Complex discovery track.

Lee runs a round of JTBD interviews with potential Brisbane customers. The findings challenge assumptions: Brisbane customers care more about organic certification than Perth customers (Perth cares more about “local”). The box sizes that work in Perth might not work in Brisbane – average household size is different, and Brisbane has a larger proportion of single-person households who want a smaller, cheaper box.

One Tuesday interview catches something else. A Brisbane prospect – a woman named Jen, late twenties, lives alone in a townhouse in New Farm – mentions she already gets a produce box from “a Sydney company” but she’s unhappy because the boxes are too large for one person and half the produce goes to waste. “I end up composting the kale every week,” she says. “I feel guilty about it, but there’s only so much kale one person can eat.”

The interviewer asks which company. Jen says the name: Freshly. Charlotte, reading the interview summary that afternoon, goes still. Freshly is expanding into Brisbane. They’re already there. GreenBox’s discovery cadence has just caught something that would have been invisible for weeks otherwise.

None of that would have come out of an Example Mapping session. Example Mapping assumes you know the rules and need to make them concrete. JTBD interviews are for when you don’t even know the rules yet.

The Brisbane team designs a series of safe-to-fail experiments: a four-week pilot with a smaller box option, partnerships with two local farms, and a simplified logistics model. If the pilot works, they’ll scale. If it doesn’t, they’ll learn and adjust. That’s the probe-sense-respond cycle that Complex work requires.

But the first experiment produces contradictory data. The “friends and family” pilot with twenty Brisbane households comes back with feedback that makes no sense at first glance. Half the participants say the boxes are too large. Half say they’re too small. The team stares at the numbers in a Wednesday morning meeting and nobody can reconcile them.

Charlotte looks at the data for a long time. Then she says: “This might not be Complex in the way we thought. We’re treating one customer segment as one thing. But we’re lumping singles and families together.”

She pulls the feedback apart by household size. The pattern is immediate. Every single-person household said the boxes were too large. Every family said they were too small. There’s no contradiction – there are two different customer segments with opposite needs, and the team had been averaging them into nonsense.
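
Here’s what that segmentation looks like in code. A minimal sketch with pandas, using hypothetical column names and a made-up ten-household sample shaped like the pilot data:

```python
import pandas as pd

# Hypothetical pilot feedback: household size and box-size verdict.
feedback = pd.DataFrame({
    "household_size": [1, 1, 1, 4, 3, 1, 5, 4, 1, 3],
    "verdict": ["too large", "too large", "too large", "too small", "too small",
                "too large", "too small", "too small", "too large", "too small"],
})

# Averaged together, the verdicts cancel out into noise.
print(feedback["verdict"].value_counts())

# Segmented by household size, the contradiction disappears.
feedback["segment"] = feedback["household_size"].map(
    lambda n: "single" if n == 1 else "family"
)
print(feedback.groupby("segment")["verdict"].value_counts())
```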

The Cynefin classification was right – Brisbane expansion is Complex, requiring experiments. But the experiment design was wrong. The team had designed for one customer type. Brisbane has at least two, and they want different things. The framework pointed them to the right domain, but the execution needed refinement. Frameworks tell you which direction to look, not what you’ll see.

“Safe-to-fail” is the key phrase. In the Complex domain, you design experiments that are small enough to survive failure. The Brisbane pilot costs four weeks of one squad’s time and partnerships with two local farms. If it fails – if Brisbane customers don’t want the product, or the logistics don’t work, or the farms can’t deliver reliably – GreenBox loses a month of effort, not six months. The learning is worth more than the cost. Compare that to the alternative: spend six months building a full Brisbane operation on untested assumptions, launch, and discover that single-person households wanted a box half the size of what you built.

Charlotte marks the Brisbane backlog items differently in the project tracker – a yellow tag for Complex work, to remind the team that these items need experiments, not workshops. It’s a small thing, but it prevents the default behaviour of treating every new story the same way.

Cynefin and LLMs

There’s a natural connection between Cynefin and how the team uses LLMs, and Charlotte makes it explicit.

For Clear work, the LLM is a solo implementation partner. One developer, one LLM session, straightforward instructions. “Add a field to the customer profile for email preferences.” The LLM generates the migration, the form field, the validation. Done.

For Complicated work, the LLM is part of the ensemble. The team Example Maps the story, then uses ensemble programming with the LLM to implement it. The LLM types while the team navigates. Multiple perspectives catch the edge cases and domain nuances that a solo session would miss.

For Complex work, the LLM is a research assistant. It transcribes JTBD interviews, synthesises customer feedback, drafts hypotheses, and helps design experiments. It can’t tell you what Brisbane customers want – that’s unknowable without talking to them. But it can help you structure the research, spot patterns across interviews, and draft the questions for the next conversation.

For Chaotic work, the LLM is a rapid responder. When the payment system is down, the LLM helps debug by analysing error logs, suggesting root causes, and generating fixes. Speed matters more than thoroughness. You fix first and understand later.

The classification of the work determines not just which discovery technique to use, but how to use the LLM. That’s a more nuanced relationship with AI tooling than “use the LLM for everything” or “don’t trust the LLM.” The right question isn’t whether to use the LLM – it’s how to use it for this type of problem.

Domain        Discovery approach    LLM role                           Example
Clear         Quick chat            Solo implementation partner        Update email address
Complicated   Example Mapping       Ensemble driver (team navigates)   Allergen substitution filter
Complex       Experiments, JTBD     Research assistant, synthesis      Brisbane expansion
Chaotic       Incident response     Rapid debugging partner            Payment outage
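
The table is small enough to encode directly. A minimal sketch of it as a planning-time lookup – the names are hypothetical, not anything GreenBox actually runs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Playbook:
    discovery: str
    llm_role: str

# The Cynefin table as a lookup, consulted when a story is classified at planning.
PLAYBOOKS = {
    "clear": Playbook("quick chat at planning", "solo implementation partner"),
    "complicated": Playbook("25-minute Example Mapping session",
                            "ensemble driver while the team navigates"),
    "complex": Playbook("safe-to-fail experiments and JTBD interviews",
                        "research assistant and synthesiser"),
    "chaotic": Playbook("incident response, then post-incident review",
                        "rapid debugging partner"),
}

def plan(story: str, domain: str) -> str:
    playbook = PLAYBOOKS[domain]
    return f"{story}: {playbook.discovery} (LLM as {playbook.llm_role})"

print(plan("Update email address", "clear"))
print(plan("Allergen substitution filter", "complicated"))
```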

The table looks simple. Getting the team to use it consistently takes practice. The default behaviour – treating everything the same way – is deeply ingrained. It takes about four weeks of deliberate classification at the start of each planning session before it becomes habit. After that, the squads start classifying instinctively. Someone writes a story and immediately says, “This is Clear, just build it.” Or “This is Complicated – we should Example Map it Wednesday.” The framework becomes vocabulary rather than process.

The mis-classification risk

Cynefin has one major failure mode, and it hits the GreenBox team within a fortnight.

Ravi classifies a story as Clear: “Implement GDPR compliance for EU customers.” His reasoning: GDPR is a regulation. The rules are written down. Just follow them.

Charlotte pushes back hard. “Which data? We store customer addresses, payment tokens, dietary preferences, allergen profiles, delivery history, and feedback comments. Which of those are personal data under GDPR? All of them? The dietary preferences? What about aggregated data – if we report that 60% of Brisbane customers prefer organic, is that personal data?”

Ravi starts to see the problem.

“And which rules?” Charlotte continues. “Right to erasure – but we have legal obligations to keep financial records for seven years. Right to data portability – in what format? Right to object to processing – but the subscription literally requires processing their data to deliver a box. Where are the boundaries?”

Ravi recategorises: Complicated, at minimum. Possibly Complex for the parts where legal interpretation is uncertain and they’ll need to experiment with different approaches.

The mis-classification risk is the main danger with Cynefin. If you classify something as Clear when it’s actually Complicated, you skip discovery and build on assumptions – the original GreenBox failure mode from month one. If you classify something as Complicated when it’s actually Complex, you run workshops looking for answers that don’t exist yet, and the workshops feel frustrating and unproductive because the technique doesn’t match the domain.

Charlotte establishes a check. During planning, after the squad has classified each story, she asks: “What would have to be true for this to be in a different domain?” It’s a forcing question. If the team can’t articulate why a story is Clear rather than Complicated, it probably isn’t Clear.

The other common mis-classification goes the other direction: treating Complicated work as Complex. This wastes time in a different way. If the answer is knowable through analysis – if an expert could figure it out by examining the rules and edge cases – you don’t need an experiment. You need an Example Mapping session. Running experiments for Complicated problems is like commissioning a clinical trial to answer a question the textbook already covers. The answer is there. You just need to look.

Kai suggests another heuristic: “If the LLM can implement it from a one-paragraph description without asking any clarifying questions, it’s probably Clear. If the LLM asks questions, it’s at least Complicated. If the LLM’s questions reveal that we don’t know the answers, it’s Complex.”

It’s a neat trick. The LLM becomes a litmus test for problem complexity. Not because the LLM understands the domain – but because its need for specificity reveals how much ambiguity is hiding in the requirement. Charlotte tests it by feeding the GDPR story to an LLM: “Implement GDPR compliance for our produce box subscription service.” The LLM immediately asks twelve clarifying questions. Ravi looks at the list and laughs. “OK, not Clear.”
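
Kai’s litmus test is simple enough to automate as a rough pre-planning check. A minimal sketch using the OpenAI Python SDK – the prompt wording, model name, and question-count thresholds are all assumptions, not the team’s actual setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "A developer hands you this story to implement. Reply with only the "
    "clarifying questions you would need answered first, one per line. "
    "If you have none, reply NONE."
)

def litmus_test(story: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model will do
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": story},
        ],
    )
    text = response.choices[0].message.content or ""
    questions = [line for line in text.splitlines() if line.strip().endswith("?")]
    # Hypothetical thresholds – calibrate against stories you've already classified.
    if not questions:
        return "probably Clear"
    if len(questions) <= 4:
        return "at least Complicated"
    return "possibly Complex – check whether anyone can answer the questions"

print(litmus_test("Implement GDPR compliance for our produce box subscription service."))
```

The count is a proxy, not a verdict. The team still reads the questions, as Ravi did with the GDPR list – what matters is whether anyone in the room can answer them.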

Disorder: the hidden fifth domain

There’s a part of Cynefin that Charlotte doesn’t mention at the all-hands but discusses with the squad leads privately: the fifth domain, Disorder.

Disorder isn’t a type of problem. It’s the state of not knowing which domain you’re in. It’s the default state for most work before someone thinks about it. And it’s dangerous because people in Disorder tend to approach problems using whatever method they’re most comfortable with, regardless of whether it fits.

Tom, who is comfortable with solo LLM sessions, defaults to treating everything as Clear: just build it, figure it out as you go. Maya, who loves domain workshops, defaults to treating everything as Complicated: let’s map it, let’s Example Map it, let’s bring everyone together. Lee, who does deep discovery research, defaults to treating things as Complex: let’s interview customers, let’s run experiments.

None of them are wrong about their preferred approach. They’re wrong about when to use it. Cynefin gives the team a shared language for matching the approach to the problem instead of matching the problem to whatever approach feels most natural.

The classification step – sorting work into the four domains before deciding how to approach it – is the moment the team moves from Disorder into one of the four domains. It only takes five minutes. But those five minutes prevent hours of misapplied effort.

The compounding effect

Three months after adopting Cynefin, Charlotte reviews the metrics.

The number of formal discovery sessions per month has dropped by 60%. But the quality of those sessions has gone up. The stories that do get Example Mapped are genuinely Complicated – the sessions are faster, more focused, and produce more useful output because the team isn’t fatigued from workshopping trivial stories.

The Complex work – Brisbane expansion, a new B2B meal kit offering, partnerships with restaurant chains – is getting proper attention for the first time. Lee is running fortnightly JTBD interviews. The teams are designing experiments instead of building features and hoping. The hit rate on new initiatives is improving because the team is probing before committing.

And the Clear work is flowing faster than ever. Without the overhead of unnecessary workshops, straightforward stories move from planning to production in hours rather than days. The delivery cadence has tightened without anyone feeling rushed.

Tom, who was initially sceptical about yet another framework, becomes one of its biggest advocates. “Cynefin gave me permission to just build things when the answer is obvious,” he says. “And it also told me to slow down when the answer isn’t obvious. I used to feel guilty about skipping discovery. Now I know when to skip it and when to lean in.”

The remote squad reports the most dramatic change. Before Cynefin, they were spending half their workshop time on Clear stories that everyone understood. After Cynefin, that time is redirected to the Brisbane expansion work – the genuinely Complex stuff that needs JTBD interviews and experiments. “We’re doing less discovery overall,” says their squad lead, “but the discovery we’re doing is ten times more valuable.”

The productivity numbers back this up. Across all three squads, the average lead time for Clear stories drops from three days to one and a half – because the stories aren’t waiting for a workshop slot. The quality of Complicated sessions improves – fewer sessions produce zero red cards, because the team only runs sessions when they expect to find something new. And the Complex work, which had been languishing in the backlog because nobody had energy for it, finally gets proper attention.

Maya puts it differently. “We spent a year learning how to do discovery. Cynefin taught us when to do it. That’s just as important.”

The domain shift

There’s a subtlety to Cynefin that Charlotte explains to the squad leads over lunch one day. Problems don’t stay in one domain forever. They move.

When GreenBox first built the subscription system, it was Complicated. Multiple rules, interacting conditions, domain expertise required. Now, fourteen months later, it’s Clear. The rules are known, the edge cases are mapped, the team has built it and maintained it and understood it thoroughly. A new story in the subscription domain is almost certainly Clear – the team has done the discovery already.

But when GreenBox expands to Brisbane, the subscription system becomes Complex again in that context. Different customer demographics, different pricing expectations, different competitors. The feature might be technically the same code, but the business questions around it are new.

This is why classification needs to happen per story, not per product area. The subscription system isn’t universally Clear. It’s Clear in Perth, where the team has deep understanding. It’s Complex in Brisbane, where they don’t. The Cynefin label depends on how much the team understands, not on how complex the code is.

Ravi spots a real-world example. “Delivery scheduling in Perth is Clear. We’ve been doing it for a year, the logistics are mapped, the courier routes are optimised. But delivery scheduling in Brisbane is Complicated – we need to figure out the route optimisation for a different geography with different traffic patterns and different courier partners. Same feature, different Cynefin domain.”

That’s exactly right. And it means the classification conversation is never done. As the team learns more, Complex work becomes Complicated, and Complicated work becomes Clear. As the team enters new territory, Clear work becomes Complex again. The domains are relative to the team’s current understanding, not absolute properties of the work itself.

The meta-lesson

The framework doesn’t replace any of the discovery techniques the team has learned. It sits above them, as a meta-tool for choosing which tool to reach for. Event Storming for exploring new domains. Example Mapping for making complicated stories concrete. JTBD and experimentation for complex challenges. And a five-minute conversation for the stuff that’s just obvious.

Lee, hearing about the Cynefin adoption from Charlotte, makes an observation. “This is the maturity curve. In the beginning, teams skip all discovery. Then they discover workshops and do too much. Then they learn to match the approach to the problem. Most teams never take that last step – they get stuck in ‘workshop everything’ mode because it feels safer than making a judgement call about when not to. Cynefin gives them permission to make that call.”

He pauses. “The irony is that Cynefin itself is a discovery technique. You’re discovering which discovery technique to use. It’s meta-discovery. But it’s the shortest technique in the whole toolkit – five minutes in a planning session – and it pays for itself immediately.”

Charlotte adds one final observation. “The best thing about Cynefin is that it validates what experienced people already know intuitively. Tom has been saying ‘just build it’ for months about certain stories. He was right. But without a shared framework, ‘just build it’ sounds like laziness. With Cynefin, ‘just build it – it’s Clear’ is a reasoned decision. It respects the team’s accumulated understanding instead of treating every story as if nobody’s ever thought about it before.”

Not everything needs a workshop. The discipline is knowing which things do.

The teams now know which problems need deep discovery and which need a quick conversation. But there’s one category of thinking they’re not doing at all – not for Clear work, not for Complex work, not for any of it. Nobody is systematically asking what could go wrong. What happens when the LLM writes code that works perfectly but logs credit card numbers to a debug console? That’s next (coming 13 October).

Questions or thoughts? Get in touch.