Rework workshop

A guided workshop for discovering the root causes of rework waste and converting findings into specific, owned actions.

Recommended owner: Technical team members, led by a tech lead or senior engineer who can speak to requirements, acceptance criteria and implementation compliance.

Goals

What the team will do during the workshop:

  1. Catalog every rework case from the sprint with measured hours.
  2. Classify each case by origin (upstream, process, compliance or validation).
  3. Perform 5 Whys on the top three cases to identify root causes.
  4. Commit to specific actions with owners and expected recovery.

Impact

What the team will walk away with:

  1. A shared understanding of where rework originates in the delivery pipeline.
  2. Three concrete actions in flight, each with a named owner and a measurable outcome.
  3. A baseline for comparing rework hours next sprint.
  4. A culture shift toward engineers owning quality, rather than inspecting for it downstream.

When to run this workshop

The rework workshop earns its slot when the readout puts rework hours in the dominant or co-dominant position — usually anything above ~30% of non-value-add for the sprint. The whole exercise is built around tracing rework cases back to their origin in the delivery pipeline, so it needs real cases from the most recent sprint to chew on. Without them, the workshop devolves into hypotheticals.

Reach for it when:

  • The GembaKai readout shows rework as the top category — the team has the cases, the hours and the appetite to fix them.
  • The same kinds of rework keep showing up sprint after sprint — that’s a pattern, and the four origin categories (Upstream / Process / Compliance / Validation) are designed to expose where in the pipeline the pattern is being created.
  • Quality conversations have stalled on opinions — when “we should write better acceptance criteria” has been said three times with no change, the workshop’s measured-hours framing breaks the loop.

Don’t reach for it when the readout’s rework hours are in the trivial-many tail (small absolute numbers; Pareto won’t move), when the team can’t bring 3–5 specific cases to the session (it’ll be a guessing game) or when the dominant rework cause is already known and a single owner is already on the fix.

Audience and personas

Facilitator

Tech lead or senior engineer. Familiar with the sprint’s work and the team’s delivery pipeline. Neutral in discussion — surfaces root causes without assigning blame.

Required participants

  • Engineers who did the work that required rework
  • Product owner or BA responsible for requirements on affected items
  • QA or quality engineer (if on the team)

Optional but valuable

  • Customer representative or customer proxy (if rework was tied to requirements ambiguity)
  • Scrum master (observer, not participant)

Roles

Assign these before the workshop so everyone knows their job.

  • Facilitator: Guides discussion, enforces time-boxes, stays neutral.
  • Time-keeper: Watches the clock and signals transitions.
  • Scribe: Captures findings, root causes and commitments in a shared doc.
  • Action-item owner: Makes sure every action has a name attached before close.

The four origin categories

Every rework case will be classified into one of these four buckets. Teach them in the opening.

1. Upstream — customer involvement and requirements capture
Rework originates before the requirement was written. The customer wasn’t engaged at the right time, the problem wasn’t understood or the feature was built against an assumption that turned out to be wrong.
“We built what they asked for but it’s not what they wanted.” “Customer saw the demo and changed their mind.” “We found out mid-sprint that the stakeholder had a different use case in mind.”
2. Process — requirements scoping and acceptance criteria
Rework originates in the requirement itself. The requirement was ambiguous or incomplete, acceptance criteria were missing or vague, and the engineer had to guess.
“The ticket didn’t cover this edge case.” “AC said ‘must be performant’ but nobody said how fast.” “The story wasn’t broken down small enough and it hid complexity.”
3. Compliance — implementation drift from the written AC
Rework originates in implementation. The requirement was clear and complete but the engineer didn’t fully meet it — AC was skipped, short-cut or misread.
“I forgot to handle the 404 case that was in the AC.” “The implementation doesn’t match the design we agreed on.” “We didn’t run the acceptance tests before calling it done.”
4. Validation — value-stream drift
Rework originates in validation. The team thought they were done but couldn’t confirm it end-to-end. Drift from original intent wasn’t caught until production or UAT.
“It worked in my environment.” “Nobody realized the integration broke until the customer tried it.” “Our staging data looks nothing like production.”

Pre-homework

Send 1–2 weeks out, remind 2 days before.

Reading (for all participants)

Individual prep (~30 min)

Each engineer and PO/BA brings:

  1. A list of 3–5 specific rework cases from the sprint. Short description plus a rough hours estimate per case.
  2. For each case, a one-line guess at origin — where did the rework come from?

Team prep (done by facilitator, ~15 min)

  • Pull rework hours from the readout’s rework page.
  • Prepare a 2×2 grid (digital or physical) with the four origin-category regions.
  • Prepare the action-item template (see below).

Materials and setup

  • Whiteboard or Miro/Mural board
  • Sticky notes in four colors (one per origin category)
  • Markers and dot-vote stickers
  • Printed or shared copy of the readout’s rework page
  • Shared doc for action items

Agenda (2h 40m)

  • 0:00–0:10 Opening, readout recap, ground rules (Facilitator)
  • 0:10–0:30 Rework catalog — share cases, scribe captures (Facilitator)
  • 0:30–1:00 Classify each case into origin categories (Team)
  • 1:00–1:10 Pareto analysis on origin totals (Facilitator)
  • 1:10–1:15 Short break
  • 1:15–2:00 RCA on top three cases — 5 Whys or fishbone, per group (Groups of 3)
  • 2:00–2:15 Cluster root causes, look for patterns (Facilitator)
  • 2:15–2:35 Draft action items, assign owners (Team)
  • 2:35–2:40 Commitments, close (Facilitator)

Workshop exercises

Exercise 1 — Rework catalog (20 min)

Each participant reads their prepared cases one at a time. The scribe captures on sticky notes:

  • Case description (one sentence)
  • Hours spent on rework
  • Who did the work
  • (Leave the origin category blank for the next exercise)

Prompt the team: “Are there cases we missed? Half-day tasks that blew up? Defects we quietly absorbed?”

Exercise 2 — Classify by origin (30 min)

Lay out the four origin-category regions on the board. Walk through each sticky. For each, the facilitator asks:

  1. When did we first realize this work needed redoing?
  2. What information did we lack at the time?
  3. If that information had been available sooner, would we have avoided the rework?

Place the sticky in the category it fits best. Disagreements are valuable — capture them as separate cases rather than force-fitting.

Expected output: Hours totaled by origin category. The scribe does this live so the team can see where the mass is.
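The scribe's live totaling is simple enough to script. A minimal Python sketch — the cases and hours below are invented for illustration, not workshop data:

```python
from collections import defaultdict

# Each captured sticky: (description, rework hours, origin category).
# These cases and numbers are made up for illustration only.
cases = [
    ("Checkout flow rebuilt after demo feedback", 12, "Upstream"),
    ("Missing 404 handling that was in the AC", 2, "Compliance"),
    ("Integration break caught in UAT", 6, "Validation"),
    ("Vague performance AC reworked", 5, "Process"),
]

# Total hours by origin category, largest first.
totals = defaultdict(int)
for _description, hours, origin in cases:
    totals[origin] += hours

for origin, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{origin}: {hours}h")
```

In practice the scribe does this on the board; the point is that hours, not case counts, are what get totaled.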

Pareto analysis of origin categories (5–10 min)

Sort the four origin-category totals from largest to smallest and compute a cumulative percentage down the list. The categories that together reach ~80% are the vital few — usually one or two of the four. Circle them; they set the focus for Exercise 3.

  • Upstream (requirements capture): 14 h, 44% of total, 44% cumulative
  • Validation (value-stream drift): 11 h, 34% of total, 78% cumulative
  • Process (AC quality): 5 h, 16% of total, 94% cumulative
  • Compliance (implementation drift): 2 h, 6% of total, 100% cumulative
  • Total: 32 h

Upstream and Validation together carry 78% of the hours. In Exercise 3, the three groups each pick a case from one of those two categories. Process and Compliance cases get logged but are not the focus this sprint.

Watch for: teams will object — “but this Compliance case really upset us.” Emotional weight and hour-weight are different. Send emotional-but-small-hours cases to a side conversation.

Full method, facilitator notes and common failure modes → Pareto analysis.
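As a sanity check on the arithmetic, the sort-and-cumulate step can be sketched in a few lines of Python, using the hours from the example table above:

```python
# Hours by origin category, from the worked example above.
hours = {"Upstream": 14, "Validation": 11, "Process": 5, "Compliance": 2}

total = sum(hours.values())
cumulative = 0.0
vital_few = []
for category, h in sorted(hours.items(), key=lambda kv: -kv[1]):
    cumulative += h / total
    # The vital few: categories up to roughly the 80% cumulative mark
    # (always at least one, even if the first category alone exceeds 80%).
    if cumulative <= 0.80 or not vital_few:
        vital_few.append(category)
    print(f"{category}: {h}h ({h / total:.0%} of total, {cumulative:.0%} cumulative)")

print("Vital few:", vital_few)
```

With these numbers, Upstream and Validation land at 78% cumulative and become the vital few; Process and Compliance are the trivial-many tail.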

Exercise 3 — Root-cause analysis on top three (45 min)

Pick the three cases with the most hours from the leverage categories surfaced in the Pareto check. Break into groups of 3 (one per case).

Each group picks one of the following two techniques based on the case. They’re interchangeable for this purpose — use whichever fits the data best.

Option A: 5 Whys (best for linear causal chains)

State the rework case, ask “why did this happen?”, write the answer, then ask “why?” of that answer, and keep going. Typically 4–6 whys. Fast, and works best when the cause chain is relatively linear. Vulnerable to stopping too early or following only one branch.

Full walkthrough and common failure modes → 5 Whys.

Option B: Fishbone / Ishikawa diagram (best when multiple factors contributed)

Use when the rework has several contributing causes. Draw the fishbone head with the rework case as the “effect,” then brainstorm contributors across six software-team categories — People, Process, Tools, Data, Environment, External. Pick the one or two branches carrying the most weight and drill in with 5 Whys on those. This prevents 5 Whys from tunnel-visioning onto a single chain when multiple factors conspired.

Full walkthrough, category definitions and common failure modes → Fishbone diagram.

Share back

Each group shares its chain or diagram back to the room. The scribe captures the terminal root cause and any notable secondary causes.

Common terminal root causes by category:

  • Upstream: “The customer never reviewed the mockup.” “The demo cadence is monthly — feedback arrives too late.” “We don’t involve the customer until acceptance testing.”
  • Process: “We don’t write acceptance criteria for non-happy paths.” “AC is written by the PO and never challenged by engineering.” “Our stories aren’t small enough to expose hidden complexity.”
  • Compliance: “Our definition of done doesn’t require running all the AC.” “Code review doesn’t check against AC.” “We skip the BDD scenarios when we’re behind.” “Our definition of done wasn’t followed.”
  • Validation: “No automated end-to-end test for this flow.” “Staging doesn’t match production.” “We don’t have observability into this path.”

Exercise 4 — Cluster and look for patterns (15 min)

Put all root causes on the board. Ask: “Are multiple cases pointing at the same systemic issue?” If yes, cluster them — that cluster is the biggest lever. A single fix at the top of the pipeline often eliminates rework in multiple places downstream (see How to fix a $25 bug before it becomes a $37,500 problem).

Exercise 5 — Action items (20 min)

For each cluster and each remaining high-impact root cause, draft an action using the template below.

Action-item template

  • Problem: Specific, with measured hours.
  • Origin category: Upstream / process / compliance / validation.
  • Root cause: From 5 Whys or fishbone.
  • Action: What we will do — specific and small enough to complete in one sprint.
  • Owner: A person, not a team.
  • Expected recovery: Hours per sprint.
  • Review date: Next waste walk or earlier.

Example action

  • Problem: 12 hours this sprint rebuilding the checkout flow after customer demo feedback.
  • Origin category: Upstream.
  • Root cause: Customer sees working software once per month; by then the design is too far along to change cheaply.
  • Action: Add a mid-sprint customer review of in-progress work (30 min, every sprint), scheduled Wednesday of week 2.
  • Owner: Jane (PO).
  • Expected recovery: ~8 hours per sprint.
  • Review date: Next waste walk (6 weeks out).

Capturing results

Within 24 hours, the scribe produces:

  1. Summary of all rework cases with origin category and hours.
  2. The action-item list with owners and review dates.
  3. Root-cause themes and any systemic issues flagged.

Post to the team’s shared space and link back from the waste-walk readout page so the history is visible in context.

For any root cause that warrants tracking across multiple sprints, use the A3 problem-solving format described under Alternative and complementary exercises below — it travels better than a sticky-note capture.

Alternative and complementary exercises

Swap these in when the standard agenda doesn’t fit, when a specific origin category dominates or when the team needs a forward-looking variant. All are well-established techniques with substantial facilitation literature behind them.

Example mapping (for preventing process/compliance rework in the next sprint)

Matt Wynne’s 15–25-minute exercise for unpacking an upcoming story’s acceptance criteria before pulling it into a sprint. Participants use four card colors:

  • Yellow — the story
  • Blue — rules / acceptance criteria
  • Green — examples illustrating each rule
  • Red — questions nobody can yet answer

A story that accumulates a lot of red cards is not ready. A story with rules that don’t have examples is under-specified. Example mapping is the single highest-leverage preventive technique for AC-gap rework (Process and Compliance origin categories in this workshop).

Use it as: A recommended follow-on. After the team sees how much rework came from Process or Compliance origins, commit to running example mapping on the next 3 stories entering the sprint.

Related — event storming (below) works at the epic/domain level rather than the story level. If example mapping on individual stories isn’t catching enough — particularly when Upstream rework keeps showing up — an event storm on the upcoming domain is the larger-scope preventive move. The two techniques are complementary: example mapping for each story as it’s pulled in; event storming for each new epic or domain area before stories are even written.

Pre-mortem — forward-looking variant

Imagine next sprint has just ended and rework has spiked. Work backward from that imagined failure to surface the risks while changes are still cheap. Pairs well with this workshop’s output — a pre-mortem on next sprint asks “what haven’t we fixed yet?”

Use it as: A 20-minute extension if time remains, or a separate session two weeks out. Full walkthrough → Pre-mortem.

Fishbone subcategories for heavy domains

If rework is heavily concentrated in one origin category, scope a fishbone specifically to that category. For example, 80% Upstream → categories Customer access, Requirements elicitation, Feedback cadence, Demos/reviews, Stakeholder alignment, Decision-making authority. Produces a richer picture than 5 Whys for domain-specific patterns. See Fishbone diagram for the method.

A3 problem solving (for systemic root causes)

When the root cause is too big to resolve in a single sprint — systemic process gaps, recurring upstream failures, compliance issues that span stories — a sticky-note capture loses context over time. A3 is Toyota’s single-page problem-solving method, designed for root causes that need sustained attention across sprints, teams or leadership conversations.

Produce an A3 when the root cause meets at least one of these: requires more than one sprint’s action to close; spans multiple teams; has appeared across two or more consecutive waste walks; or will benefit from leadership visibility (budget, external dependency, organization change). Otherwise the action-item template earlier in this guide is enough.

Full structure, template and worked example → A3 problem solving.

Event storming (preventive follow-on for Upstream/Process rework)

A substantial share of rework — often the bulk of the Upstream and Process categories — comes from misaligned or missing shared understanding of the domain. When the readout shows heavy Upstream or Process rework on a specific domain (checkout, billing, onboarding), a strategic event storm on that domain before the next implementation begins usually surfaces more than any story-level exercise would. Treat it as the “scope-up” action when example mapping at story level isn’t enough — a separate half-day session with customer involvement, committed to as an action item from this workshop, not run inside it. Full technique, color scheme, mini-storm variant and when-not-to-use guidance → Event storming workshop.

Follow-up

  • Facilitator checks in at the mid-sprint mark: are actions on track? Any blockers?
  • Next waste walk: rerun the rework section of the readout and compare. Did hours drop? Did they shift categories?
  • Close the loop: celebrate actions that moved the needle. Revisit actions that didn’t — something was wrong with the root cause, the action or the owner.
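The next-readout comparison amounts to a per-category delta. A minimal sketch — all numbers here are invented for illustration, not measured data:

```python
# Rework hours by origin category for two consecutive readouts.
# These figures are illustrative only.
last_readout = {"Upstream": 14, "Validation": 11, "Process": 5, "Compliance": 2}
this_readout = {"Upstream": 6, "Validation": 9, "Process": 5, "Compliance": 3}

# Per-category change: negative means hours dropped.
for category, before in last_readout.items():
    after = this_readout.get(category, 0)
    print(f"{category}: {after - before:+d}h vs last readout")

recovered = sum(last_readout.values()) - sum(this_readout.values())
print(f"Net recovery: {recovered}h per sprint")
```

A drop in one category paired with a rise in another (Compliance, in this made-up example) is the "did they shift categories?" signal worth discussing at the next waste walk.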

References

Process guide

  • 5 Whys — the cause-chain technique used in Exercise 3 Option A.
  • A3 problem solving — the single-page format for systemic root causes that span multiple sprints.

Delivery Playbook articles

External references

  • Sakichi Toyoda, “5 Whys” root-cause analysis (Toyota Production System).
  • Kaoru Ishikawa, “Fishbone (cause-and-effect) diagram” — see ASQ’s fishbone overview for the classic 6 Ms and facilitation guidance.
  • Matt Wynne, Introducing Example Mapping (Cucumber blog) — the yellow/blue/green/red card method.
  • Alberto Brandolini, Introducing EventStorming (Leanpub, 2013) — the source text; see also the official EventStorming site.
  • Gary Klein, Performing a Project Premortem (Harvard Business Review, 2007) — see Atlassian’s pre-mortem playbook for a modern workshop adaptation.
  • Durward K. Sobek II and Art Smalley, Understanding A3 Thinking (Productivity Press, 2008) — canonical treatment of Toyota’s A3 methodology.
  • Lean Enterprise Institute, A3 templates and worked examples.
  • Vilfredo Pareto, Cours d’économie politique (1896) — the original observation (about income distribution, not quality).
  • Joseph Juran, Quality Control Handbook (McGraw-Hill, 1951) — the generalization to quality work under the “vital few vs. trivial many” framing; the 80/20 reading of Pareto is Juran’s, not Pareto’s.
  • Agile Warrior, The Agile Inception Deck.
  • Open Practice Library, Retrospective practices.