Impact mapping
Gojko Adzic's four-level visual model — Why → Who → How → What — for connecting deliverables back to business goals. Prevents teams from shipping features no one asked for by making the causal chain from business objective to implementation artifact explicit and testable.
What it is
Gojko Adzic published Impact Mapping: Making a Big Impact with Software Products and Projects in 2012, building on ideas from his earlier work on specification by example and on the goal-oriented practices he saw succeeding across consulting engagements. The book is short (about 86 pages) and the technique itself fits on one page — the signature strength of impact mapping is its simplicity.
An impact map is a mind-map drawn left-to-right through four levels:
- Why (goal) — a single business objective, specific and measurable. “Increase monthly active users by 20%,” not “be better.”
- Who (actors) — the people whose behavior must change for the goal to be achieved. Customers, users, internal teams, partners.
- How (impacts) — the behavior change each actor must make. What does a customer now do that they didn’t before? What does an internal team now do?
- What (deliverables) — the features, capabilities or changes the organization will ship to create that behavior change.
Drawn as a tree, the Why sits at the far left as the single root. Actors branch off the goal. Each actor has impacts branching from them. Each impact has one or more deliverables branching from it. The right-hand edge is where the team-level backlog lives; everything to its left is the rationale.
The discipline is bidirectional:
- Top-down: build the tree left-to-right. What’s the goal? Who matters? How must each actor change? What must we build?
- Bottom-up: audit existing work by tracing it right-to-left. Does this feature trace back to a real impact? Does that impact matter to a real actor? Does that actor matter to the goal?
Items that can’t trace to a root are candidates for the cut list. This is what makes impact mapping distinct from roadmapping or backlog planning: it’s a causality tool, not a sequencing tool. A feature that can’t justify itself through the four levels is a feature whose justification is either hidden in an unexamined assumption or missing entirely.
Impact mapping pairs naturally with OKRs. The Why is the Objective; the combination of Who + How becomes the Key Results; deliverables are the work that makes the key results happen. Teams that already use OKRs sometimes discover their existing objective isn’t specific enough to produce a usable Why, which is itself a useful outcome.
When to use it
Reach for impact mapping when:
- A team keeps shipping features that don’t move the needle and no one can quite explain why a given feature is on the roadmap. Impact mapping surfaces the missing rationale.
- The OKR process has stalled — objectives feel aspirational but key results don’t connect to actual team work. Impact mapping bridges the gap with the How/What layers.
- Multiple teams are working on adjacent features without a shared goal. Mapping their deliverables back through impacts and actors to a shared goal (or surfacing that there isn’t one) is a high-value alignment exercise.
- You’re at the start of a new initiative and the roadmap is being written. Running impact mapping before the first sprint prevents most of the “why are we building this?” rework that shows up two quarters in.
- A stakeholder keeps asking for features and the team isn’t sure how to say no or redirect. Impact mapping gives the team a language: “that’s a What — what impact does it produce, for whom, against which goal?” The stakeholder either finds the path or drops the ask.
Don’t reach for impact mapping when:
- The team is executing on already-validated work. Impact mapping is a planning tool, not an execution tool. Mid-sprint, reach for example mapping or 5 Whys, not impact mapping.
- You’re at story-level scope. Impact mapping operates at program- or quarter-level. For individual stories use example mapping or event storming.
- The team is unwilling to name a specific goal. Impact mapping requires the Why to be concrete and measurable. “Be better” isn’t a Why; it’s a vibe. If leadership can’t articulate the goal specifically, surface that first — the mapping exercise won’t produce useful output until the goal is real.
- The organization is allergic to the question. In some cultures, asking “why are we building this?” about a pet project produces political friction rather than insight. Know your audience; consider whether surfacing the causal chain will produce the intended clarifying effect or a defensive response.
How to run it
Total time: 90 minutes to half a day. Shorter sessions work for narrow scopes (a single feature’s rationale); half-day sessions cover program- or quarter-level planning.
Set the Why (10–15 min). A single goal, written at the far left of the wall. Specific, measurable, time-boxed where possible. “Increase monthly active users from 100k to 120k by end of Q2.” The team has to agree on this before anything else; if there’s disagreement, stop and resolve it — an impact map built on a fuzzy goal produces fuzzy outputs.
If the goal-setting itself is the hard work, consider running OKR planning first, then impact-mapping against the outcome.
Identify the Whos (15 min). Who are the actors whose behavior matters for this goal? Branch them off the Why. Typical actor categories:
- Primary actors — the customers, users or direct beneficiaries of the change.
- Supporting actors — internal teams, partners, vendors who enable the primary actor’s behavior.
- Hostile actors — actors whose behavior works against the goal (competitors, malicious users, regulators). Surprisingly useful: the hostile-actor branch often reveals risks the team was ignoring.
Be specific. “Users” is too broad; “new users in the first 30 days” and “returning users who haven’t engaged in 60 days” are distinct actors with different impact needs.
Explore Hows (30 min). For each actor, what behavior change is needed for the goal to be achieved? What do they do now that you want them to do differently? Branch Hows off each Who.
- “New users complete the sign-up flow without dropping out at step 3.”
- “Returning users come back within 30 days of a prior session.”
- “The support team resolves the top 5 ticket categories without escalation.”
The Hows are impacts on behavior, not features. The distinction is load-bearing — impacts are what the actor does; features are what we build to enable the impact.
Connect to Whats (30 min). For each impact, what deliverables would create that behavior change? Branch Whats off each How.
- “New users complete the sign-up flow without dropping out at step 3” → “Redesign step 3 to require fewer fields,” “Add progress indicator,” “Pre-fill known fields from prior context,” “Run an A/B test on step 3 layouts.”
Every What must have a visible path back to a How, a Who and a Why. If a team member proposes a What that doesn’t trace, ask them to articulate the path. Often the exercise surfaces a missing How or a missing Who — in which case, add them. Sometimes it surfaces that the What doesn’t actually matter for this goal — in which case, cut it.
Audit existing work (20 min). Take the current roadmap or backlog and ask: does each item trace through the four levels on this map? Mark:
- Traces cleanly → valid work. Keep.
- Traces ambiguously → clarify or cut. Which impact does it serve? Who does it help?
- Doesn’t trace → cut candidate. Either it belongs to a different goal (which should have its own map) or it doesn’t belong anywhere.
Don’t cut on the spot; surface the list and let the product owner own the decision. The impact map is input to the prioritization conversation, not a replacement for it.
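Mechanically, the audit is a three-way partition of the backlog. A minimal sketch, with the caveat that deciding which items are “ambiguous” is human judgment the code can only record, not make (the function and bucket names are assumptions for illustration):

```python
# Sketch of the backlog audit as a three-way partition.
# The "ambiguous" set is a human judgment captured as input, not computed.
def audit(backlog: list[str], traceable: set[str],
          ambiguous: set[str]) -> dict[str, list[str]]:
    """Partition backlog items into keep / clarify / cut buckets."""
    buckets: dict[str, list[str]] = {"keep": [], "clarify": [], "cut": []}
    for item in backlog:
        if item in traceable:
            buckets["keep"].append(item)      # traces cleanly
        elif item in ambiguous:
            buckets["clarify"].append(item)   # plausible link, not measured
        else:
            buckets["cut"].append(item)       # no path to any How on this map
    return buckets

result = audit(
    backlog=["Onboarding redesign", "Loading performance", "Dark mode"],
    traceable={"Onboarding redesign"},
    ambiguous={"Loading performance"},
)
# "Dark mode" lands in the cut bucket: no path to any How on this map.
```

The cut bucket is the product owner’s input list, exactly as above: surfaced, not auto-deleted.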
Commit to experiments (20 min). The right-most level (Whats) is where team-level work lives, but a mature impact-mapping practice treats Whats as hypotheses — experiments that might or might not produce the intended impact. For each high-priority What, define:
- What we’ll measure to see if the What produces the How.
- By when we expect to see the measurement.
- What we’ll do if the measurement says the What didn’t work.
This framing prevents the common failure mode of shipping features and moving on without checking whether they actually changed behavior.
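The three commitments per What can be captured as a simple record. The book does not prescribe a schema; the field names below are hypothetical, chosen to mirror the measure / by-when / if-not list above:

```python
# Illustrative sketch: a What treated as a hypothesis with a measurement plan.
# Field names are assumptions, not a schema from the book.
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    deliverable: str      # the What being shipped
    intended_impact: str  # the How it should produce
    metric: str           # what we'll measure
    check_by: date        # when we expect to see movement
    if_not: str           # what we'll do if the metric doesn't move

exp = Experiment(
    deliverable="Redesign step 3 of sign-up",
    intended_impact="New users complete sign-up without dropping at step 3",
    metric="Step-3 completion rate",
    check_by=date(2025, 6, 30),
    if_not="Revert and A/B test a shorter form",
)

def needs_review(e: Experiment, today: date) -> bool:
    """An experiment past its check-by date with no review is the
    ship-and-move-on failure mode this framing is meant to prevent."""
    return today > e.check_by
```

A quarter-end ritual of walking every `Experiment` and asking whether the metric moved is what keeps Whats honest as bets rather than guaranteed outputs.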
Worked example
A product team is responsible for a subscription-based fitness app. The company’s quarterly business goal: grow monthly active users from 80k to 100k by end of Q2. The team runs a half-day impact-mapping session.
Why: MAU 80k → 100k by end of Q2.
Whos (actors):
- New free-trial users (people who signed up but haven’t completed first workout)
- Existing paid subscribers who haven’t opened the app in 14+ days (lapsing actors)
- Power users (5+ sessions/week) — candidates for advocacy
- Support team — could reduce churn from frustration-driven cancellations
- Competitors — hostile actor
Hows (per actor):
| Actor | Impact |
|---|---|
| New trial users | Complete the first workout within 48 hours of signup |
| Lapsing subscribers | Return to the app within the next 30-day billing cycle |
| Power users | Invite at least one friend per quarter |
| Support team | Retain the customer in 50% of cancellation requests |
| Competitors | Fail to differentiate on the features we’re improving |
Whats (per impact, illustrative subset):
| Impact | Deliverable candidates |
|---|---|
| First workout in 48 hours | Onboarding flow redesign; day-2 reminder email; buddy-match feature for first workout; 3 pre-curated “starter” routines visible on first launch |
| Return within 30 days | Re-engagement push notification based on prior session type; personalized “pick up where you left off” UI; limited-time content offered exclusively to lapsing users |
| Invite a friend | In-app referral flow with one-tap share; referral reward for both sides; post-workout social share with challenge link |
| Support resolves cancellations | Cancellation-flow UX with retention offer; support tooling that surfaces user’s engagement history; permission to offer one-off discounts |
| Competitor differentiation | Content partnership with a high-profile trainer; hardware integration with a major fitness-wearables brand |
Backlog audit results. The team runs their existing roadmap (18 items) against the map:
- Traces cleanly: 11 items. Most of these are variations on onboarding and re-engagement work — consistent with the Whos that dominate the MAU goal.
- Traces ambiguously: 4 items. Need clarification. Two are “improve loading performance” initiatives whose impact-link is plausible but not measured. The PO owns an exercise to either tie them to a measurable How (reduce abandonment during a specific workout type) or deprioritize them.
- Doesn’t trace: 3 items. A “dark mode” feature (user request, no impact on any actor’s behavior), a refactor of the subscription billing system (infrastructure, doesn’t belong on this map — deserves its own Why), and a new marketing-site redesign (belongs to marketing’s map, not product’s).
Commitments.
- The onboarding redesign is the #1 priority because two Hows (first workout, 30-day return) depend on it.
- The refactor of the subscription billing system gets its own impact map with a different Why (“reduce billing-related support volume by 50%”).
- Dark mode is deferred. Not killed — may trace to a future goal — but doesn’t earn priority under the MAU goal.
- Each committed What gets a measurement plan: what data, by when, from whom. Failed experiments get surfaced at quarter-end and inform the next map.
Without the impact map, the team would likely have kept all 18 items in flight, shipped dark mode because someone asked loudly, and ended Q2 having improved many things but not meaningfully moved MAU.
Common failure modes
- Vague Whys. “Grow the business” isn’t a goal; “increase MAU by 20% in Q2” is. Without a measurable Why, the whole map drifts. Fix: push until the Why is specific, measurable and time-boxed. If leadership resists, surface that as the blocker.
- Confusing Hows with Whats. “Build an onboarding flow” is a What. “Users complete sign-up in under 3 minutes” is a How. The distinction matters because Whats without Hows leave the behavior-change layer invisible — the team ships the feature and never checks whether it actually changed behavior. Fix: for each proposed How, ask “what behavior is changing, by whom?” If the answer is “we’re shipping X,” that’s a What, not a How.
- Skipping the backlog audit. The map is built but the existing roadmap is left alone. The audit is where the work happens — where you cut the deliverables that can’t trace. Without it, impact mapping becomes theatrical; the team looks aligned but the backlog is still a pile of miscellaneous asks.
- Treating Whats as guaranteed outputs. A What is an experiment — a bet that it produces the How. Without a measurement plan, the team ships the feature, declares victory, and doesn’t check whether the How actually materialized. Fix: every priority What gets a measurement + by-when + if-not-then plan.
- Too many Whys on one map. One goal per map. If the team has three goals, they need three maps — or they need to pick the dominant one and map it first. A map with three Whys at the root is a map of three different products.
- Using impact mapping to justify after-the-fact. Retrospectively building an impact map to show why existing features “trace cleanly” to a goal is rationalization, not planning. The tool’s value is in driving future decisions, not defending past ones. If you’re impact-mapping an already-shipped roadmap, be honest that it’s an audit, not a plan.
References
In the playbook
- 2.1 Product strategy — where personas (the Who layer’s foundation) first get defined.
- 2.2 Roadmaps and OKRs — the natural home for the Why → Who → How cascade.
- Planning workshop — calls out impact mapping as the heavier follow-up when systemic planning problems (e.g., “we don’t align on what ‘done’ means”) surface repeatedly.
- Journey mapping — complementary technique at a different altitude; journey mapping shows one actor’s experience, impact mapping shows many actors’ behavior change.
- Event storming workshop — the operational-discovery companion; impact mapping says “what behavior do we need,” event storming says “what events does the system record to prove it.”
External references
- Gojko Adzic, Impact Mapping: Making a Big Impact with Software Products and Projects (Provoking Thoughts, 2012) — the canonical book; short and practitioner-oriented.
- ImpactMapping.org — the community site with worked examples and workshop templates.
- Adzic’s Specification by Example (Manning, 2011) — broader companion on connecting business goals to technical deliverables.
- John Doerr, Measure What Matters (Portfolio, 2018) — the OKR practitioner reference; pairs well with impact mapping for goal-setting.
- OKR Coach — Impact Mapping + OKRs — Atlassian’s practitioner guidance on using both together.