# Force Field Analysis
Kurt Lewin's change-management discipline. Every proposed change is a tug-of-war between driving forces and restraining forces. If the restrainers outweigh the drivers, the change doesn't stick — no matter how good the idea is. A two-minute check that prevents the "generated actions, shipped nothing" retrospective failure mode.
## What it is
Kurt Lewin was a founding figure in modern social psychology and one of the first to take group dynamics seriously as a research subject. His 1947 paper *Frontiers in Group Dynamics: Concept, Method and Reality in Social Science* introduced Force Field Analysis alongside the broader three-step model of change (unfreeze → change → refreeze) that shaped decades of organizational-change literature. The force-field metaphor is drawn from physics: a body at rest is in equilibrium between opposing forces, and change happens when the balance tips.
The method is simple enough to draw on a napkin. Write the proposed change at the top of a page. Draw a horizontal line representing the current state. Above the line, list the driving forces — things pushing for the change. Below the line, list the restraining forces — things pushing against it. Weigh them. Change happens when drivers exceed restrainers, either by strengthening drivers or — Lewin’s preferred route — by weakening restrainers.
Lewin argued that strengthening drivers tends to generate proportional opposition (“we’ll do it harder” provokes “we’ll resist harder”). Weakening restrainers is a more sustainable intervention: identify what’s pushing back and neutralize it rather than overpower it. In a workshop context this is why the analysis is useful — it forces the team to articulate what will push back, which is the information most retrospective plans silently omit.
The technique is especially well-matched to retrospective actions, change-management programs, process-improvement initiatives and any commitment where the team has a history of generating intentions that didn’t stick. It doesn’t replace harder planning tools (Kotter’s 8-step, ADKAR, OKRs); it’s a fast per-commitment sanity check that takes minutes and prevents a common failure mode.
## When to use it
Reach for Force Field Analysis when:
- The team is about to commit to an action from a retrospective or workshop and you want a fast check that the action can actually stick. Works especially well paired with the pre-mortem — pre-mortem surfaces what could go wrong generally; force field surfaces what will push back specifically.
- A prior change attempt failed and you want to understand why before retrying. Mapping the restraining forces from the previous attempt often reveals structural issues that the original plan didn’t address.
- An organizational change program is being planned at any scale — team process change, tooling migration, role redesign, cultural shift. Force field works at any scale because the mechanics are scale-invariant.
- The team has a pattern of producing long action lists that don’t get executed. Force field breaks the pattern by filtering the list before commitment — if the restrainers can’t be weakened, the action gets rewritten or dropped rather than committed-to-fail.
Don’t reach for Force Field Analysis when:
- The decision is technical and has no social dimension. “Which database index do we add?” doesn’t need a force field analysis. The technique is for change whose success depends on people.
- You’re doing it theatrically. A force field with three obvious drivers and no restrainers isn’t analysis; it’s affirmation. If the restrainer side is hard to populate, either the change is genuinely unopposed (rare) or you’re not being honest (common). Push back.
- The change is already underway and the team is unblocked. Don’t slow committed work with an analysis whose purpose is to decide whether to commit.
- The team is unwilling to name restraining forces — usually because they’d be naming political realities, senior stakeholders, or team members by implication. Without willingness to name, the technique produces a sanitized list that misses the actual obstacles. Fix the trust issue first or use a different framework.
## How to run it
Total time: 2–5 minutes per action in workshop context; 20–30 minutes as a standalone analysis for a major change.
**State the change as a verb phrase** (30 sec). Not “better code reviews” — “all PRs require two reviewer approvals before merge.” Specific, observable, committable. A force field analysis of a fuzzy change produces a fuzzy analysis.
**Draw the frame** (30 sec). Horizontal line across the board or page; the change written above. “Driving forces” on top; “restraining forces” below. This is usually enough visual structure to start.
**List driving forces** (1–2 min). What’s pushing for this change to happen? Team frustration, data, leadership support, external pressure, individual champions, prior commitments. Try to name specific forces rather than generic ones — “security team escalation last sprint” is more useful than “quality concerns.”
**List restraining forces** (2–3 min). The important step. What will push back? Habit, workload, skill gaps, tooling, political cost, someone’s pride, the current incentive structure, the fact that the existing way is fine-ish and the new way is unproven. Be honest. A list of cartoon obstacles misses the real ones.
Common restrainers worth prompting for:
- “We’ve always done it this way” — genuine cost of habit, especially when the existing way works acceptably.
- “The champion is one person, and they’ll leave or burn out” — a very common restrainer for technically correct changes.
- Tooling or capacity — the change requires a tool the team doesn’t have, or time the team can’t carve out.
- Political cost — someone with clout opposes this change (named or not) and the team is avoiding the confrontation.
- Opportunity cost — adopting this change means not adopting something else the team wants more.
- Ambiguity — the team doesn’t actually know what “done” looks like for this change.
**Weigh the forces** (1 min). Relative strength, not absolute. Which side would win if the change were attempted today? Use arrow thickness, numeric scores, dots — any method the team finds readable.
**Decide the move** (1–2 min). Three options:
- Strengthen drivers — add pressure, evidence, advocacy.
- Weaken restrainers — identify what’s pushing back and neutralize it. Lewin’s preferred path and usually more durable.
- Revise the change — the current form won’t survive. Rewrite it into a form the team can commit to.
If none of the three can be done, don’t commit to the change. A change committed-to-fail is worse than no change because it erodes the team’s trust in its own retrospectives.
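The weigh-and-decide pass above can be sketched in a few lines of code. This is a minimal illustration, assuming a 1–5 relative-strength scale; the `weigh` function, the scale, and the return strings are all illustrative, not part of Lewin's method.

```python
def weigh(drivers, restrainers):
    """Compare relative strength of the two sides and suggest a move.

    drivers / restrainers: lists of (label, strength) pairs, strength 1-5.
    Weighing is about relative strength, not about counting forces.
    """
    # A single very strong restrainer (e.g. a stakeholder veto)
    # dominates, no matter how many moderate drivers there are.
    if any(strength >= 5 for _, strength in restrainers):
        return "weaken restrainers or revise the change"
    if sum(s for _, s in drivers) > sum(s for _, s in restrainers):
        return "commit"
    return "strengthen drivers, weaken restrainers, or revise"
```

For example, `weigh(drivers=[("saves 8 team-hours/week", 4), ("manager support", 3)], restrainers=[("tech lead prefers live discussion", 3)])` returns `"commit"`, while any restrainer scored 5 forces a revision regardless of the totals.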
## Worked example
A team’s retrospective surfaces a planning-waste problem: a weekly 1-hour architecture sync with 8 attendees that is purely informational, costing 8 team-hours per week. They propose an action: replace the sync with a weekly async written update.
Drawn on the board:
| Driving forces (pushing for the change) | Restraining forces (pushing against it) |
|---|---|
| Saves 8 team-hours per week. | People feel recurring meetings = legitimacy; cancelling feels like devaluing the topic. |
| Written updates persist and are searchable. | The tech lead prefers live conversation — worries that async will miss nuance. |
| Async-friendly for the team’s two remote members. | Nobody reads docs unless forced; the update might be written and then ignored. |
| Recent read-through of the sync notes showed 70% of topics could have been a doc. | If a topic needs real discussion, there’s no longer a default forum. |
| Engineering manager already supports cancelling low-value recurring meetings. | Losing the meeting feels like the tech lead “losing” something. |
Weighing. Drivers are strong — the time savings are significant and the data supports the change. Restrainers are mixed: two are about feelings (legitimacy, losing face), two are structural (nuance loss, fallback forum gap), one is about behavior (docs getting ignored).
The analysis surfaces that the change as proposed will probably fail, not because of the drivers-vs-restrainers arithmetic, but because the structural restrainers are real. Written updates can miss nuance; async-only does lose a fallback forum for hard conversations; unread docs are a pattern.
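The restrainer column above can be captured as plain data, making the classification used in the weighing explicit. A sketch: the `kind` tags follow the text's own classification, while the numeric strengths are illustrative guesses.

```python
# Restrainers from the worked example; "kind" follows the text's
# classification (social / structural / behavioral), and the 1-5
# strengths are illustrative guesses, not values from the text.
restrainers = [
    {"force": "meetings feel like legitimacy",       "kind": "social",     "strength": 2},
    {"force": "async may miss nuance",               "kind": "structural", "strength": 4},
    {"force": "updates written but not read",        "kind": "behavioral", "strength": 3},
    {"force": "no default forum for hard topics",    "kind": "structural", "strength": 4},
    {"force": "tech lead seen as 'losing' the sync", "kind": "social",     "strength": 2},
]

# Structural restrainers can't be out-argued or reframed; each one
# forces a revision of the action itself.
must_revise_for = [r["force"] for r in restrainers if r["kind"] == "structural"]
```

The two structural restrainers this surfaces are exactly the ones the team's revision has to fold in.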
The team revises the action:
- Replace the weekly sync with a weekly async written update + a monthly 30-minute live sync for open questions and hard conversations. (Addresses the “nuance” and “fallback forum” restrainers.)
- Require a five-minute discussion of the async update in the next day’s standup. (Addresses the “docs don’t get read” restrainer.)
- Frame the change publicly as “the tech lead is consolidating architecture attention into fewer, higher-signal forums,” not “cancelling the architecture meeting.” (Addresses the “losing face” restrainer.)
The revised action has stronger structural support. The drivers are unchanged; the restrainers are meaningfully weaker. The team commits with higher confidence that it will actually stick.
Six weeks later, the async update is working. The monthly sync has been used twice to resolve architecture questions that were genuinely conversation-shaped. The team recovered roughly 6 of the 8 hours per week, kept a venue for nuanced discussion, and left the tech lead’s standing intact. None of that would have survived a blunt “cancel the meeting” action.
## Common failure modes
- Empty restrainer column. The team lists four drivers and no restrainers. Either the change is genuinely unopposed (very rare for any change worth analyzing) or the team is being polite. Push: “what will make this fail in six weeks?”
- Treating forces as additive scores. Counting “5 drivers, 3 restrainers, therefore go” misses the point — a single strong restrainer (a stakeholder who will veto) outweighs five moderate drivers. Force field is about relative strength, not count.
- Skipping the restrainer-weakening pass. The team identifies restrainers, acknowledges them, and commits anyway. Predictable failure. Fix: for each meaningful restrainer, either weaken it or revise the action to sidestep it. No standalone “acknowledge and proceed.”
- Using it as decoration. Force field analysis post-hoc, after the decision is made, is theater. The discipline has to happen before commitment.
- Running it at the wrong altitude. Force field on “improve quality” is too abstract to produce useful restrainers. Force field on “require two reviewer approvals before merge” is specific enough to surface real opposition. Lewin’s mechanics assume a concrete, committable change.
- Lewin-ism without Lewin. The three-step unfreeze/change/refreeze model is the fuller context and is worth knowing — force field analysis alone is a diagnosis, not a plan. For bigger initiatives, pair it with a more complete change-management framework.
## References

### In the playbook
- Planning workshop — explicitly uses Force Field Analysis as a per-action check before every commitment.
- Pre-mortem — surfaces what could go wrong; force field surfaces what will push back. Pair them for higher-stakes actions.
- Lightning Decision Jam — fast-paced decision format where force field works as a lightweight sanity check on the final commitments.
### External references
- Kurt Lewin, “Frontiers in Group Dynamics: Concept, Method and Reality in Social Science; Social Equilibria and Social Change,” Human Relations, Vol. 1, Iss. 1 (1947) — the original paper. Dense but foundational.
- Kurt Lewin, Field Theory in Social Science (Harper & Row, 1951) — posthumous collected works; the broader theoretical frame.
- SessionLab, Force Field Analysis — workshop-ready walkthrough with templates.
- MindTools, Force Field Analysis — practitioner-oriented guide.
- John P. Kotter, Leading Change (Harvard Business Review Press, 1996) — 8-step change framework that Lewin’s work underpins. Useful when force field analysis surfaces a change too big for the workshop.
- Jeffrey M. Hiatt, ADKAR: A Model for Change in Business, Government and Our Community (Prosci, 2006) — individual-level change framework; complements Lewin’s system-level one.