You've almost certainly seen the statistic: 70% of change initiatives fail. It's in McKinsey decks, Deloitte reports, Gartner research, and thousands of management articles. It's one of the most widely cited numbers in business.
It also has no empirical basis. At all.
The 70% figure traces to a 1993 book by Michael Hammer and James Champy, who wrote: "Our unscientific estimate is that as many as 50 percent to 70 percent of organizations that undertake a reengineering effort do not achieve the dramatic results they intended." They called it unscientific. Hammer later said: "There is no inherent success or failure rate for reengineering." The figure then appeared in a 2000 Harvard Business Review article by Beer and Nohria, presented with zero citation, zero methodology, zero evidence. Kotter's version was a personal estimate he acknowledged as such.
Mark Hughes traced every published instance of the 70% claim in a 2011 Journal of Change Management paper and found "the absence of valid and reliable empirical evidence" in every case. Deloitte cites HBR, which cites McKinsey, which cites Kotter. It's circular all the way down.
Here's the problem with this debunking: the rigorous data that does exist suggests the real failure rate is higher.
What the Rigorous Data Actually Shows
The studies with actual methodology — defined samples, defined success criteria, longitudinal tracking — tell a consistent story. It's not good.
McKinsey's 15-year research program, surveying 1,034+ executives, puts the transformation success rate at roughly 31%. BCG's global analysis lands at 25–30%. These use "met or exceeded expectations on all dimensions" as the success criterion. Bain's 2024 analysis of 24,000+ transformation initiatives — the largest dataset in the field — found 12% achieved their original ambitions.
For lean manufacturing specifically, the Industry Week/Manufacturing Performance Institute census surveyed 433 US plants in 2007. Roughly 70% had adopted lean manufacturing. Only 2% fully achieved their stated objectives. Only 24% achieved significant results. That's from an industry that has been deploying lean for decades with mature methodologies and established practitioner communities.
McKinsey Digital (2018) found digital transformations succeed at just 16% when measured against sustained long-term performance — dropping to 4–11% in traditional industries. McKinsey estimated $900 billion was wasted out of $1.3 trillion invested in digital transformation in a single year.
The precise failure rate is actually unknowable because studies use different definitions of success. If success means "achieved any improvement at all," the failure rate is low, around 5–10%. If success means "achieved the original ambitious targets," only 12% qualify, per Bain. Sustainability narrows the field further: McKinsey found that only 12% of organizations maintained transformation goals for more than three years, with an average of 42% of financial benefits lost in later implementation stages.
The honest summary: roughly one in three improvement initiatives achieves meaningful results by any generous definition. Only about one in eight achieves what was originally promised. And almost none sustain improvements over a three-year horizon without specific mechanisms in place to make them stick.
The 70% failure figure has no empirical basis. The actual data — from studies that checked — suggests it's worse. The gap between what gets promised and what gets delivered is structural, not accidental.
Where Projects Actually Die
The most counterintuitive finding in the academic literature on process improvement failure comes from a 2020 study by Antony, Lizarelli, and Fernandes in IEEE Transactions on Engineering Management, surveying 201 Lean Six Sigma experts globally. Lean Six Sigma projects had their highest termination rates in the Measure and Analyze phases of DMAIC — not during Implementation.
This means projects die most often when confronting data reality, before the real improvement work begins. Teams launch initiatives, begin measuring the baseline, encounter data that's messier than expected or reality that's more complicated than the problem statement assumed, and the initiative collapses at that point rather than at execution.
This finding corroborates what practitioners observe: organizations are not failing at building solutions. They're failing at understanding problems. The same pattern appears in the broader project failure literature — 37% of project failures trace to unclear objectives (PMI), and improvement projects are no exception. You cannot measure progress against a target you haven't defined, and you cannot define a credible target without the data work that most organizations want to skip.
Bader et al. (2024) conducted a systematic review of 49 papers covering Kaizen, Lean, Six Sigma, and Agile, identifying 39 distinct failure factors. The top five by frequency of citation across all studies:
- Resistance to cultural change
- Insufficient support from top management
- Inadequate training and education
- Poor communication
- Lack of resources
None of these are technical. None of them are methodology failures. They're all organizational. The methodology works. The organization resists it or fails to support it, and the project dies.
Antony et al.'s global expert survey ranked the root causes similarly: lack of top management commitment first, resistance to change second, inadequate rewards and recognition third. Notably, the rankings differed by organizational level — Master Black Belts perceived different primary causes than Green Belts. The failure landscape looks different depending on where in the organization you're standing.
The Sponsorship Problem
Across virtually every study in this domain, one factor ranks first or second in determining whether improvement initiatives succeed: active, visible sponsorship from senior leadership.
Prosci has run 12 benchmarking studies on change management since 1998, surveying over 10,800 professionals globally. In every study, active and visible sponsorship was the number one contributor to success — beating the second-ranked factor by a 3:1 margin. Projects with extremely effective sponsors met their objectives 79% of the time. Projects with extremely ineffective sponsors met them 27% of the time.
McKinsey's data shows organizations where leadership clearly defined roles and communicated progress were 8x more likely to succeed at transformation. The Standish Group ranked lack of sponsor involvement as the number one reason for project failure. PMI found that in organizations where project management is not actively valued by leadership, roughly half of all projects fail.
And yet: 52% of change practitioners surveyed by Prosci said their sponsors did not adequately understand their role. Roughly half of teams rate their sponsor's effectiveness as poor to fair. The factor most consistently associated with success is the one most consistently absent.
The sponsorship problem compounds at the middle management level. Prosci data identifies mid-level managers as the most resistant group in change initiatives, with 43% of practitioners naming them as the primary resistance layer. The five drivers: fear of losing power or relevance, role overload, insufficient change-leadership skills, misaligned incentives, and not being involved in planning.
A 2025 longitudinal case study in M@n@gement documented a medium-sized firm where middle managers escalated from private resistance to collective, overt resistance over 32 months — ultimately forcing the CEO to reverse an entire organizational redesign effort. The resistance was rational: they were being asked to abolish their own roles. The project had launched without adequately engaging the people most affected by it, and they found ways to defeat it.
Wang et al. (2025, Journal of Applied Behavioral Science) studied 242 middle managers and found role overload is positively related to resistance to change, mediated by workplace anxiety. McKinsey research shows middle managers spend roughly 35% of their time on administrative tasks during transformations, leaving limited bandwidth for change leadership. They're being asked to drive change while being simultaneously overwhelmed by the administrative burden the change creates.
The Change Management Investment Gap
Prosci's benchmarking data on change management quality and outcomes is the clearest quantitative case in the field for treating change management as an investment rather than an administrative function:
- Excellent change management: 88% of projects meet or exceed objectives
- Good change management: 73%
- Fair change management: 39%
- Poor change management: 13%
That's a 7x difference in outcomes between excellent and poor change management. Projects with excellent change management are nearly 5x more likely to stay on schedule. Without any change management: 16% of projects deliver on time, 16% meet their objectives.
AMR Research found that successful implementations typically spend 10–15% of their project budget on organizational change management. Culture Partners analysis puts the ROI on that investment at 3:1 to 7:1. Technology implementations with structured change management show 95% adoption rates versus 35% without it.
Despite this, most organizations dramatically underinvest. The conventional approach allocates change management to two endpoints — executive alignment at the start and frontline adoption training at the end — and assumes the middle will follow. That assumption is consistently wrong. The people in the middle are the ones who will either make or break daily adherence to the new process, and they're the ones most likely to feel unheard during the design phase.
The Change Fatigue Problem
Every failed initiative makes the next one harder. This isn't a soft observation — it's documented in academic research.
De Vries (Radboud University, Public Money & Management, 2021) found that each new reorganization has a lower probability of success than the prior one, independent of the quality of the new effort. Change fatigue creates resistance mediated by uncertainty and workload. Crucially, this effect was not moderated by perceived success of prior reorganizations, by employee participation in planning, or by leadership quality. It was only marginally reduced by communication satisfaction. The damage accumulates and is largely irreversible through conventional management techniques.
The quantitative context for this: the average employee experienced 10 planned enterprise changes in 2022, up from 2 in 2016 (Gartner). Employee willingness to support change initiatives dropped from 74% in 2016 to 38% in 2022. Employees' ability to cope with change is at roughly 50% of pre-pandemic levels.
Academic researchers call the endpoint of this accumulation "organizational cynicism" — the belief that problems are solvable and improvements are possible, but change efforts will fail because of the inherent incompetence of the system. A 2025 meta-analysis of 22 studies with 7,331 participants found the effect size between organizational cynicism and counterproductive work behavior was 0.482 — a moderate to strong relationship. Cynicism doesn't just reduce engagement. It produces active resistance behaviors.
The practitioner literature calls this "BOHICA syndrome" (Bend Over, Here It Comes Again): employees who have watched enough improvement initiatives fail assume the next one will too. The self-fulfilling dimension is real: the belief that failure is inevitable becomes the behavior that makes it inevitable.
The Sustainability Problem
Even initiatives that succeed initially rarely sustain their results. McKinsey (2023) found only 12% of organizations maintained transformation goals for more than three years. A 2022 study of 500 organizations found 68% experienced significant regression in process improvements within the first year after key personnel departed, with an average 43% loss of gains.
A logistics company case study captures the pattern precisely. After implementing route optimization software, fuel costs dropped 18% and on-time delivery rose from 87% to 96%. The company failed to incorporate the new metrics into performance evaluations. Compliance dropped from 95% to 34% within seven months. All improvements vanished.
This is the structural failure McKinsey identifies as the final stage: "Organizations often fail to sustain the impact they've achieved. Performance disciplines end with the transformation effort. Incentives and budgets are not fully aligned with new objectives." The improvement was real. Nobody changed the measurement system to make the new behavior the default. When the initiative ended, behavior reverted.
The documentation problem is embedded in these sustainability failures. When most organizations regress after key personnel depart, the mechanism is clear: the knowledge that made the improvement work — which steps had changed, why they changed, what indicators to watch — lived in people rather than in documented systems. When those people left, the improvement left with them.
The Gap Between Assumed and Earned Buy-In
Bain's 2026 research produced the most important single data point in this entire field for any operations leader planning an improvement initiative: 88% of leaders are confident their reorganization will deliver the intended results. Only 36% of employees agree. That's a 52-percentage-point gap between what leadership believes it has achieved in terms of organizational alignment and what the organization actually has.
This gap explains why initiatives that look successful from the boardroom fail at the operational level. The executives who approved the initiative, defined its goals, and signed the business case are confident. The people who will execute it every day are skeptical at best, cynical at worst, and actively resistant at a rate that's documented by both the Bain data and the Prosci practitioner surveys.
MIT Sloan research compounds this: only 28% of executives and middle managers responsible for executing strategy could list three of their company's top five strategic priorities. A third couldn't name a single one. Only 13% of frontline supervisors could list three. If the people accountable for improvement work don't know what the strategic priorities are, alignment is structurally impossible regardless of the quality of the improvement methodology.
Projects with stakeholder mapping in the planning phase report a 30% higher success rate (PMI). Projects with proactive stakeholder engagement are 40% more likely to deliver on time and within budget (Gartner, 2023). These aren't large investments of time or money. They're investments in understanding who needs to change, what they're afraid of, and what they stand to gain or lose. Most improvement projects don't make them.
What the Research Suggests Doing Differently
The evidence across all of these studies converges on a consistent pattern: process improvement initiatives fail before they start — in the design and alignment work that either happens or doesn't happen before the first project task is assigned.
The pre-launch work that the research supports:
Define success in concrete, measurable terms before starting. Not "improve quality" but "reduce defect rate from 4.2% to below 1.5% by Q3, measured at the final inspection station." The Antony et al. finding — that projects terminate most often in the Measure and Analyze phases — tells you that vague success definitions create ambiguity that becomes fatal when you hit the first measurement challenge. This baselining work is the same regardless of whether the improvement involves AI or not.
Identify and actively engage the middle management layer. This is where improvement initiatives most commonly die, and it's where planning attention is most often absent. Middle managers need to understand specifically how the change affects their role, what they're being asked to do differently, and what they'll be measured on after it's implemented. Engaging them in design rather than notifying them of outcomes changes the resistance profile of the initiative.
Name a sponsor with actual authority and a defined role. Not a sponsor who signed the business case and attends quarterly reviews. A sponsor who is actively visible to frontline teams, who removes blockers when they surface, and who has defined accountability for the initiative's outcomes. Prosci's data on the 3:1 gap between sponsorship and every other success factor is the clearest single finding in 25 years of change management research.
Build measurement into performance evaluation before the initiative ends. The logistics company case — 95% compliance to 34% compliance in seven months because metrics weren't embedded in evaluations — is not unusual. It's the default outcome when improvement work ends without changing the accountability structures that determine daily behavior. The improvement has to be measured, reported, and tied to consequences for it to persist.
Treat improvement initiatives as ongoing rather than one-time projects. McKinsey's finding that only 12% of organizations sustain transformation goals for three-plus years tells you that project structures don't produce durable change. Operations improvements require a continuous review cycle, a feedback mechanism that surfaces regression before it becomes permanent, and someone whose ongoing job is to watch the metrics rather than declare victory and move on.
None of this is methodology. It's organizational infrastructure. The improvement methodology — Lean, Six Sigma, Agile, whatever the team has chosen — works when the infrastructure is in place. It fails when the infrastructure isn't. The research is consistent on this across two decades and dozens of studies. The organizations generating durable improvement results built the infrastructure first.
Planning an improvement initiative?
NSSG's pre-implementation assessment maps your organizational readiness against the pre-launch factors that determine whether improvement work succeeds — stakeholder alignment, baseline measurement, sponsorship, and change infrastructure.
Sources
- Hammer, M. & Champy, J. — "Reengineering the Corporation" (1993) — origin of the 70% figure
- Beer, M. & Nohria, N. — "Cracking the Code of Change," Harvard Business Review (May–June 2000)
- Hughes, M. — "Do 70 Per Cent of All Organizational Change Initiatives Really Fail?" Journal of Change Management, Vol. 11, No. 4 (2011)
- Barends, E., Janssen, B., ten Have, W. & ten Have, S. — Meta-analysis of 563 change studies (2013)
- McKinsey Global Survey — "Transformations that work" (2021); digital transformation data (2018)
- BCG — "Flipping the Odds of Digital Transformation Success" (2020); global analysis (2024)
- Bain & Company — "Transformation Requires Both Bold Action and Discipline" (2024); employee confidence gap (2026)
- Industry Week / Manufacturing Performance Institute — "Census of Manufacturers" (2007)
- Antony, J., Lizarelli, F.L. & Fernandes, M.M. — "A global study on Lean Six Sigma project termination causes," IEEE Transactions on Engineering Management (2020)
- Bader, A. et al. — Systematic literature review of 49 process improvement papers, International Journal of Lean Six Sigma (2024)
- Prosci — "Best Practices in Change Management" 12th Edition (2023); sponsorship benchmarking data
- Wang, T. et al. — "Role Overload, Workplace Anxiety, and Resistance to Change: Evidence from Middle Managers," Journal of Applied Behavioral Science (2025)
- De Vries, M. — "Why Reorganizations Often Fail," Public Money & Management (2021)
- Bourlier-Bargues, F., Valiorgue, B. & Islam, G. — Longitudinal case study of middle manager resistance, M@n@gement (2025)
- MIT Sloan School of Management — Research on strategy knowledge gaps among managers
- Gartner — Change saturation data; employee change support statistics (2022–2023)
- PMI — Project failure causes; stakeholder engagement impact data (2025)
- Lean 6 Sigma Hub — Sustainability regression study, 500 organizations (2022)
- Samson, D. et al. — "Lean Management's Paradox of Low Success Rates," SAGE Journals (2026)
- Gallup — "State of the Global Workplace: 2024 Report"