Rework costs the average manufacturer 2.2% of revenue every year. For a $100M company, that's $2.2 million in work that gets done twice. For a $1B company, it's $22 million.

That number sounds like an execution problem. But 70% to 85% of all rework traces back to errors made in the requirements phase — before the first line of code is written, before the first design review, before a single part is ordered.

It's not a building problem. It's a definition problem.

And the definition problem is almost always the same: the team solved the wrong problem. They executed well against a specification that was pointed at a symptom rather than a cause. The work was done correctly. The outcome was wrong anyway.

2.2% of annual revenue is lost to scrap and rework, on average, across manufacturing companies (Ease.io / Cost of Quality research, 2026)
70–85% of all rework costs trace to errors in the requirements phase, before development begins (Standish Group / CISQ)
10× is the cost multiplier for fixing a requirement error after tooling vs. catching it during design (Engon Technologies, 2026)
29% of projects fully meet their original business objectives (Standish Group, CHAOS Report 2020)

The Right Answer to the Wrong Question

Statisticians define three types of errors. Type I: rejecting something true. Type II: accepting something false. Type III — first articulated by researchers Kimball and Mosteller — is different: giving the right answer to the wrong question.

In an organizational context, a Type III error is a project that executes flawlessly against a misdiagnosed problem. The code ships on time, under budget, bug-free. And then it doesn't move the business metric it was supposed to move because it was solving for the wrong thing.

This is not a rare edge case. The Standish Group has tracked project outcomes across thousands of organizations for decades. Only 29% of projects fully meet their original business objectives. The cause is rarely technical incompetence. It's usually a failure to accurately define what the project was supposed to accomplish before building started.

McKinsey research on AI projects found that companies most often start with a "solution looking for a problem" — they decide on a technology first, then work backwards to justify it. The same pattern shows up in conventional software and engineering projects. Teams frame the problem around the solution they already prefer, which guarantees the diagnosis is at least partially wrong.

You can build exactly what the specification says and still fail completely. The specification is the problem. Nobody checked whether the specification was correct before approving it.

The Cost That Compounds

The financial mechanics of the wrong-problem error follow a predictable pattern known as the 1-10-100 rule. Research from the Construction Industry Institute and NIST shows that the cost to fix an error scales by a factor of ten at each successive project phase.

A requirement error caught during the design phase costs one unit to fix — a conversation, a revised document, a changed specification. The same error caught during development costs ten units — rewriting code, reconfiguring systems, re-testing everything that changed. Caught in production, after the system is live: one hundred units. Rework, revalidation, customer impact, and sometimes regulatory exposure on top of it.

In manufacturing, this plays out as Engineering Change Orders. When a design flaw surfaces after tooling has been built, fixing it costs 10 times more than fixing it at the drawing stage. The cost of revalidation alone runs $50,000 to $500,000 per change, depending on the complexity of the part and the regulatory requirements around it. A $5,000 mistake in requirements becomes a half-million-dollar problem after the tools are made.
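The escalation is mechanical enough to express as arithmetic. Below is a minimal sketch of the 1-10-100 pattern using the illustrative figures above; the phase labels and the $5,000 base cost are assumptions for the example, not data from the cited research.

```python
# A minimal sketch of the 1-10-100 escalation described above. The phase
# labels and the $5,000 base cost are illustrative assumptions, not data.
PHASE_MULTIPLIER = {
    "design": 1,        # a conversation and a revised specification
    "development": 10,  # rewrite, reconfigure, re-test everything that changed
    "production": 100,  # rework, revalidation, customer impact
}

def cost_to_fix(base_cost: float, phase_caught: str) -> float:
    """Estimated cost of fixing the same requirement error, by the phase it is caught in."""
    return base_cost * PHASE_MULTIPLIER[phase_caught]

for phase in PHASE_MULTIPLIER:
    print(f"{phase:<12} ${cost_to_fix(5_000, phase):>10,.0f}")
# design       $     5,000
# development  $    50,000
# production   $   500,000
```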

In software, it shows up as technical debt. Requirements that were vague or wrong get "fixed" with patches rather than redesigns. Each patch makes the next one more expensive. Teams report spending 30% to 50% of their total engineering time on rework rather than new development. That's not a productivity failure. That's what you should expect when the definition phase is treated as overhead.

Why Teams Keep Doing This

There are two explanations for why this pattern persists despite the evidence, and both are true simultaneously.

The bias for action. In most organizational environments, teams feel productive when they're building. Discovery, diagnosis, and definition feel slow. They look like overhead to executives who measure progress by output. So teams abbreviate the problem definition phase to demonstrate momentum. The 30 to 60 days of rigorous upfront analysis that would prevent $2.2M in rework gets compressed into a two-hour kickoff meeting.

Research on framing effects shows this isn't just organizational pressure — it's cognitive. When a project is framed as "preventing a loss," teams accept more uncertainty and skip more validation than when it's framed as "protecting a gain." Loss-framed urgency activates risk tolerance, which in this context means skipping the work you should be doing most carefully.

The framing effect on problems. How you describe a problem determines what solutions you'll generate. If a drop in sales is framed as a "product quality issue," the team will build quality improvements. If it's framed as a "distribution problem," they'll build distribution improvements. Only one of those may be true, but whichever framing wins the early conversation shapes every decision that follows.

The framing usually comes from whoever speaks first with the most authority. Senior leaders have a structural advantage in framing, which is exactly backwards — they're typically furthest from the operational reality where the actual problem is visible. Their framing gets accepted without challenge, and the team builds against it for months.

Three Times This Happened at Scale

New Coke (1985) is the canonical case. Coca-Cola was losing market share to Pepsi. Blind taste tests showed consumers preferred the sweeter Pepsi formula. The company spent over $1 million on 200,000 consumer tests developing a new formula that beat Pepsi in taste comparisons.

The formula worked. The product failed catastrophically.

The actual problem wasn't taste. Loyal Coke drinkers had a decades-long emotional and cultural attachment to the brand. Pepsi was winning younger demographics with a generational identity campaign, not with taste alone. When Coca-Cola changed the formula, they attacked the very thing that differentiated them: the original. Consumer revolt followed. Coke Classic was back in three months. The company had solved for the metric (taste preference) and missed the actual problem (brand identity erosion).

Quibi (2020) raised $2 billion on a specific problem diagnosis: mobile viewers needed high-production-value short content they could watch during commutes and other brief windows. The company built custom technology, signed major talent, and launched a premium mobile-first platform.

The problem it diagnosed was already solved. YouTube, TikTok, and Instagram had been providing free short content for years. And Quibi launched in April 2020 — eliminating commutes for most of its potential audience overnight. The company had built a solution to a problem that didn't exist at the scale required to justify $2 billion in investment. It shut down six months after launch.

The Challenger disaster (1986) is the most extreme version. Engineers at Morton Thiokol warned NASA that cold temperatures would compromise the O-ring seals on the shuttle. NASA's managers, under pressure to launch, asked the contractors to "prove it was unsafe to launch." The engineers could not provide absolute statistical proof of failure — which was the wrong standard in conditions that had never been tested before.

The correct question was: "Do we have evidence the seals will hold in these conditions?" The answer to that question was no. But the question asked was different, and because the wrong question was asked, the engineers were unable to stop the launch. The framing of the question killed seven people.

Treating the Cause, Not the Symptom

Root Cause Analysis — specifically the practice of asking "why" repeatedly before accepting an explanation as complete — exists because humans are naturally inclined to stop at the first plausible answer.

The 5 Whys technique, developed at Toyota, forces the diagnostic process past the obvious. A manufacturing line breaks down. Why? The motor overloaded. Why? The bearing seized. Why? It wasn't lubricated. Why? The maintenance schedule wasn't followed. Why? The inventory system failed to trigger a reorder for the lubricant. That last answer is the root cause. Everything above it is a symptom.

If the team stops at "the bearing seized" and orders a new bearing, the line will break down again in the same way for the same reason. If they stop at "the maintenance schedule wasn't followed" and discipline the maintenance team, the actual problem — the inventory system — continues to create the conditions for recurrence.
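The chain itself is simple enough to capture as a reviewable artifact rather than a conversation. Below is a minimal sketch of the breakdown example above; the list structure and the helper are illustrative assumptions, not a formal part of the 5 Whys technique.

```python
# Minimal sketch of a 5 Whys chain: each entry answers "why?" for the one
# before it. The deepest entry the team can act on is the candidate root cause.
five_whys = [
    "The manufacturing line broke down.",
    "The motor overloaded.",
    "The bearing seized.",
    "It wasn't lubricated.",
    "The maintenance schedule wasn't followed.",
    "The inventory system failed to trigger a reorder for the lubricant.",
]

def root_cause(chain: list[str]) -> str:
    """Return the deepest answer in the chain, i.e. the candidate root cause."""
    return chain[-1]

# Fixing anything above the last entry treats a symptom, so the failure recurs.
print("Root cause:", root_cause(five_whys))
```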

Research from Fabrico's 2026 analysis of manufacturing RCA outcomes shows that proper root cause analysis reduces problem recurrence to below 15%. Teams that fix symptoms rather than causes see 70% of those problems return. The financial math is straightforward: a $5,000 symptom fix that recurs quarterly costs $20,000 per year. A $10,000 root cause fix that doesn't recur breaks even within two quarters, costs half as much over the first year, and the gap keeps widening in the organization's favor every quarter after that.
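The break-even point is worth making explicit. This back-of-the-envelope sketch uses the illustrative figures above; the quarterly recurrence and the two fix costs are the article's example numbers, not benchmarks.

```python
# Cumulative cost comparison: a recurring symptom fix vs. a one-time root cause fix.
# Figures are the illustrative ones from the text, not benchmarks.
SYMPTOM_FIX = 5_000      # recurs every quarter
ROOT_CAUSE_FIX = 10_000  # paid once

for quarter in range(1, 9):  # two years
    symptom_total = SYMPTOM_FIX * quarter
    print(f"Q{quarter}: symptom ${symptom_total:,}  root cause ${ROOT_CAUSE_FIX:,}")
# By Q2 the two approaches cost the same; by the end of year one the root
# cause fix costs half as much, and the gap widens every quarter after that.
```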

For complex problems with multiple interacting causes, a Fishbone diagram (also called an Ishikawa diagram) maps contributing factors across categories: equipment, process, people, materials, environment, and measurement. This structure prevents teams from collapsing a multi-variable problem into a single explanation, which is what happens under time pressure, when finding an answer matters more than finding the right one.

The 20% That Prevents 50% of Problems

MIT Sloan Management Center research on project outcomes found that projects investing at least 20% of their total duration in requirements analysis reduced scope creep by an average of 56%. They were also 68% more likely to complete successfully.

That's the most asymmetric investment available to most project teams, and almost nobody makes it. Teams that compress requirements work to 5% of project duration are essentially choosing to spend the remaining 95% partially solving the wrong problem and then fixing it.

The downstream consequences are consistent. Gartner's analysis of digital projects affected by scope creep shows they exceed their original budget by 45% on average, finish seven months late, and deliver 56% less business value than originally projected. Those three numbers are not independent — they're all caused by the same failure to define the problem accurately before committing resources.

IBM's Enterprise Design Thinking program, which formalizes the process of problem definition before solution development, produced a 301% ROI in Forrester's analysis. That return came primarily from reduced rework and faster time to market — not from better technology. The technology was the same. The discipline of defining the problem first changed what the technology was pointed at.

How to Change the Pattern

The Double Diamond framework from the UK Design Council is the clearest structural fix for teams that routinely skip problem definition. It forces two sequential phases before any solution work begins.

The first diamond is entirely devoted to finding the right problem. Teams explore broadly (Discover), gathering data, talking to the people actually experiencing the problem, and resisting the temptation to converge on an explanation too quickly. Only after that broad exploration do they narrow (Define) to the most precise problem statement the data supports.

The second diamond is where solutions get built. Teams explore multiple options (Develop) before committing to one (Deliver). This sequencing matters: you cannot do good solution work on a vague problem statement, and you cannot do good problem definition work if you've already committed to a solution.

The most common violation of this structure is the skipped first diamond. Teams start at "Develop" because they already have a solution in mind and they want to start building. The Double Diamond's value is precisely that it forces the first diamond to be completed before the second can begin. Without that forcing function, the bias for action wins every time.
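One way to make that forcing function concrete is to treat the phase sequence as an explicit gate rather than a convention. The sketch below is a hypothetical illustration: the four phase names come from the Double Diamond itself, but the gating logic and sign-off flag are assumptions for the example, not part of the UK Design Council's framework.

```python
from enum import Enum

# Double Diamond phases, in order. The names come from the framework itself.
class Phase(Enum):
    DISCOVER = 1  # explore the problem broadly
    DEFINE = 2    # converge on a precise problem statement
    DEVELOP = 3   # explore multiple solution options
    DELIVER = 4   # converge on and ship one solution

def can_advance(current: Phase, target: Phase, definition_signed_off: bool) -> bool:
    """Hypothetical gate: solution work cannot start until the problem definition is approved."""
    if target.value != current.value + 1:
        return False  # no skipping phases
    if target is Phase.DEVELOP and not definition_signed_off:
        return False  # the first diamond must close before the second opens
    return True

# A team that "already has a solution in mind" fails this check:
print(can_advance(Phase.DISCOVER, Phase.DEVELOP, definition_signed_off=False))  # False
print(can_advance(Phase.DEFINE, Phase.DEVELOP, definition_signed_off=True))     # True
```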

Two other practices that change outcomes:

Tie the problem statement to a financial metric. Vague problem statements produce vague solutions. "Sales are declining" produces a hundred possible interventions. "Our average revenue per customer dropped 14% in the northeast region over the past two quarters, driven by a 22% increase in churn among accounts over $50K" points to a specific investigation. If a team cannot express the problem in terms that connect to a financial metric, they haven't finished defining the problem yet.

Put an independent person in the room. McKinsey's work on capital project failures recommends truly independent challenge teams whose job is to interrogate the problem definition before construction begins. These people have no stake in the project's success — which gives them the freedom to point out that the specification doesn't address the actual business need. Most teams don't have this. The people reviewing the problem definition are the same people who defined it, which means the review catches nothing.

The teams generating the most consistent project results share one habit: they treat problem definition as a deliverable, not a conversation. It gets documented. It gets reviewed by people who weren't involved in writing it. It gets revised when the data says it should be. And the project doesn't move to solution development until there's agreement that the definition is correct.

That discipline costs time upfront. It saves a multiple of that time — and of the money — later. The 2.2% of revenue going to rework is not the cost of complex problems. It's the cost of not defining problems correctly before building starts.

Not sure you're solving the right problem?

The AI Readiness Assessment starts with problem definition — mapping your business challenges to root causes and financial metrics before recommending any solution path.

Start With the Problem

Sources

  1. Ease.io — "Manufacturing Scrap Rates & Cost of Quality and OEE" (January 2026)
  2. Engon Technologies — "Manufacturing Rework Cost: 7 Risks in US & Europe" (March 2026)
  3. Standish Group — "CHAOS Report 2020"
  4. Consortium for Information & Software Quality (CISQ) — "Cost of Poor Software Quality in the US" (2022)
  5. Fabrico.io — "5 Root Cause Analysis Techniques for Manufacturing (2026 Guide)" (February 2026)
  6. Tulip.co — "2026 Manufacturing Trends: The 4 Tech Shifts Leaders Need to Act On" (February 2026)
  7. Gitnux — "Manufacturing Downtime Statistics: Market Data Report 2026" (2026)
  8. MIT Sloan Management Center — Research on requirements analysis and project outcomes
  9. Gartner — Research on scope creep financial impact (2024)
  10. PMI — "Pulse of the Profession 2025: Boosting Business Acumen" (2025)
  11. Forrester — "Design Thinking Can Deliver an ROI of 85% or Greater" (2025)
  12. McKinsey — "Bias Busters: When the Question — Not the Answer — Is the Mistake" (2024)
  13. McKinsey — "Don't Cancel or Coddle At-Risk Capital Projects — Challenge Them" (2024)
  14. UK Design Council — Double Diamond Design Thinking Framework
  15. Tversky, A. & Kahneman, D. — "The Framing of Decisions and the Psychology of Choice" (1981)
  16. Kimball, A.W. — "Errors of the Third Kind in Statistical Consulting" (1957)
  17. The Branding Journal — "New Coke: A Classic Branding Case Study" (2024)
  18. Smartware Advisors — "Case Study: The Rise and Fall of Quibi and Lessons Learned" (2024)