ERP implementations fail at a remarkable rate. Gartner puts the number at 75% of projects failing to meet their objectives. KPMG found that 62% of implementations exceed their budgets by an average of 27%. Panorama Consulting's research shows 51% of companies experience operational disruptions at go-live.
These are not niche findings. They represent the consistent experience of thousands of organizations across SAP, Oracle, Dynamics 365, NetSuite, and every other major ERP platform.
The reasons are worth understanding carefully — not because failure is inevitable, but because the root causes are specific, recurring, and preventable. Most ERP implementations don't fail because of the technology. They fail because of how the work is managed.
The data: what goes wrong
Before looking at root causes, it helps to understand the scale of the problem.
- Only 8% of S/4HANA migrations complete on schedule, according to Horváth research cited by CIO.com
- 65% of SAP implementations report severe quality deficiencies post-migration
- The average ERP implementation runs 24% over budget
- Operational disruptions at go-live affect more than half of all ERP projects
- The average cost of a failed ERP implementation — including remediation, extended timelines, and operational disruption — runs into millions for mid-market companies and tens of millions for enterprise programs
These numbers span platforms. Oracle, Dynamics, SAP, Workday — the failure rates are consistently high across all of them. This is not a platform problem. It's an implementation management problem.
Root cause 1: Requirements drift
The most common failure pattern starts early: requirements established in fit-gap workshops disappear before they're ever built.
Here's how it happens. During the design phase, a consultant runs a workshop with a business stakeholder. A gap is identified — the standard ERP process doesn't cover a business requirement. The gap is logged in the register, assigned a type and an estimate, and the workshop moves on.
Three months later, in the build phase, the developer picks up that gap. The functional spec has been written, but it references a decision made in the workshop that wasn't fully captured. The consultant who ran the workshop has since rolled off. The business stakeholder's requirement has evolved. The developer builds something — a reasonable interpretation — but it's not quite right.
The defect surfaces in UAT. The root cause is a decision made in a workshop six months earlier that was never properly documented or tracked through the lifecycle. Fixing it now takes five times as long as it would have when the gap was first identified.
Requirements drift is not a data problem. It's a traceability problem. When requirements, gaps, custom development objects, specs, and tests live in separate systems with no explicit connections, context leaks out at every handoff.
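What "explicit connections" means in practice is easy to show with a small data model. The sketch below is illustrative only, not the schema of any particular tool; every entity and field name here is an assumption. The point it makes: each link is a field on the record, not a convention someone has to remember.

```python
from dataclasses import dataclass


# Illustrative traceability chain: every downstream artifact carries an
# explicit link to the artifact it came from, so context survives handoffs.

@dataclass
class Gap:
    gap_id: str
    requirement: str        # the business requirement from the workshop
    workshop_decision: str  # the context that usually leaks away

@dataclass
class DevObject:
    object_id: str
    gap_id: str             # links back to the gap it resolves
    functional_spec: str
    technical_spec: str

@dataclass
class TestScript:
    script_id: str
    object_id: str          # links back to the object under test

@dataclass
class Defect:
    defect_id: str
    script_id: str          # links back to the failing script


def trace_defect(defect: Defect,
                 scripts: dict[str, TestScript],
                 objects: dict[str, DevObject],
                 gaps: dict[str, Gap]) -> Gap:
    """Walk a UAT defect back to its originating workshop decision
    in three lookups instead of an afternoon of archaeology."""
    script = scripts[defect.script_id]
    obj = objects[script.object_id]
    return gaps[obj.gap_id]
```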
Root cause 2: Custom development sprawl
The second most common failure pattern is scope creep concentrated in custom development.
Every gap that can't be resolved through configuration or process change becomes a custom development object — a RICEFW in SAP, a RICE/CEMLI object in Oracle, an extension in Dynamics. The volume of these objects is one of the strongest predictors of project cost, timeline, and risk.
In a well-managed implementation, each gap goes through a structured decision: can the business adapt its process to use the standard system? If yes, no custom development. If no, the custom object is scoped, estimated, and approved by the PMO before any development starts.
In a poorly managed implementation, gaps get logged and development starts before formal approval. Scope assumptions creep. An object estimated at three days of development becomes fifteen. The PMO's view of the custom development workload is always slightly behind the actual state because the register isn't current.
By the time the project reaches system integration testing, total development effort significantly exceeds what was approved. The timeline extends. Budget conversations with the steering committee get uncomfortable. And the pressure to go live on the original date — regardless of actual readiness — creates real quality risk.
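A hypothetical check makes the failure mode concrete. If the register is live rather than a stale spreadsheet, flagging unapproved builds and blown estimates is a few lines; the field names below are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class RegisterEntry:
    object_id: str
    approved: bool        # formal PMO sign-off before build started
    approved_days: float  # estimate the PMO signed off on
    actual_days: float    # effort recorded so far


def scope_alerts(register: list[RegisterEntry]) -> list[str]:
    """Flag unapproved builds and objects past their approved estimate."""
    alerts = []
    for entry in register:
        if not entry.approved and entry.actual_days > 0:
            alerts.append(f"{entry.object_id}: build started without approval")
        if entry.actual_days > entry.approved_days:
            over = entry.actual_days - entry.approved_days
            alerts.append(f"{entry.object_id}: {over:.0f} days over the approved estimate")
    return alerts
```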
Root cause 3: Testing blind spots
Testing is where the accumulated debt from the previous two root causes becomes visible — often too late.
The typical ERP testing stack: unit testing by developers, functional unit testing (FUT) by functional consultants, system integration testing (SIT) by the project team, and user acceptance testing (UAT) by business users. Each phase is supposed to catch different types of issues before they reach the next phase.
In practice: test scripts in a spreadsheet, defects in Jira, and pass/fail status in a second spreadsheet, updated manually before each weekly test review meeting.
The problems this creates are structural. Defects raised in Jira often have no explicit link back to the custom development object or business requirement that caused them. When a defect is raised in UAT, tracing its root cause requires manual archaeology — pulling up the gap register, finding the original functional spec, cross-referencing the technical spec, and tracing back to the workshop decision.
Test coverage is almost always worse than it appears. The PMO's test completion percentage is based on scripts that have been executed — not on whether execution actually covered the right scenarios. Gaps in test script design often aren't discovered until users are in the real system after go-live.
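The gap between the two numbers is easy to state in code. In this hypothetical sketch, the execution percentage the PMO reports can sit at 100% while requirement coverage, the number that actually predicts go-live quality, sits far lower.

```python
def execution_pct(scripts: list[dict]) -> float:
    """The number the PMO usually reports: share of scripts executed."""
    executed = sum(1 for s in scripts if s["status"] != "not run")
    return 100 * executed / len(scripts)


def requirement_coverage_pct(scripts: list[dict], requirements: set[str]) -> float:
    """The number that matters: share of requirements with a passing test."""
    passed = {s["requirement_id"] for s in scripts if s["status"] == "passed"}
    return 100 * len(passed & requirements) / len(requirements)


scripts = [
    {"requirement_id": "REQ-001", "status": "passed"},
    {"requirement_id": "REQ-001", "status": "passed"},
    {"requirement_id": "REQ-002", "status": "failed"},
]
requirements = {"REQ-001", "REQ-002", "REQ-003"}

print(execution_pct(scripts))                           # 100.0: every script ran
print(requirement_coverage_pct(scripts, requirements))  # ~33.3: one requirement of three covered
```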
Root cause 4: PMO visibility gaps
Every large ERP implementation has a PMO function responsible for governance, risk management, and stakeholder reporting. The PMO's effectiveness depends entirely on the quality of information it has access to.
In most implementations, the PMO builds its weekly status report by asking each workstream lead to update a shared spreadsheet or send a status email on Friday. The PMO then consolidates this manually, reconciles conflicting numbers, and produces a report for the steering committee.
This has two fundamental problems. First, it's slow — by the time the report reaches the steering committee, the information is already several days old. Second, it depends on workstream leads self-reporting accurately, which creates a systematic bias toward optimism. Nobody wants to be the person whose workstream is red.
The result: executive stakeholders consistently receive a rosier picture than the project's actual state. Decisions about timelines, resource allocation, and go-live dates are made on incomplete information. The risks that matter most often don't surface until they've already become problems.
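The alternative is to compute status from the same registers the work lives in. A minimal sketch, assuming the registers record a workstream, a defect status, and an overdue flag; the RAG thresholds are invented for illustration.

```python
from collections import Counter


def workstream_status(open_defects: int, overdue_objects: int) -> str:
    """Derive a RAG status from live counts instead of self-reports.
    Thresholds are illustrative; a real PMO would tune them."""
    if open_defects > 20 or overdue_objects > 5:
        return "red"
    if open_defects > 10 or overdue_objects > 2:
        return "amber"
    return "green"


def snapshot(defects: list[dict], objects: list[dict]) -> dict[str, str]:
    """One status per workstream, recomputed whenever the data changes."""
    open_by_ws = Counter(d["workstream"] for d in defects if d["status"] == "open")
    overdue_by_ws = Counter(o["workstream"] for o in objects if o["overdue"])
    workstreams = set(open_by_ws) | set(overdue_by_ws)
    return {ws: workstream_status(open_by_ws[ws], overdue_by_ws[ws])
            for ws in workstreams}
```

A status derived this way can still be wrong, but it is wrong in an inspectable way: anyone can follow the number back to the records that produced it.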
Root cause 5: Institutional knowledge walking out the door
Enterprise ERP programs typically run 18–36 months. Consulting team composition changes throughout — consultants roll off when their module is complete, get reassigned, or move on.
Every time a consultant leaves, institutional knowledge leaves with them. The functional lead who ran twenty fit-gap workshops knows things that aren't written down anywhere. The integration developer who built the most complex interface understands edge cases that aren't in the technical spec.
If the project runs on spreadsheets and document repositories, that knowledge is essentially unrecoverable once the person leaves. The next person who touches that work starts from a lower point than they should.
This problem compounds with time. By the end of a long implementation, the team building later phases often has limited continuity with the team that ran the original fit-gap analysis. Decisions made in month two come back as questions in month eighteen.
What separates implementations that succeed
The implementations that consistently deliver on time and on budget share practices that address each root cause directly.
Full traceability from requirement to test. Every gap links forward to the custom development object, functional spec, technical spec, test scripts, and any defects. Root cause analysis in minutes, not hours.
A single live register. Custom development objects tracked in one system, always current, accessible to the full project team — not five versions of an Excel file across three SharePoint folders.
Approval workflows that create a record. Scope and cost estimates formally approved by the PMO with a captured timestamp and approver — not inferred from email chains. A minimal example of such a record follows this list.
Test scripts linked to requirements. When a test fails, the defect links back to the gap that created it. Test coverage measured against actual business requirements, not just script execution percentages.
A PMO dashboard with real data. Program health visible in real time, pulled from where the work actually happens — not manually assembled from weekly status emails.
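Returning to the approval record flagged above: a hypothetical sketch of what "a captured timestamp and approver" can look like as data. The names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class Approval:
    """An approval that records itself: who approved what, and when."""
    object_id: str
    approver: str
    approved_days: float
    approved_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Creating the record and creating the audit trail are the same act;
# there is no email chain to reconstruct later.
record = Approval(object_id="RICEFW-042", approver="pmo.lead", approved_days=5.0)
```

The design choice worth noting: the timestamp is captured when the record is created, so approval and audit trail cannot drift apart.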
The tooling problem is solvable
ERP implementations don't fail because the technology is bad. SAP, Oracle, Dynamics, and NetSuite are capable platforms that work for thousands of organizations.
They fail because the tooling used to manage the implementation — Excel registers, SharePoint document libraries, Jira defect trackers, manual PMO reports — was never designed for this work. Each tool handles a piece of the problem in isolation. What's missing is the connection: requirement to gap to custom object to spec to test to defect to deployment, all in one place, always current.
Axia gives implementation teams and PMOs a connected platform for requirements tracking, custom development management, guided testing, and real-time program visibility — regardless of which ERP platform you're running.