What "Go-Live Ready" Actually Means
Go-live readiness is not "we finished building everything." It's a validated state where five domains — technical, process, data, user, and operations — have been independently confirmed as ready, with evidence. An SAP project is go-live ready when the PMO can demonstrate, with data, that each domain meets its acceptance criteria.
What it doesn't mean: everyone feels optimistic. Feelings don't prevent production incidents. Data does.
The PMO's Role
The PMO is the go-live gatekeeper. Their job is not to do the testing, run the cutover, or train the users. Their job is to define readiness criteria, track progress against those criteria, escalate risks early, and make the go/no-go recommendation with supporting evidence.
A PMO that says "we're ready" without data is abdicating their responsibility. A PMO that says "we're not ready" with clear evidence is doing their job.
The 5 Domains of Go-Live Readiness
1. Technical Readiness
What it means: All custom development objects (RICEFWs) are built, unit tested (FUT), and approved. All interfaces are operational. All enhancements are regression-tested. The technical landscape (DEV → QAS → PRD) is stable and configured.
What good looks like:
- 100% of RICEFWs in "approved complete" status
- All FUT test runs passed
- All interfaces tested end-to-end with production-like data
- No open critical or high severity defects against RICEFW objects
- Code freeze in effect and enforced
Warning signs:
- RICEFWs still in "in build" status two weeks before go-live
- FUT pass rate below 90%
- Interface testing postponed to cutover weekend
- Developers still making changes in QAS
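The technical-readiness criteria above lend themselves to an automated gate rather than a self-reported spreadsheet. A minimal sketch, assuming a hypothetical data model (the `Ricefw` fields and status strings are illustrative, not taken from any real ALM tool):

```python
from dataclasses import dataclass

@dataclass
class Ricefw:
    """One RICEFW object; field names are illustrative assumptions."""
    obj_id: str
    status: str              # e.g. "in build", "approved complete"
    fut_passed: bool         # functional unit test result
    open_critical_defects: int

def technical_readiness(objects: list[Ricefw]) -> dict:
    """Evaluate the criteria listed above: 100% approved, all FUT passed,
    no object carrying an open critical/high defect."""
    total = len(objects)
    approved = sum(o.status == "approved complete" for o in objects)
    fut_passed = sum(o.fut_passed for o in objects)
    blocked = [o.obj_id for o in objects if o.open_critical_defects > 0]
    return {
        "approved_pct": 100.0 * approved / total if total else 0.0,
        "fut_pass_pct": 100.0 * fut_passed / total if total else 0.0,
        "objects_with_open_defects": blocked,
        "ready": approved == total and fut_passed == total and not blocked,
    }
```

The point of the sketch is the shape of the gate: "ready" is a conjunction of hard criteria, not an average, so one RICEFW still "in build" turns the whole domain red.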
2. Process Readiness
What it means: System Integration Testing (SIT) is complete. All critical business processes have been tested end-to-end across modules. Process dependencies are validated. Edge cases are covered.
What good looks like:
- All critical E2E process paths tested and passed in SIT
- Cross-module integration points validated (e.g., SD order → FI billing → CO profitability)
- SIT defects resolved or deferred with documented business acceptance
- No open critical or high severity defects in SIT
Warning signs:
- SIT still in progress with untested process paths
- High defect retest failure rate (fixes that fail verification on retest)
- Cross-module scenarios skipped due to time pressure
- "We'll test that in UAT" used as a justification for skipping SIT scenarios
3. Data Readiness
What it means: All data migration loads have been executed, validated, and reconciled. Conversion objects (the C in RICEFW) are complete. Legacy data mapping is finalized. Opening balances are confirmed.
What good looks like:
- At least 2 full migration dry runs completed successfully
- Data reconciliation reports show < 0.1% variance from source
- Business has signed off on migrated data samples
- Cutover migration script is documented and timed
- Fallback plan exists if migration fails
Warning signs:
- Only one migration dry run completed (and it had errors)
- Reconciliation not performed or "in progress"
- Business hasn't reviewed migrated data
- Migration script not documented — it's "in someone's head"
- No time buffer built into cutover for migration re-runs
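The reconciliation check above (variance below 0.1% against source) can be expressed as a simple per-object comparison. A minimal sketch, assuming the per-object totals (record counts or GL balances) are already extracted; the function and key names are illustrative:

```python
def reconcile(source_totals: dict[str, float],
              migrated_totals: dict[str, float],
              threshold_pct: float = 0.1) -> dict:
    """Compare per-object totals between legacy source and migrated target,
    flagging anything over the variance threshold (default 0.1%)."""
    report = {}
    for obj, src in source_totals.items():
        tgt = migrated_totals.get(obj, 0.0)
        if src:
            variance_pct = abs(src - tgt) / src * 100.0
        else:
            variance_pct = 0.0 if tgt == 0 else 100.0
        report[obj] = {
            "source": src,
            "target": tgt,
            "variance_pct": variance_pct,
            "within_tolerance": variance_pct <= threshold_pct,
        }
    return report
```

Running this after every dry run, not just the final load, is what makes the "at least 2 full dry runs" criterion meaningful: the trend across runs should converge toward zero variance.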
4. User Readiness
What it means: User Acceptance Testing (UAT) is complete and signed off. End users have been trained. Training materials are published. Help desk and support channels are established.
What good looks like:
- UAT completed with documented sign-off from each business area
- UAT defects resolved or accepted with documented risk
- Training modules completed by all identified user groups
- Training completion rate above 90%
- Quick reference guides and job aids distributed
Warning signs:
- UAT sign-off obtained verbally but not documented
- Training completion below 70%
- Key user groups haven't touched the system yet
- No help desk plan for go-live week
- Users still logging basic navigation issues in UAT (suggests insufficient training)
5. Operations Readiness
What it means: The Application Management Services (AMS) team is briefed and ready. The cutover runbook is complete. Monitoring is configured. Escalation paths are defined. The hypercare plan is staffed.
What good looks like:
- AMS team has completed knowledge transfer sessions
- Cutover runbook is step-by-step with owners and durations
- System monitoring configured for critical transactions and interfaces
- Hypercare team identified, scheduled, and on-call for first 2-4 weeks
- Rollback plan documented and tested
Warning signs:
- AMS team hasn't been involved until go-live week
- Cutover runbook is a high-level outline, not a detailed script
- No monitoring beyond standard SAP alerts
- Hypercare resources not confirmed
- "Rollback plan" is "we'll figure it out"
The Go/No-Go Meeting
The go/no-go meeting is the single most important meeting of the implementation. It should include:
- Readiness scorecard — each of the 5 domains rated green/amber/red with supporting data
- Open risk register — every unresolved risk with mitigation plan and owner
- Defect summary — total defects by severity, open vs. closed, retest status
- Outstanding items — any incomplete activities with realistic completion dates
- Recommendation — the PMO's clear recommendation: go, no-go, or conditional go with specific conditions
The decision should be data-driven. Every green rating should have evidence behind it. Every amber or red should have a mitigation plan with an owner and a deadline.
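One way to keep the green/amber/red ratings honest is to derive them from counted criteria rather than opinion. A minimal sketch of such a rating rule; the thresholds (80% for red, anything open for amber) are illustrative assumptions, not a standard:

```python
def rate_domain(criteria_met: int, criteria_total: int, open_critical: int) -> str:
    """Illustrative rating rule: red if any critical item is open or fewer
    than 80% of criteria are met; amber below 100%; green only when
    every criterion is met with no open criticals."""
    if open_critical > 0 or (criteria_total and criteria_met / criteria_total < 0.8):
        return "red"
    if criteria_met < criteria_total:
        return "amber"
    return "green"

# Example scorecard for the five domains (counts are hypothetical)
scorecard = {
    "technical":  rate_domain(12, 12, 0),
    "process":    rate_domain(9, 10, 0),
    "data":       rate_domain(6, 10, 1),
    "user":       rate_domain(8, 8, 0),
    "operations": rate_domain(7, 8, 0),
}
```

Whatever thresholds the PMO chooses, they should be agreed before the meeting, so a rating can be challenged on its inputs rather than renegotiated in the room.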
A go/no-go meeting without data is just a meeting where the loudest voice wins. Real go-live readiness means the data is available and undeniable — not because someone said so, but because the system shows it.
Cutover Checklist Essentials
The cutover is the execution of the go-live plan. Key elements:
- Cutover schedule — hour-by-hour timeline from system freeze to go-live confirmation
- Data migration execution — final load, reconciliation, business validation
- Technical cutover — transport imports, system configuration switches, interface activation
- Communication plan — who gets notified at each milestone (stakeholders, users, help desk)
- Go-live confirmation criteria — specific checks that confirm the system is operational (e.g., first sales order processed, first payment run completed)
- Rollback trigger criteria — predefined conditions that trigger the rollback plan (e.g., migration variance > 1%, critical interface failure)
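Because rollback trigger criteria must be predefined, they can be written down as an executable check rather than prose that gets reinterpreted at 3 a.m. A minimal sketch using the two example triggers above; the metric names and the confirmation check are illustrative assumptions:

```python
def rollback_triggered(metrics: dict) -> list[str]:
    """Evaluate predefined rollback triggers against live cutover metrics.
    Returns the list of fired triggers (empty list means proceed)."""
    triggers = []
    # Trigger from the text: migration variance above 1%
    if metrics.get("migration_variance_pct", 0.0) > 1.0:
        triggers.append("migration variance exceeds 1%")
    # Trigger from the text: critical interface failure
    if metrics.get("critical_interfaces_down", 0) > 0:
        triggers.append("critical interface failure")
    # Hypothetical go-live confirmation check (e.g. first sales order)
    if not metrics.get("first_sales_order_ok", True):
        triggers.append("go-live confirmation check failed")
    return triggers
```

The design point: the cutover lead reports metrics, the function reports triggers, and the go/no-go on rollback follows the pre-agreed rule rather than the mood in the war room.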
Common Go-Live Risks and Mitigations
| Risk | Mitigation |
|---|---|
| Data migration variance exceeds threshold | Build 4-hour buffer for re-run; have pre-validated fallback data set |
| Critical interface fails post-cutover | Test interfaces in PRD before go-live; have manual workaround documented |
| Performance degradation under production load | Conduct load testing in QAS with production volumes before cutover |
| Key users unavailable during hypercare | Confirm hypercare roster 2 weeks before go-live; identify backups |
| Undiscovered defects in production | Pre-configure monitoring for critical transactions; staff hypercare with functional + technical resources |
| Rollback required | Document and rehearse rollback procedure; define clear trigger criteria in advance |
Building the Evidence Base
The PMO's credibility in the go/no-go meeting depends entirely on the quality of their data. If RICEFW status is self-reported in a spreadsheet, the data is unreliable. If test results are scattered across Excel and Jira, the summary is incomplete. If defect trends are manually compiled, the trends are lagging.
The most effective PMOs use a single system of record that provides real-time visibility across all five readiness domains — not because it's convenient, but because it's the only way to make data-driven decisions at the speed a go-live demands.
Axia gives PMOs real-time visibility across every domain of go-live readiness — RICEFW status, test execution progress, defect trends, and approval queues — all in one connected dashboard. Because go-live decisions should be based on data, not status reports.