TL;DR: Most Oracle to PostgreSQL migrations that fail do not fail because PostgreSQL was not ready.
They fail because the project was not ready.
The five patterns below repeat across organisations and industries because teams make the same assumptions: that the tooling handles more than it does, that the PL/SQL volume is smaller than it is, and that a rollback plan can be improvised at 2am.
I have seen migration projects fail in every possible way.
Budget overruns where the initial estimate was off by a factor of eight.
Go-live nights that ended in a three-day recovery operation.
Production systems where six months of timestamps silently became midnight.
None of these was caused by PostgreSQL.
None of them was caused by the migration tools.
Every one of them was caused by a planning assumption that turned out to be wrong.
Here are five of the most common failure patterns: anonymised, but representative of what happens when preparation does not match the complexity of the work.
Why Do Oracle to PostgreSQL Migrations Fail?
Most migration failures trace to three root causes: underestimated PL/SQL volume, Oracle-specific dependencies in application code that nobody audited before the project started, and cutover plans that assumed the worst case wouldn’t happen.
The technology works.
The planning is where it goes wrong.
Failure 1: The Team That Skipped the Assessment
Skipping the pre-migration assessment does not save time — it moves the time to a more expensive part of the project.
A team that discovers mid-migration that it has 200 stored procedures instead of 20 has not avoided the assessment effort; it has paid for it mid-sprint, at the most expensive point in the project, while the deadline is already slipping.
A telecoms company ran ora2pg directly on their schema without generating the assessment report first.
The migration tool itself provides a complexity rating and object count — they skipped it because they were confident the schema was simple.
Three weeks in, the team found 200+ stored procedures inside Oracle packages.
The assessment report would have shown this in an hour.
The original project timeline was four weeks.
The final timeline was fourteen.
The assessment report (ora2pg -t SHOW_REPORT) takes less than a day to run and review.
It is the document that sets the budget, the timeline, and the team size.
Running a migration without it is equivalent to quoting a construction project without a survey.
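The invocation itself is a single command. A sketch, assuming ora2pg is installed and ora2pg.conf already points at the source Oracle instance (both are prerequisites, not defaults):

```shell
# Prerequisites (assumed): ora2pg installed, ora2pg.conf configured with
# the source Oracle DSN. --estimate_cost adds an effort estimate to the report.
ora2pg -t SHOW_REPORT -c ora2pg.conf --estimate_cost > assessment.txt
```

The resulting report is the document every budget and timeline conversation should start from.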
Failure 2: The PL/SQL That Nobody Counted
The ora2pg complexity rating is a useful starting point, but it is not a workload estimate.
A schema rated “low complexity” can still contain thousands of lines of PL/SQL if the package structure is deep — because ora2pg counts packages as single objects regardless of how many procedures and functions they contain.
A financial services firm used the ora2pg complexity rating to estimate migration effort and presented it to the board as the basis for the budget.
The rating said low complexity.
Nobody expanded the report to object-level detail.
Nobody counted the individual procedures inside each package.
The schema had twelve packages.
The twelve packages contained 140 individual stored procedures and functions.
Actual porting effort to PL/pgSQL came in at eight times the original estimate.
The fix is straightforward: always drill into the ora2pg report at the object level.
Count individual procedures inside packages, not just the packages themselves.
Then add 30% for testing each ported unit.
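As a quick sanity check on package contents, exported package source can be scanned for individual procedure and function declarations. The block below builds a small invented package body (billing_pkg.sql is a made-up sample) to show the pattern; in practice you would point the grep at the package bodies exported from Oracle.

```shell
# billing_pkg.sql is an invented sample; in a real project, grep the
# package bodies exported from Oracle instead.
cat > billing_pkg.sql <<'EOF'
CREATE OR REPLACE PACKAGE BODY billing_pkg AS
  PROCEDURE charge_customer(p_id NUMBER) IS BEGIN NULL; END;
  FUNCTION outstanding_balance(p_id NUMBER) RETURN NUMBER IS BEGIN RETURN 0; END;
END billing_pkg;
EOF

# Count individual PROCEDURE/FUNCTION declarations, not packages.
grep -ciE '^[[:space:]]*(procedure|function)[[:space:]]' billing_pkg.sql   # prints 2
```

A schema counted as twelve objects by the complexity rating can expand to 140 units under this count; the porting estimate should be based on the per-unit number.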
Failure 3: The DATE Columns That Silently Lost Time
This is the most common silent data loss pattern in Oracle to PostgreSQL migrations.
Oracle DATE stores both date and time. PostgreSQL DATE stores date only.
When Oracle DATE is mapped to PostgreSQL DATE — which is the default in some configurations — the time component of every value is silently discarded. No error. No warning. The data loads cleanly and the time is gone.
A retail company migrated their booking system.
The application tested perfectly.
Unit tests passed.
The UAT team signed off.
The test data had been generated without time components — every date was midnight by coincidence.
The production system had two years of booking records with real timestamps.
They went live on a Saturday.
By Monday, every historical booking showed midnight as the time.
Appointment data going back two years had to be restored from the Oracle backup and revalidated.
The fix is one line in the ora2pg configuration: remap Oracle DATE to timestamp, globally with the DATA_TYPE directive (DATA_TYPE DATE:timestamp) or per column with MODIFY_TYPE (whose format is table:column:type).
Apply it without exception to every DATE column in the schema.
It costs nothing and prevents this failure entirely.
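In ora2pg.conf the mapping looks like the excerpt below, a sketch rather than a full configuration; the commented MODIFY_TYPE line shows the per-column form with an invented table and column name.

```
# ora2pg.conf excerpt: map Oracle DATE (which stores date AND time) to
# PostgreSQL timestamp so the time component survives the migration.
DATA_TYPE DATE:timestamp
# Per-column form (table:column:type), if only some columns need remapping:
#MODIFY_TYPE bookings:booked_at:timestamp
```

Verify the mapping after the schema conversion by checking that no converted column is plain DATE unless it genuinely carries no time component.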
About to start a migration assessment and not sure what you’re walking into?
I offer a fixed-fee assessment that reviews your schema complexity, PL/SQL volume, and application SQL dependencies — and delivers a written risk register before any migration work starts.
See what the assessment covers
Failure 4: The Application Code Nobody Read
The database migration is the visible part of the project.
The application changes are the part that derails it.
Oracle SQL dialect — ROWNUM, DUAL, NVL(), (+) outer joins, CONNECT BY, SYSDATE — does not run on PostgreSQL.
If the application codebase has not been searched for these patterns before the migration starts, the scope of the application change work is unknown.
Unknown scope is the fastest way to miss a deadline.
A logistics company migrated a core operations database.
The migration itself completed cleanly.
Row counts matched.
Sequence values were reset correctly.
The cutover window closed on time.
The application refused to start.
The codebase contained 140 occurrences of Oracle dialect SQL across eleven different services.
None of this had been identified before the cutover date.
The team had assumed the application layer used standard ANSI SQL.
It did not.
Remediation took three weeks of application development work that had not been budgeted.
The fix: before writing a single line of migration script, search the application codebase for Oracle-specific SQL patterns.
The search takes hours.
Finding the results in production takes weeks.
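A first-pass audit can be a recursive grep for the dialect markers listed above. The block below creates a one-line sample file under src/ (an invented path and filename) so the pattern is visible; in practice you would run the grep against the real application tree.

```shell
# src/ and report_dao.java are invented placeholders for a real codebase.
mkdir -p src
cat > src/report_dao.java <<'EOF'
String q = "SELECT SYSDATE FROM DUAL WHERE ROWNUM <= 10";
EOF

# Flag Oracle-only constructs; extend the pattern list for your codebase.
grep -rniE 'ROWNUM|FROM DUAL|NVL\(|\(\+\)|CONNECT BY|SYSDATE' src/
```

Each hit is a line of application code that must be rewritten or verified before cutover, which turns the application workstream from an unknown into a countable scope.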
Failure 5: The Cutover With No Rollback Plan
“We’ll roll back if we need to” is not a rollback plan.
A rollback plan is a written document that specifies exactly what reverting to Oracle requires, how long it takes, who executes each step, and what the data state will be at the point of rollback.
If that document does not exist before the cutover window opens, there is no rollback — there is only an improvised recovery under pressure.
A healthcare organisation went live on a Sunday evening.
Performance problems appeared at 2am.
The team made the decision to roll back.
The Oracle environment had been partially decommissioned three days earlier to reclaim infrastructure costs.
The DBA who had done the decommission was not in the cutover team.
Nobody had documented what had been removed.
Recovery took three days.
The organisation operated on manual fallback procedures throughout.
The lesson is not that the cutover should have been delayed.
The lesson is that the rollback plan must be written, tested in staging, and the Oracle environment must remain fully intact — not partially, not mostly — until the new system has been signed off in production.
Decommission Oracle after sign-off, not before go-live.
Frequently asked questions
What is the most common reason Oracle to PostgreSQL migrations fail?
Underestimated PL/SQL volume is the most common cause of budget and timeline failure.
Teams assess the number of database objects but do not count the individual procedures inside packages.
The second most common cause is undiscovered Oracle dialect SQL in application code that only surfaces at cutover.
Can you recover from a failed Oracle to PostgreSQL migration?
Yes — if the Oracle environment is still intact.
The critical prerequisite is keeping the source Oracle system fully operational until the migration has been signed off in production.
If Oracle has been partially decommissioned before sign-off, recovery becomes a restoration exercise rather than a rollback, which is significantly more expensive and slower.
How do you avoid going over budget on a migration?
Run the full ora2pg assessment report before setting a budget.
Expand it to object-level detail and count individual procedures inside packages.
Search the application codebase for Oracle dialect SQL before scoping the application change work.
Budget at least 30% of the project effort for testing.
Every migration that has gone significantly over budget skipped at least one of these steps.
What is the single most important thing to do before starting a migration?
Run the pre-migration assessment.
The ora2pg assessment report (ora2pg -t SHOW_REPORT) gives you object counts, a complexity rating, and an estimate of migration effort.
It takes less than a day and is the only reliable basis for a project budget and timeline.
Every other planning decision flows from it.
In summary
These five failures are not unusual.
They are the default outcome when the preparation does not match the complexity of the work.
The database technology is not the risk.
PostgreSQL is mature, production-ready, and used at scale across banking, telecoms, and healthcare in the EU.
The risk is in the planning: skipped assessments, misread complexity ratings, unmapped data types, unaudited application code, and rollback plans that exist only as intentions.
Every one of these failures was avoidable with a week of proper preparation at the start of the project.
If you are planning a migration and want an independent view of where the risk sits before committing to a timeline or budget, get in touch →
