Oracle to PostgreSQL Migration Best Practices: A Checklist for DBAs and Architects

TL;DR: Most Oracle to PostgreSQL migrations don't fail because of the tools — they fail because of skipped steps.
This checklist covers every phase from pre-migration assessment through cutover: the data type traps that cause silent data loss, the sequence and NULL differences that break applications after go-live, the testing steps that get cut when timelines slip, and the cutover decisions that determine whether you can roll back if something goes wrong.


The Oracle to PostgreSQL migration looked straightforward on paper.

The schema was simple — fifty tables, no partitioning, light PL/SQL.

The team ran ora2pg, the data loaded cleanly, row counts matched.

They went live on a Saturday morning.

By Monday, the support queue was full.

Every timestamp in the system showed midnight.

The Oracle DATE columns had been mapped to PostgreSQL DATE — which stores date only.

Two years of time data, silently discarded.

No error.

No warning.

That one mistake — a wrong default in the migration config — is avoidable.

So is every other failure mode on this list.



What Is the Most Common Cause of Oracle to PostgreSQL Migration Failures?

The most common cause is not technical complexity — it is unchecked assumptions.

Teams assume PostgreSQL handles data types, NULL semantics, and SQL syntax the same way Oracle does.

It does not.

The differences are small enough to miss during development and large enough to break production.

The checklist below is built from these failure patterns.

Each item exists because someone skipped it and paid for it later.


Phase 1: Pre-Migration Assessment

The assessment should cover five areas: Oracle-specific features in use, every DATE column that must be mapped to TIMESTAMP, empty string vs NULL usage in the application, a full count of PL/SQL objects, and a search for Oracle dialect SQL in the codebase.
Miss any of these and you will find the gap mid-project, when it is expensive to fix.

The assessment phase determines the scope and cost of everything that follows.
Skipping it is the single fastest way to exceed budget and miss the go-live date.

Audit Oracle-specific features in use
Check for partitioning, Advanced Queuing, materialized view fast refresh, Oracle Text, and Workspace Manager.
Each of these requires an architectural decision — not just a syntax change.
Some have PostgreSQL equivalents.
Some require redesign.
Identify them before the project starts, not after schema conversion is 80% complete.

Map every Oracle DATE column — always to TIMESTAMP
Oracle DATE stores both date and time.
PostgreSQL DATE stores date only.
Mapping Oracle DATE to PostgreSQL DATE silently discards the time component of every value in that column.
There is no error and no warning — the data loads cleanly and the time is gone.
Every Oracle DATE column must be mapped to PostgreSQL TIMESTAMP in the ora2pg configuration, without exception.
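A sketch of the corresponding ora2pg configuration. Directive names follow the ora2pg documentation, but verify the exact syntax against your ora2pg version; the table and column names are hypothetical:

```
# ora2pg.conf -- map Oracle DATE to timestamp globally
DATA_TYPE	DATE:timestamp
# or override a single column (format: table:column:new_type)
MODIFY_TYPE	orders:created_at:timestamp
```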

Audit empty string vs NULL usage
Oracle treats an empty string ('') as NULL.
PostgreSQL treats them as distinct values.
An application that inserts '' expecting a NULL will behave differently after migration.
Query the source schema for columns with NOT NULL constraints and check application code for empty string inserts before migration starts.
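The difference is easy to demonstrate, and the source-side audit can start from the Oracle data dictionary. A sketch, with a hypothetical schema name:

```sql
-- PostgreSQL: the empty string is a value, not NULL
SELECT '' IS NULL AS empty_is_null;   -- false (the same test is TRUE on Oracle)

-- On Oracle: list the NOT NULL string columns that '' inserts would break
SELECT table_name, column_name
FROM all_tab_columns
WHERE owner = 'APP_SCHEMA'
  AND nullable = 'N'
  AND data_type IN ('VARCHAR2', 'CHAR');
```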

Count and categorise all PL/SQL objects
Run the ora2pg assessment report (ora2pg -t SHOW_REPORT) to get an object count and migration cost estimate.
Record the number of stored procedures, functions, packages, triggers, and package bodies.
This is the primary driver of migration effort and cost.
A schema with no PL/SQL and no Oracle-specific features can migrate in days.
A schema with fifty stored procedures and ten packages takes weeks.

Identify Oracle-specific SQL in application code
Search the application codebase for: ROWNUM, FROM DUAL, NVL(, DECODE(, (+), CONNECT BY, SYSDATE, ADD_MONTHS, TRUNC(, TO_DATE(.
Each occurrence is an application change required after the database migrates.
Knowing the volume before the project starts prevents scope surprises halfway through.
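A quick way to size this work is a recursive grep over the codebase. The sketch below builds a tiny hypothetical source tree so the commands are self-contained; point the same pattern at your real src/ directory:

```shell
# Hypothetical source tree standing in for the real application codebase
mkdir -p /tmp/appsrc
cat > /tmp/appsrc/report_dao.sql <<'EOF'
SELECT NVL(region, 'unknown') FROM sales WHERE ROWNUM <= 10;
SELECT SYSDATE FROM DUAL;
EOF

# Each matching line is an application change to schedule
grep -rnE 'ROWNUM|FROM DUAL|NVL\(|DECODE\(|\(\+\)|CONNECT BY|SYSDATE|ADD_MONTHS\(|TO_DATE\(' /tmp/appsrc
```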

Inventory sequence CACHE values and CURRVAL usage
Oracle sequences default to CACHE 20.
PostgreSQL sequences default to CACHE 1.
High-throughput inserts will behave differently.
Also identify every location in application code that calls CURRVAL. PostgreSQL provides currval('sequence_name') with the same session-local semantics, but the Oracle seq.CURRVAL syntax will not parse, so every call site must change; the more idiomatic fix is INSERT ... RETURNING or lastval().
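A sketch of the replacement pattern, with hypothetical table and sequence names:

```sql
-- Oracle pattern:
--   INSERT INTO orders (id, customer_id) VALUES (orders_seq.NEXTVAL, 42);
--   SELECT orders_seq.CURRVAL FROM DUAL;

-- PostgreSQL: return the generated key directly from the insert
INSERT INTO orders (customer_id) VALUES (42) RETURNING id;

-- or read the session-local current value afterwards
SELECT currval('orders_id_seq');
```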


Phase 2: Schema Conversion

Sequence CACHE
Set a realistic CACHE value in the generated sequence DDL.
For most OLTP workloads, CACHE 20 or higher is appropriate.
PostgreSQL's CACHE 1 hands out a single value from shared sequence state on every nextval() call; this is safe, but under heavy concurrent inserts the sequence becomes a contention point. A larger CACHE lets each session pre-allocate a block of values in its own memory.
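For example, with a hypothetical sequence name:

```sql
-- Match the Oracle default rather than PostgreSQL's CACHE 1
ALTER SEQUENCE orders_id_seq CACHE 20;
```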

BOOLEAN columns
Oracle has no native BOOLEAN data type.
Applications typically store boolean values as CHAR(1) ('Y'/'N') or NUMBER(1) (1/0).
PostgreSQL has a native BOOLEAN type.
Use ora2pg's MODIFY_TYPE directive to convert these columns to BOOLEAN rather than carrying the Oracle workaround into the new database.
Update application code to use true/false instead of 'Y'/'N' or 1/0.
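A conversion sketch for a 'Y'/'N' column, assuming a hypothetical users.is_active CHAR(1) column:

```sql
ALTER TABLE users
  ALTER COLUMN is_active TYPE boolean
  USING (is_active = 'Y');   -- 'Y' becomes true, anything else false; NULL stays NULL
```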

Identifier case
Oracle stores unquoted identifiers in uppercase.
PostgreSQL stores unquoted identifiers in lowercase.
Never quote identifiers in PostgreSQL unless the name contains special characters or mixed case that must be preserved.
Quoted identifiers in PostgreSQL are case-sensitive — "Employee" and employee are different objects.
The safest approach is to let ora2pg convert everything to lowercase unquoted identifiers and update application code to match.
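The folding rules in three lines:

```sql
CREATE TABLE Employees (id int);   -- stored as employees
SELECT * FROM EMPLOYEES;           -- works: unquoted names fold to lowercase
-- SELECT * FROM "Employees";      -- fails: no object with that exact case exists
```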

Partitioning
Oracle list, range, and hash partitioning map directly to PostgreSQL declarative partitioning.
Oracle interval partitioning (automatic range partition creation) has no direct equivalent — implement it with a range partition and a scheduled job to add future partitions.
Oracle composite partitioning (e.g. range-hash) requires manual design in PostgreSQL.
Identify the partitioning strategy and test it in PostgreSQL before migrating data.
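A declarative range-partitioning sketch with hypothetical names; the monthly partition is the kind of object a scheduled job would create ahead of time to stand in for Oracle interval partitioning:

```sql
CREATE TABLE events (
    id         bigint    NOT NULL,
    created_at timestamp NOT NULL
) PARTITION BY RANGE (created_at);

-- Created up front (or by a cron / pg_cron job for future months)
CREATE TABLE events_2025_01 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
```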

Index types
Oracle Bitmap indexes have no equivalent in PostgreSQL.
Identify every Bitmap index in the source schema and decide whether a standard B-tree index, a partial index, or a BRIN index achieves the same query plan.
Do not skip this step — missing indexes are one of the most common causes of post-migration performance problems.
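Two common replacements for a bitmap index on a low-cardinality column, with hypothetical names:

```sql
-- Partial index: serves queries that filter on one hot value
CREATE INDEX idx_orders_open ON orders (created_at) WHERE status = 'OPEN';

-- BRIN: compact option for very large tables where the column
-- correlates with the physical row order
CREATE INDEX idx_orders_created_brin ON orders USING brin (created_at);
```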


Does Oracle DATE Really Need to Be Mapped to TIMESTAMP?

Yes — always.

Oracle DATE stores both date and time.

PostgreSQL DATE stores date only.

Mapping Oracle DATE to PostgreSQL DATE silently discards the time component of every value — no error, no warning.
Set MODIFY_TYPE date TIMESTAMP in the ora2pg configuration and apply it to every DATE column in the schema.


Phase 3: Data Migration

Disable FK trigger checks before loading, load in order (DDL → data → sequences), reset every sequence after load using setval(), validate row counts for every table, and spot-check data in DATE, numeric, and type-converted columns.
Missing any of these steps can cause silent data corruption or insert failures after go-live.

Disable FK trigger checks before loading
ora2pg exports table data in alphabetical order, not in foreign key dependency order.
A child table will often appear before its parent in the data file, causing FK violations on load.
Load data inside a session with SET session_replication_role = replica to disable FK trigger checks, then restore DEFAULT immediately after the load completes.
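The load session looks roughly like this (session_replication_role is a superuser-level setting; file and table names are hypothetical):

```sql
SET session_replication_role = replica;   -- FK triggers stop firing in this session
\copy child_table  FROM 'child_table.dat'
\copy parent_table FROM 'parent_table.dat'
SET session_replication_role = DEFAULT;   -- restore immediately after the load
```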

Load in the correct sequence
Load in this order: tables DDL → data → sequences.
Tables must exist before data can be loaded.
Data must be loaded before sequence values are reset — COPY bypasses sequences entirely, leaving them at their START value regardless of what IDs were loaded.

Reset sequences after data load
After loading data, every sequence is still at its START value.
The next INSERT will attempt to use a sequence value that already exists in the table as a loaded ID.
Run setval() for every sequence to advance it past the highest loaded ID before the application goes live.
The ora2pg SEQUENCE export generates this script — run it as the final step.
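The per-sequence statement in that script looks like this, with hypothetical names:

```sql
-- Advance the sequence past the highest loaded ID (falls back to 1 on an empty table)
SELECT setval('orders_id_seq',
              COALESCE((SELECT MAX(id) FROM orders), 1));
```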

Validate row counts for every table
After load, compare row counts between Oracle and PostgreSQL for every table.
A discrepancy means a COPY error was silently skipped.
Check the psql output for ERROR lines during the load: a single bad row aborts the entire COPY for that table, and unless ON_ERROR_STOP is set, psql moves on to the next statement, so one table can end up empty while the rest of the script appears to succeed.
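The comparison itself is trivial once counts are collected (SELECT COUNT(*) per table on each side). A minimal sketch, with hypothetical table names and counts:

```python
def diff_row_counts(oracle_counts, pg_counts):
    """Return {table: (source_count, target_count)} for every mismatch.

    Inputs are dicts of table name -> row count gathered from each
    database; a table missing on one side appears with a None count.
    """
    mismatches = {}
    for table in sorted(set(oracle_counts) | set(pg_counts)):
        src = oracle_counts.get(table)
        dst = pg_counts.get(table)
        if src != dst:
            mismatches[table] = (src, dst)
    return mismatches

# Example: one table short by three rows, one table missing entirely
oracle = {"orders": 120_000, "customers": 5_400, "audit_log": 9_812}
postgres = {"orders": 119_997, "customers": 5_400}
print(diff_row_counts(oracle, postgres))
# -> {'audit_log': (9812, None), 'orders': (120000, 119997)}
```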

Spot-check data in high-risk columns
After validating row counts, manually check values in DATE/TIMESTAMP columns, numeric columns with high precision, and any columns that were subject to type conversion.
Automated row counts confirm completeness; spot checks confirm correctness.

Mid-migration and not sure if you've covered everything?
I offer a fixed-fee migration assessment that reviews your schema, config, and cutover plan and delivers a written risk register before go-live.
See what the assessment covers →

Phase 4: Application Changes

Every Oracle dialect SQL construct must be replaced: ROWNUM → LIMIT, FROM DUAL removed, NVL → COALESCE, DECODE → CASE, (+) outer joins → ANSI LEFT JOIN, CONNECT BY → WITH RECURSIVE, SYSDATE → NOW(), and CURRVAL replaced with lastval() or RETURNING.
PostgreSQL will reject Oracle dialect SQL at runtime — these are not optional.

Oracle → PostgreSQL equivalent
ROWNUM <= n → LIMIT n
FETCH FIRST n ROWS ONLY → FETCH FIRST n ROWS ONLY (ANSI, works in both)
FROM DUAL → remove; use a bare SELECT value
NVL(a, b) → COALESCE(a, b)
DECODE(col, v1, r1, v2, r2, default) → CASE col WHEN v1 THEN r1 WHEN v2 THEN r2 ELSE default END
(+) outer join syntax → ANSI LEFT JOIN ... ON
CONNECT BY PRIOR → WITH RECURSIVE
SYSDATE → NOW() or CURRENT_TIMESTAMP
CURRVAL → lastval() or RETURNING id
ADD_MONTHS(d, n) → d + n * INTERVAL '1 month'
TRUNC(date) → DATE_TRUNC('day', date)

Beyond syntax, check for implicit type conversions Oracle accepts that PostgreSQL rejects.
Oracle will silently cast '123' to a number in a numeric comparison.
PostgreSQL will throw a type mismatch error.
Identify these in the application and add explicit casts.
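A typical case, assuming a hypothetical orders.order_ref VARCHAR column:

```sql
-- Oracle casts implicitly; PostgreSQL rejects this:
--   SELECT * FROM orders WHERE order_ref = 1001;
--   ERROR:  operator does not exist: character varying = integer

-- Fix: pass the value as text, or cast the column explicitly
SELECT * FROM orders WHERE order_ref = '1001';
SELECT * FROM orders WHERE order_ref::integer = 1001;
```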


Phase 5: Testing

Run regression tests against the migrated database
Every function that touches the migrated schema must be tested against PostgreSQL before cutover.
If the application has an existing test suite, run it in full.
If it does not, this is the phase where the absence of test coverage becomes expensive — manual functional testing is the only alternative.

Establish a performance baseline before cutover
Run the same representative query set against both Oracle and PostgreSQL.
Collect execution plans from PostgreSQL using EXPLAIN (ANALYZE, BUFFERS).
PostgreSQL's query planner makes different decisions than Oracle's.
Missing indexes, stale statistics, and wrong join strategies all show up here — before they show up in production.
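A baseline run per query looks like this (query and names are hypothetical); run ANALYZE after the data load first so the planner has current statistics:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, created_at
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 20;
-- Watch for sequential scans on large tables where Oracle used an index.
```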

Test under realistic load
A query that performs well in isolation may behave differently under concurrent load.
Run a load test against PostgreSQL using real or representative data volumes before making a go/no-go decision.
pgbench provides basic throughput testing; application-level load testing against the migrated schema is preferable if the tooling exists.

Validate NULL handling edge cases
Test every query that filters on NULL, inserts NULL, or compares nullable columns.
NULL semantics are consistent between Oracle and PostgreSQL in standard SQL, but application code written to Oracle's empty-string-as-NULL behaviour will produce wrong results.

Test sequence-generated IDs under concurrent inserts
Run concurrent inserts that rely on sequence-generated primary keys.
Verify there are no collisions and that sequence gaps are within acceptable bounds for the application.


Phase 6: Cutover

Decide the cutover strategy before technical work starts
There are two choices: maintenance-window cutover (application goes offline during switchover) or zero-downtime cutover (logical replication keeps PostgreSQL in sync during transition).
Maintenance-window cutover is simpler, cheaper, and more reliable.
For most internal or non-consumer-facing systems, a two-hour maintenance window at off-peak hours is acceptable.
Zero-downtime cutover adds significant complexity and cost — only use it when the SLA genuinely requires it.

Write a rollback plan
Define exactly what reverting to Oracle requires: how long it takes, who executes it, and what the data state will be at that point.
A rollback plan that has not been written does not exist.
If something goes wrong after cutover and the team has to improvise a rollback under pressure, the outcome will be worse than if the plan had been written and rehearsed.

Define go/no-go criteria in writing
Before cutover night, agree in writing on the conditions under which the cutover proceeds and the conditions under which it rolls back.
Go/no-go decisions made under time pressure in the middle of a maintenance window are unreliable.

Dry-run the cutover in staging
Execute the full cutover procedure — including the rollback — in a staging environment at least once before production.
The dry run will surface procedural gaps that the technical preparation missed.

Write a communications plan
Identify who needs to be notified, at what stage, in what channel, and who has the authority to approve rollback.
Document it before the cutover window opens.


Frequently Asked Questions

What is the most common mistake in an Oracle to PostgreSQL migration?

Mapping Oracle DATE columns to PostgreSQL DATE instead of TIMESTAMP.
Oracle DATE stores both date and time.
PostgreSQL DATE stores date only.
The migration completes without errors — and every time value in those columns is silently set to midnight.
Always map Oracle DATE to PostgreSQL TIMESTAMP.

Do I need to run performance tests before cutover?

Yes — always.
PostgreSQL's query planner makes different decisions than Oracle's.
Indexes that existed in Oracle may not have been created in PostgreSQL.
A query that ran in 200ms on Oracle may run in 20 seconds on PostgreSQL without the right index.
Establish a performance baseline before cutover so there are no surprises on go-live day.

What is the safest cutover approach for a production Oracle database?

A maintenance-window cutover — where the application goes offline briefly during switchover — is the safest option for most systems.
It eliminates the complexity of keeping two databases in sync and gives the team a clean, deterministic switchover point.
Zero-downtime cutover via logical replication is possible but adds significant cost and risk.
Use it only when the SLA genuinely cannot tolerate a maintenance window.

Can I roll back after cutover if something goes wrong?

Yes, but only if the rollback plan was written before the cutover window opened.
A rollback requires the Oracle environment to still be intact, the data state to be understood, and the procedure to have been rehearsed.
If none of these conditions exist, rolling back becomes an improvised recovery under pressure.
Always write and rehearse the rollback procedure before production cutover.

Does PostgreSQL handle NULL the same way Oracle does?

For standard SQL NULL semantics — yes.
The difference is in empty strings: Oracle treats '' as NULL, PostgreSQL treats them as distinct values.
Application code that inserts '' expecting it to satisfy a NOT NULL constraint, or that queries for NULL expecting to match empty strings, will behave differently on PostgreSQL.
Audit for this pattern during the assessment phase.


In Summary

An Oracle to PostgreSQL migration succeeds or fails in the preparation.
The tools are free, well-documented, and reliable.
The failures come from unchecked assumptions: Oracle DATE mapped to the wrong type, sequences left at their START value, application SQL that Oracle accepted silently and PostgreSQL rejects explicitly.

This checklist covers the failure modes that appear most often in real migrations.
Work through it phase by phase — assessment, schema conversion, data migration, application changes, testing, cutover — and none of these should surprise you on go-live day.

Bookmark this page — and if you need help applying it in production, get in touch →
