TL;DR: Debezium reads Oracle's redo logs through LogMiner, publishes every change as a Kafka event, and a JDBC sink connector applies those events to PostgreSQL in real time.
The result is a transparent, replayable replication pipeline you can run for weeks before cutover — useful for any Oracle to PostgreSQL migration that needs zero or near-zero downtime.
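To make the source side of that pipeline concrete, here is a sketch of what registering a Debezium Oracle connector with Kafka Connect can look like. The hostnames, credentials, table names, and topic prefix below are placeholders, not the post's actual setup:

```json
{
  "name": "oracle-source",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.hostname": "oracle-host",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "*****",
    "database.dbname": "ORCLCDB",
    "topic.prefix": "bank",
    "table.include.list": "BANK.ACCOUNTS,BANK.TRANSACTIONS",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.bank"
  }
}
```

Debezium's Oracle connector uses LogMiner as its default capture adapter, so no extra adapter configuration is needed for the setup described here; the JDBC sink side is configured analogously with a PostgreSQL connection URL.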
This post walks through a working setup end to end, on real Oracle and PostgreSQL servers, with a self-contained banking schema you can paste and reproduce.
Oracle to Postgres Migration: CO Schema Issues and Fixes
Oracle's CO (Customer Orders) schema is the modern replacement for the older OE schema.
It ships with Oracle 19c, is actively maintained, and is built the way most real Oracle applications are built today: IDENTITY columns instead of sequence-trigger pairs, JSON stored in BLOB columns, and views that rely on Oracle-specific SQL functions.
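As an illustration of why those features matter for migration, here is the general shape of the pattern (this is a simplified sketch, not CO's actual DDL):

```sql
-- Oracle (CO-style): IDENTITY column plus JSON stored in a BLOB
CREATE TABLE orders (
  order_id      NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY PRIMARY KEY,
  order_details BLOB CHECK (order_details IS JSON)
);

-- PostgreSQL target: IDENTITY maps cleanly, but BLOB + IS JSON
-- has no direct equivalent and is best rewritten as jsonb
CREATE TABLE orders (
  order_id      bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  order_details jsonb
);
```

The IDENTITY clause carries over almost verbatim because PostgreSQL implements the same SQL-standard syntax; the JSON-in-BLOB pattern is where tooling needs help, since a byte-for-byte copy of the BLOB would lose the ability to query the JSON on the Postgres side.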
I ran the full migration using ora2pg 25.0 with Oracle 19c as the source and PostgreSQL 18 as the target.
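For reference, an ora2pg run of this shape is driven by a configuration file; a minimal excerpt might look like the following (connection values are placeholders for my environment):

```ini
# ora2pg.conf (excerpt)
ORACLE_DSN   dbi:Oracle:host=oracle-host;service_name=ORCLPDB1;port=1521
ORACLE_USER  system
ORACLE_PWD   *****
SCHEMA       CO
PG_VERSION   18
```

Each object type is then exported separately, e.g. `ora2pg -c ora2pg.conf -t TABLE -o co_tables.sql`, with further passes for VIEW, SEQUENCE, and data (COPY).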
This post covers the five problems that required manual intervention — and why each one will appear in almost every production schema you migrate.
This is the third post in the series.
The HR schema post covered sequence-trigger patterns, %TYPE parameters, and the FK re-apply bug.
The SH schema post covered partitioned tables, bitmap indexes, and materialized views.
CO introduces three new problem categories not seen in either of those.