Oracle's HR schema is the closest thing the database world has to a universal reference.
Every Oracle DBA has seen it.
Every migration consultant uses it to explain how the process works.
What makes it genuinely useful as a case study is not its size — it is seven tables and 107 employees.
What makes it useful is that it contains, in minimal form, the exact patterns that make Oracle-to-PostgreSQL migrations non-trivial: sequences wired to triggers, stored procedures, type mismatches that compile cleanly and fail at runtime.
I ran the full migration using ora2pg 25.0 with Oracle 19c as the source and PostgreSQL 18 as the target.
This post covers the five things that required manual intervention — and why each one matters far more on a production schema than it does on a demo.
What HR Contains
The HR schema is Oracle's standard reference schema, shipped with every Oracle installation and available on GitHub.
It has seven tables covering the standard HR hierarchy: regions, countries, locations, departments, jobs, employees, and job history.
It has three standalone sequences (employees_seq, departments_seq, locations_seq), one view (EMP_DETAILS_VIEW), two stored procedures (secure_dml, add_job_history), and five triggers.
Three of those triggers exist solely to call sequence.NEXTVAL on INSERT — a pattern Oracle developers used before DEFAULT with sequences was widely adopted.
The remaining two implement business logic: one restricts DML to office hours, one writes a history row when an employee changes job.
ora2pg rates the HR schema B-5: code rewriting required, approximately one person-day for a PostgreSQL expert.
The rating is driven entirely by the stored procedures and triggers.
The tables, view, and data migrate with no intervention.
1. NUMBER Type Mapping: The Monetary Column Trap
ora2pg has two directives that handle most NUMBER columns automatically.
PG_INTEGER_TYPE 1 maps NUMBER(p) columns with no scale to the correct integer type: precision 1–4 becomes smallint, 5–9 becomes integer, 10–18 becomes bigint.
For the HR schema this means EMPLOYEE_ID NUMBER(6) becomes INTEGER, DEPARTMENT_ID NUMBER(4) becomes SMALLINT — no manual work needed.
The trap is PG_NUMERIC_TYPE 1, which handles columns that have a declared scale.
This directive does not just look at the declared type — it queries the actual data values.
If a NUMBER(8,2) column currently holds only whole-number values, it maps the column to double precision rather than numeric.
EMPLOYEES.SALARY is NUMBER(8,2) in Oracle.
Every salary in the HR dataset is a whole number: 24000, 17000, 9000. With PG_NUMERIC_TYPE 1 active, ora2pg maps SALARY to double precision.
double precision is a floating-point type.
Floating-point types are wrong for monetary values — they cannot represent all decimal fractions exactly, which produces rounding errors in financial calculations.
The correct type for salary is numeric.
The fix is a MODIFY_TYPE override:
MODIFY_TYPE EMPLOYEES:SALARY:numeric
In every real production migration I assess, salary and similar monetary columns need this override.
The PG_NUMERIC_TYPE directive is doing its job correctly — it is inferring types from data.
The inference is just wrong for money, where the declared precision matters more than what the current data happens to contain.
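The hazard is easy to reproduce outside the database. A minimal Python sketch of the difference, using binary floats as a stand-in for double precision and Decimal as a stand-in for numeric:

```python
from decimal import Decimal

# double precision behaves like Python's binary float:
# 0.10 has no exact binary representation, so errors accumulate.
total_float = sum([0.10] * 3)
print(total_float)            # 0.30000000000000004, not 0.30

# numeric behaves like decimal.Decimal: exact decimal arithmetic.
total_exact = sum([Decimal("0.10")] * 3)
print(total_exact)            # 0.30
```

Three dimes that do not add up to thirty cents is exactly the class of bug a double precision salary column invites.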
2. The Sequence Trigger Pattern
In Oracle, the standard pattern for auto-incrementing primary keys was:
- Create a standalone sequence.
- Create a BEFORE INSERT trigger that fires for each row and assigns sequence.NEXTVAL to the ID column.
This is how HR is built. EMPLOYEES.EMPLOYEE_ID has no DEFAULT clause in Oracle — the trigger populates it on every INSERT.
In PostgreSQL, the equivalent is a column DEFAULT nextval('sequence_name').
When a row is inserted without specifying the ID column, PostgreSQL calls nextval() automatically. No trigger is needed.
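In schematic form, on a simplified two-column table (a sketch, not the DDL ora2pg exports):

```sql
CREATE SEQUENCE hr.employees_seq;

CREATE TABLE hr.employees (
    employee_id integer NOT NULL DEFAULT nextval('hr.employees_seq'),
    last_name   varchar(25) NOT NULL
);

-- An INSERT that omits employee_id gets the next sequence value:
INSERT INTO hr.employees (last_name) VALUES ('King');
```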
ora2pg converts the Oracle sequence triggers — it exports them as PostgreSQL trigger functions.
But the TABLE export does not add DEFAULT nextval(...) to the column definitions, because Oracle never had one. After loading the exported DDL, the tables have no automatic ID generation, and any INSERT without an explicit employee_id fails with a "null value in column violates not-null constraint" error.
The fix is to add the defaults after the triggers are loaded:
ALTER TABLE hr.employees ALTER COLUMN employee_id SET DEFAULT nextval('hr.employees_seq');
ALTER TABLE hr.departments ALTER COLUMN department_id SET DEFAULT nextval('hr.departments_seq');
ALTER TABLE hr.locations ALTER COLUMN location_id SET DEFAULT nextval('hr.locations_seq');
Also bind each sequence to its column with OWNED BY so that dropping the table cascades to the sequence automatically.
Without OWNED BY, the sequences become orphans that must be dropped separately.
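The binding is one statement per sequence. For example, for employees_seq:

```sql
ALTER SEQUENCE hr.employees_seq OWNED BY hr.employees.employee_id;
-- DROP TABLE hr.employees now drops employees_seq automatically.
```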
3. %TYPE in Procedure Parameter Signatures
add_job_history is exported by ora2pg with Oracle anchored-type syntax in its parameter list:
-- What ora2pg exports
CREATE OR REPLACE PROCEDURE hr.add_job_history (
p_emp_id job_history.employee_id%TYPE,
p_start_date job_history.start_date%TYPE,
...
PostgreSQL supports %TYPE inside a function or procedure body for variable declarations. It does not support %TYPE in the parameter signature.
Loading the exported file fails immediately with a syntax error.
The fix is to substitute the concrete types:
CREATE OR REPLACE PROCEDURE hr.add_job_history (
p_emp_id integer,
p_start_date timestamp,
p_end_date timestamp,
p_job_id varchar(10),
p_department_id smallint
)
The correct types come from the column analysis done before the migration — EMPLOYEE_ID is INTEGER, START_DATE is TIMESTAMP, and so on.
This is why the type analysis step is not optional: you need the final PostgreSQL types to write this fix.
On a production schema with dozens of stored procedures, this pattern appears frequently.
Every procedure that uses anchored-type parameters in Oracle needs the same manual substitution before it will load in PostgreSQL.
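Note that the restriction is on signatures only: PL/pgSQL accepts %TYPE for variable declarations inside the body, so anchored types can survive there. A minimal sketch (hr.example_fn is a hypothetical function, not part of the export):

```sql
CREATE OR REPLACE FUNCTION hr.example_fn() RETURNS void AS $$
DECLARE
    v_emp hr.job_history.employee_id%TYPE;  -- legal: %TYPE in a declaration
BEGIN
    v_emp := 100;
END;
$$ LANGUAGE plpgsql;

-- By contrast, %TYPE in a parameter list is a syntax error:
--   CREATE FUNCTION f(p hr.job_history.employee_id%TYPE) ...
```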
4. The statement_timestamp() Type Mismatch
The update_job_history trigger calls add_job_history to write a history row when an employee changes job.
It needs to pass the current time as the p_end_date argument.
ora2pg converts Oracle's SYSDATE to statement_timestamp(). statement_timestamp() returns timestamp with time zone in PostgreSQL. add_job_history expects timestamp (no time zone) for p_end_date.
PostgreSQL does not implicitly cast timestamptz to timestamp in procedure calls.
The trigger function loads without error — CREATE FUNCTION succeeds.
The type mismatch only surfaces at runtime, the first time an employee changes job or department.
This is the most insidious category of migration bug: code that compiles cleanly and fails in production.
I use plpgsql_check to catch exactly this class of error.
It performs static analysis on the compiled function bodies and reports type mismatches, missing variables, and invalid procedure calls — all before any application testing.
CREATE EXTENSION plpgsql_check;
Running plpgsql_check against the trigger function in this lab produced:
trigger_fct_update_job_history() | error | procedure add_job_history(integer,
timestamp without time zone, timestamp with time zone, character varying, smallint)
does not exist
The fix is a single cast:
CALL add_job_history(OLD.employee_id, OLD.hire_date, statement_timestamp()::TIMESTAMP,
OLD.job_id, OLD.department_id);
plpgsql_check is not a standard part of the ora2pg workflow — it is not run by default, not mentioned in the ora2pg documentation, and not part of core PostgreSQL; it must be installed as a separate extension.
I run it on every migration before handing the migrated code to the application team. It has caught runtime bugs on every production schema I have migrated.
5. FK Constraints Are Not Re-Applied After Data Load
ora2pg's COPY export drops all foreign key constraints at the start of the data file, loads the data, and then commits.
It does not re-add the FK constraints after the commit.
This is a known bug — issue #1960 in the darold/ora2pg repository.
After loading the data file, the migrated database has zero FK constraints.
The ora2pg -t TEST comparison will show every table as having zero FKs in PostgreSQL, even though the FK definitions exist in the exported DDL.
The fix is to re-apply the FK constraints from the table export after the data load:
grep 'ADD CONSTRAINT.*FOREIGN KEY' HR_tables.sql | \
PGPASSWORD=hr psql -U hr -d hrdb -h localhost
For the HR schema, this restores 10 FK constraints across five tables.
On a production schema with hundreds of tables, the FK count is a critical validation checkpoint — if it does not match Oracle's count after the migration, the data model is incomplete.
Running the Migration: Every Command in Order
Environment
Two VMs on the same network. srv1 (192.168.0.180) runs Oracle 19c with the HR schema in PDB pdb1. srv2 (192.168.0.181) runs Ubuntu with PostgreSQL 18 and ora2pg installed. All commands below run on srv2 unless stated otherwise.
Step 1 — Collect Oracle statistics
Run this on srv1 before generating the assessment report. ora2pg's cost estimates are based on Oracle's stored statistics — stale stats produce inaccurate row counts and migration unit estimates.
-- On Oracle (srv1)
BEGIN
DBMS_STATS.GATHER_SCHEMA_STATS('HR');
DBMS_STATS.GATHER_DICTIONARY_STATS;
END;
/
Step 2 — Create the output directory
mkdir -p /home/fernando/ora2pg-hr/output
Step 3 — Minimal ora2pg.conf
# /etc/ora2pg/ora2pg.conf
ORACLE_DSN dbi:Oracle:host=192.168.0.180;service_name=pdb1;port=1521
ORACLE_USER system
ORACLE_PWD <system password>
SCHEMA HR
EXPORT_SCHEMA 1
OUTPUT_DIR /home/fernando/ora2pg-hr/output
Use system, not hr — the COPY export reads v$database which requires DBA privilege.
Step 4 — Generate the assessment report
ora2pg -t SHOW_REPORT --estimate_cost --dump_as_html \
> /home/fernando/ora2pg-hr/output/report-hr.html
Open the HTML in a browser.
HR scores B-5, approximately one person-day.
Cost drivers are the two stored procedures and two business-logic triggers.
Step 5 — Analyse column types
ora2pg -t SHOW_COLUMN > /home/fernando/ora2pg-hr/output/columns-hr.txt
Then query actual data ranges on Oracle for any NUMBER column with no declared precision:
-- On Oracle (srv1)
SELECT 'REGIONS.REGION_ID' AS col, MIN(region_id), MAX(region_id) FROM regions
UNION ALL
SELECT 'COUNTRIES.REGION_ID', MIN(region_id), MAX(region_id) FROM countries;
Both max at 50 — INTEGER is the correct mapping.
Step 6 — Add MODIFY_TYPE and re-save the config
MODIFY_TYPE REGIONS:REGION_ID:integer,COUNTRIES:REGION_ID:integer,EMPLOYEES:SALARY:numeric
EMPLOYEES:SALARY:numeric is required because PG_NUMERIC_TYPE 1 inspects actual data, finds only whole-number salary values, and maps NUMBER(8,2) to double precision — wrong for a monetary column.
All three overrides on a single line — MODIFY_TYPE ignores everything after the first line.
Step 7 — Run the exports
ora2pg -t TABLE -o HR_tables.sql 2>&1 | tee output/table-export-hr.log
ora2pg -t VIEW -o HR_views.sql 2>&1 | tee output/view-export-hr.log
ora2pg -t SEQUENCE -o HR_sequences.sql 2>&1 | tee output/sequence-export-hr.log
ora2pg -t SEQUENCE_VALUES -o HR_sequence_values.sql 2>&1 | tee output/seqval-export-hr.log
ora2pg -t TRIGGER -o HR_triggers.sql 2>&1 | tee output/trigger-export-hr.log
ora2pg -t PROCEDURE -o HR_procedures.sql 2>&1 | tee output/procedure-export-hr.log
ora2pg -t COPY -o HR_data.sql 2>&1 | tee output/copy-export-hr.log
The TABLE export does not include sequences — the separate SEQUENCE export is mandatory. Without it, the sequence_values load in Step 12 fails with “relation does not exist”.
Step 8 — Write the manual fix files
Two files that cannot be generated from the ora2pg output.
HR_procedures_fixed.sql — replaces %TYPE anchored parameters with concrete types:
SET search_path = hr,public;
CREATE OR REPLACE PROCEDURE hr.add_job_history (
p_emp_id integer,
p_start_date timestamp,
p_end_date timestamp,
p_job_id varchar(10),
p_department_id smallint
) AS $body$
BEGIN
INSERT INTO job_history(employee_id, start_date, end_date, job_id, department_id)
VALUES (p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END;
$body$
LANGUAGE PLPGSQL SECURITY DEFINER;
CREATE OR REPLACE PROCEDURE hr.secure_dml () AS $body$
BEGIN
IF TO_CHAR(clock_timestamp(), 'HH24:MI') NOT BETWEEN '08:00' AND '18:00'
OR TO_CHAR(clock_timestamp(), 'DY') IN ('SAT', 'SUN') THEN
RAISE EXCEPTION '%', 'You may only make changes during normal office hours'
USING ERRCODE = '45205';
END IF;
END;
$body$
LANGUAGE PLPGSQL SECURITY DEFINER;
HR_trigger_fixes.sql — casts statement_timestamp() to avoid the timestamptz mismatch, and adds sequence defaults to the ID columns:
SET search_path = hr,public;
CREATE OR REPLACE FUNCTION hr.trigger_fct_update_job_history() RETURNS trigger AS $BODY$
BEGIN
CALL add_job_history(OLD.employee_id, OLD.hire_date,
statement_timestamp()::TIMESTAMP,
OLD.job_id, OLD.department_id);
RETURN NEW;
END
$BODY$
LANGUAGE 'plpgsql' SECURITY DEFINER;
ALTER TABLE hr.employees ALTER COLUMN employee_id SET DEFAULT nextval('hr.employees_seq');
ALTER TABLE hr.departments ALTER COLUMN department_id SET DEFAULT nextval('hr.departments_seq');
ALTER TABLE hr.locations ALTER COLUMN location_id SET DEFAULT nextval('hr.locations_seq');
ALTER SEQUENCE hr.employees_seq OWNED BY hr.employees.employee_id;
ALTER SEQUENCE hr.departments_seq OWNED BY hr.departments.department_id;
ALTER SEQUENCE hr.locations_seq OWNED BY hr.locations.location_id;
Step 9 — Create the database
sudo -u postgres psql -c "CREATE USER hr WITH PASSWORD 'hr';"
sudo -u postgres psql -c "CREATE DATABASE hrdb OWNER hr;"
Step 10 — Load in dependency order
# Tables first — everything else depends on them
PGPASSWORD=hr psql -U hr -d hrdb -h localhost \
-f /home/fernando/ora2pg-hr/output/HR_tables.sql
# Sequences — must exist before data load and before sequence_values
PGPASSWORD=hr psql -U hr -d hrdb -h localhost \
-f /home/fernando/ora2pg-hr/output/HR_sequences.sql
# Data — loads rows via COPY FROM stdin; FKs are dropped inside the file before load
PGPASSWORD=hr psql -U hr -d hrdb -h localhost \
-f /home/fernando/ora2pg-hr/output/HR_data.sql
Step 11 — Re-apply FK constraints
ora2pg's COPY file drops all FK constraints before the data load and never re-adds them. Without this step the database has zero FK constraints after Step 10.
(echo "SET search_path TO hr;"; \
grep 'ADD CONSTRAINT.*FOREIGN KEY' \
/home/fernando/ora2pg-hr/output/HR_tables.sql) | \
PGPASSWORD=hr psql -U hr -d hrdb -h localhost
Verify the count:
PGPASSWORD=hr psql -U hr -d hrdb -h localhost -c "
SELECT COUNT(*) FROM information_schema.table_constraints
WHERE constraint_schema = 'hr' AND constraint_type = 'FOREIGN KEY';"
Expected: 10
Step 12 — Load the remaining files
# Reset sequences to Oracle's last values — run after data, not before
PGPASSWORD=hr psql -U hr -d hrdb -h localhost \
-f /home/fernando/ora2pg-hr/output/HR_sequence_values.sql
# Procedures — must load before triggers (triggers call these procedures)
PGPASSWORD=hr psql -U hr -d hrdb -h localhost \
-f /home/fernando/ora2pg-hr/output/HR_procedures_fixed.sql
# Triggers — after procedures
PGPASSWORD=hr psql -U hr -d hrdb -h localhost \
-f /home/fernando/ora2pg-hr/output/HR_triggers.sql
# Trigger fixes — replaces the timestamptz trigger function, adds sequence defaults
PGPASSWORD=hr psql -U hr -d hrdb -h localhost \
-f /home/fernando/ora2pg-hr/output/HR_trigger_fixes.sql
# View — after all tables are in place
PGPASSWORD=hr psql -U hr -d hrdb -h localhost \
-f /home/fernando/ora2pg-hr/output/HR_views.sql
Step 13 — Static analysis with plpgsql_check
Install the extension, then check all stored code before any application testing:
sudo apt install -y postgresql-18-plpgsql-check
sudo -u postgres psql -d hrdb -c "CREATE EXTENSION plpgsql_check;"
Check procedures and functions:
PGPASSWORD=hr psql -U hr -d hrdb -h localhost -c "
SELECT pg_proc.oid::regprocedure AS function, level, message
FROM pg_proc,
plpgsql_check_function_tb(pg_proc.oid) AS pcf
WHERE pronamespace = 'hr'::regnamespace
AND prolang = (SELECT oid FROM pg_language WHERE lanname = 'plpgsql')
AND prorettype <> 'trigger'::regtype
ORDER BY 1, 2;"
Check trigger functions:
PGPASSWORD=hr psql -U hr -d hrdb -h localhost -c "
SELECT pg_proc.oid::regprocedure AS function, level, message
FROM pg_proc
JOIN pg_trigger ON pg_trigger.tgfoid = pg_proc.oid,
plpgsql_check_function_tb(pg_proc.oid, pg_trigger.tgrelid) AS pcf
WHERE pg_proc.pronamespace = 'hr'::regnamespace
AND pg_proc.prolang = (SELECT oid FROM pg_language WHERE lanname = 'plpgsql')
AND pg_proc.prorettype = 'trigger'::regtype
ORDER BY 1, 2;"
Expected after all fixes are applied: (0 rows) on both queries.
Step 14 — Run the TEST comparison
Add the PostgreSQL connection to ora2pg.conf:
PG_DSN dbi:Pg:dbname=hrdb;host=localhost;port=5432
PG_USER hr
PG_PWD hr
ora2pg -t TEST 2>&1 | tee /home/fernando/ora2pg-hr/output/migration_diff-hr.txt
What the Final TEST Report Looks Like
After all five fixes are applied, ora2pg -t TEST compares the Oracle source and the PostgreSQL target side by side. Every section should show OK except two known false positives:
COLUMN DEFAULT VALUE DIFF — Oracle used triggers to populate ID columns; PostgreSQL uses DEFAULT nextval(...). The three tables where we added defaults will show a count difference. This is the correct PostgreSQL design — not an error.
FUNCTION COUNT DIFF — PostgreSQL stores trigger logic in a separate function object that has no equivalent in Oracle's data dictionary. The two trigger functions (trigger_fct_secure_employees, trigger_fct_update_job_history) appear as extra PostgreSQL functions with no Oracle counterpart. This is also expected.
Any other DIFF line in the TEST output is a real problem that must be resolved before cutover.
In Summary
HR is a small schema. Fixing five issues on a 107-row dataset is a half-day of work.
On a real production schema — 50 tables, 30 stored procedures, complex trigger chains, 10 million rows — the same five categories of problem exist, at 10× the volume.
The monetary column trap affects every schema that stores prices, salaries, or financial figures.
The %TYPE parameter problem affects every schema with stored procedures.
The statement_timestamp() type mismatch affects every trigger that records a current timestamp.
The FK re-apply problem affects every schema with foreign keys.
The sequence-trigger-to-default pattern affects every Oracle schema built before version 12c.
The value of running the migration on HR first is not the migration itself — it is building the checklist that you apply to every schema that follows.
If you are planning an Oracle-to-PostgreSQL migration and want an independent assessment of complexity, effort, and risk before committing to a timeline, I offer a fixed-price migration audit that produces exactly that.
