TL;DR: Most Oracle-to-PostgreSQL data type mappings are straightforward.
The dangerous ones are not.
Oracle DATE includes a time component that PostgreSQL DATE silently discards.
Oracle NUMBER maps to NUMERIC by default, which is correct but slow for integer workloads.
Empty strings in Oracle are NULL; in PostgreSQL they are not.
This guide gives you the full mapping table and covers the five traps that cause silent data corruption during migration.
The most common data problem in an Oracle to PostgreSQL data type mapping exercise isn’t a missing column or a failed import.
It’s a mapping that looks correct, passes validation, and silently changes your data.
In real migration projects, dates have lost their time component, integer arithmetic has slowed to a crawl, and application logic has broken because empty strings stopped behaving like NULL. None of it was obvious at the schema conversion stage.
All of them were preventable.
This guide gives you the complete mapping table and explains the decisions that matter.
If you’re at the start of your Oracle to PostgreSQL migration, bookmark this page.
You’ll come back to it.
Table of contents
The Complete Oracle to PostgreSQL Data Type Mapping Table
The table below covers every common Oracle type.
The “Safe Default” column is what ora2pg uses automatically.
The “Recommended” column is what you should actually use after analysing your data.
| Oracle Type | Safe Default | Recommended | Notes |
|---|---|---|---|
| NUMBER | NUMERIC | INTEGER, BIGINT, or NUMERIC | Depends on scale and range — see section below |
| NUMBER(p, 0) | NUMERIC(p) | SMALLINT / INTEGER / BIGINT | Use integer types for better performance |
| NUMBER(p, s) | NUMERIC(p, s) | NUMERIC(p, s) | Exact match |
| VARCHAR2(n) | VARCHAR(n) | VARCHAR(n) or TEXT | n is characters in PG; may need adjustment for byte-semantics columns |
| NVARCHAR2(n) | VARCHAR(n) | VARCHAR(n) | In a UTF-8 PG database, no separate national character type is needed |
| CHAR(n) | CHAR(n) | CHAR(n) | Direct match |
| DATE | TIMESTAMP | TIMESTAMP | Oracle DATE includes time — never map to PG DATE |
| TIMESTAMP | TIMESTAMP | TIMESTAMP | Direct match |
| TIMESTAMP WITH TIME ZONE | TIMESTAMPTZ | TIMESTAMPTZ | Storage semantics differ — see section below |
| TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMPTZ | TIMESTAMPTZ | Same caveat |
| INTERVAL YEAR TO MONTH | INTERVAL | INTERVAL | PG INTERVAL is a single flexible type |
| INTERVAL DAY TO SECOND | INTERVAL | INTERVAL | Test date arithmetic across month/day boundaries |
| CLOB | TEXT | TEXT | PG TEXT is unlimited in practice (up to ~1 GB via TOAST) |
| NCLOB | TEXT | TEXT | Same as CLOB |
| BLOB | BYTEA | BYTEA or lo | lo for large files or streaming use cases |
| RAW(n) | BYTEA | BYTEA | Direct replacement |
| LONG RAW | BYTEA | BYTEA | Deprecated in Oracle; migrate promptly |
| LONG | TEXT | TEXT | Deprecated in Oracle |
| XMLTYPE | XML | XML | PG XML querying uses XPATH() and XMLTABLE() |
| BINARY_FLOAT | REAL | REAL | 4-byte IEEE 754 — direct match |
| BINARY_DOUBLE | DOUBLE PRECISION | DOUBLE PRECISION | 8-byte IEEE 754 — direct match |
| BOOLEAN (Oracle 23c+) | BOOLEAN | BOOLEAN | Pre-23c: NUMBER(1) or CHAR(1) — manual conversion required |
| ROWID / UROWID | No equivalent | Redesign with primary key | PG ctid is not stable across VACUUM or clustering |
| SDO_GEOMETRY | No built-in | PostGIS GEOMETRY | Requires PostGIS extension |
Why Does Oracle DATE Break Everything in PostgreSQL?
Oracle DATE stores both the date and the time of day. PostgreSQL DATE stores only the date.
Mapping Oracle DATE to PostgreSQL DATE silently discards the time component of every value in that column — no error, no warning, row counts still match.
Always map Oracle DATE to PostgreSQL TIMESTAMP.
This is the most common silent data corruption in Oracle to PostgreSQL migrations.
The import succeeds.
Row counts match.
The data is wrong.
The correct mapping is always Oracle DATE to PostgreSQL TIMESTAMP.
According to the AWS Oracle to Aurora PostgreSQL Migration Playbook, this is a required mapping, not optional.
ora2pg gets this right by default.
Two follow-on issues to check after remapping:
TRUNC(date) in SQL and PL/SQL.
Oracle developers frequently use TRUNC(sysdate) to strip the time component and get midnight.
In PostgreSQL the equivalent is DATE_TRUNC('day', now()).
Any queries or stored procedures using Oracle’s TRUNC on a date column need to be updated.
Application-layer date comparisons.
Code that compares dates with = TRUNC(sysdate) (checking for today’s records) behaves differently once the column is TIMESTAMP.
Review all date comparison logic in application code, not just in the database.
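As a sketch of both rewrites — the orders table and order_date column here are illustrative, not from any real schema:

```sql
-- Oracle (before): strip the time to match today's rows
--   SELECT * FROM orders WHERE order_date = TRUNC(SYSDATE);

-- PostgreSQL (after): order_date is now TIMESTAMP
SELECT * FROM orders
WHERE date_trunc('day', order_date) = date_trunc('day', now());

-- Often preferable: a half-open range, which can use a plain index
-- on order_date instead of an expression index
SELECT * FROM orders
WHERE order_date >= current_date
  AND order_date < current_date + interval '1 day';
```

The half-open range form is worth preferring wherever the query is hot: it keeps the column bare on the left side of the comparison.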
How Should You Map Oracle NUMBER for Performance?
Never accept NUMERIC for all NUMBER columns.
NUMERIC uses arbitrary-precision arithmetic — correct but slow for integer workloads.
Analyse each column’s actual range and scale: use INTEGER for values up to 2.1 billion, BIGINT for 64-bit range, NUMERIC(p,s) for fixed decimal precision.
Override ora2pg defaults with the MODIFY_TYPE directive.
The right approach is to analyse your actual data first:
- NUMBER(p, 0) where values fit in 32 bits: use INTEGER (up to 2.1 billion)
- NUMBER(p, 0) where values fit in 64 bits: use BIGINT (up to 9.2 × 10¹⁸)
- NUMBER(p, s) with fixed decimal precision: use NUMERIC(p, s)
- NUMBER with no precision (floating calculations): consider DOUBLE PRECISION for speed, NUMERIC for exactness
AWS published a two-part analysis on how to assess Oracle NUMBER columns and assign the right PostgreSQL type.
The process involves querying Oracle to find the actual min, max, and scale distribution for each column before deciding.
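A profiling query along these lines, run on the Oracle side, produces the facts needed for the decision (the table and column names are illustrative):

```sql
-- Does ORDERS.TOTAL_AMOUNT actually need NUMERIC, or does an integer
-- type fit? Zero non-integer rows plus a range within +/-2,147,483,647
-- means INTEGER is safe; within the 64-bit range, BIGINT.
SELECT MIN(total_amount) AS min_val,
       MAX(total_amount) AS max_val,
       COUNT(CASE WHEN total_amount <> TRUNC(total_amount)
                  THEN 1 END) AS non_integer_rows
FROM orders;
```

Run one such query per candidate column, then record the chosen PostgreSQL type before touching the ora2pg configuration.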
In ora2pg, you override the default with the MODIFY_TYPE directive in your configuration file (the format is TABLE:COLUMN:type, and commas inside a type are backslash-escaped because the value list is itself comma-separated):

```
MODIFY_TYPE ORDERS:TOTAL_AMOUNT:numeric(15\,2),ORDERS:STATUS_CODE:integer
```

This is column-level.
You set it per table after analysing each column’s real data profile.
Skipping this step and accepting NUMERIC everywhere is one of the most common causes of poor PostgreSQL performance immediately after migration.
What Happens to Oracle CLOB and BLOB?
Oracle CLOB maps to PostgreSQL TEXT, which handles values up to approximately 1 GB via TOAST storage — no application changes needed for the data itself.
Oracle BLOB maps to BYTEA for most use cases, or the lo type for streaming.
The migration work is rewriting DBMS_LOB package calls; there is no PostgreSQL equivalent.
The exception is application code that uses Oracle’s DBMS_LOB package: DBMS_LOB.READ, DBMS_LOB.WRITE, DBMS_LOB.GETLENGTH, and so on.
These are Oracle-specific LOB APIs.
They must be rewritten to use standard SQL string functions in PostgreSQL.
There is no DBMS_LOB equivalent in PostgreSQL.
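As a sketch of the rewrite (the documents table and doc_body column are illustrative), the common DBMS_LOB calls translate to ordinary string functions once the column is TEXT:

```sql
-- Oracle (before):
--   len := DBMS_LOB.GETLENGTH(doc_body);
--   DBMS_LOB.READ(doc_body, amt, off, buf);

-- PostgreSQL (after): doc_body is TEXT
SELECT length(doc_body)                     AS char_count,  -- replaces GETLENGTH
       octet_length(doc_body)               AS byte_count,
       substring(doc_body FROM 101 FOR 50)  AS chunk        -- replaces READ:
FROM documents;                                             -- 50 chars from position 101
```

Note that substring positions are 1-based in PostgreSQL, as DBMS_LOB offsets are in Oracle, so offset arithmetic usually carries over unchanged.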
BLOB is more nuanced.
PostgreSQL has two options:
BYTEA stores binary data inline in the row, up to approximately 1 GB.
It’s simpler and works well for images, documents, and small-to-medium binary content.
Large Objects (lo type) store data in the pg_largeobject system catalog and support streaming access via lo_read and lo_write.
This is closer to Oracle BLOB semantics for applications that stream binary content in chunks.
Large Objects require explicit lifecycle management: you must call lo_unlink() to delete them, or use the lo extension to handle this automatically via triggers.
For most migrations, BYTEA is the right choice unless you have a specific streaming requirement.
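The lifecycle difference is easiest to see side by side. This sketch uses hypothetical table names; note that the server-side lo_import() reads a file on the database server and is restricted by default, so client-side loading (for example psql's \lo_import) is more common in practice:

```sql
-- BYTEA: stored with the row (TOASTed when large), deleted with the row
CREATE TABLE attachments (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body bytea
);

-- Large Object: the column holds only an OID pointing into pg_largeobject
CREATE TABLE streamed_files (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body oid
);
INSERT INTO streamed_files (body) VALUES (lo_import('/srv/files/report.pdf'));

-- Deleting the row does NOT delete the object; unlink it explicitly
-- (or install the lo extension, which handles this via a trigger)
SELECT lo_unlink(body) FROM streamed_files WHERE id = 1;
DELETE FROM streamed_files WHERE id = 1;
```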
Five Type Traps That Catch Teams Off Guard
These don’t always show up in automated schema conversion reports.
They break things at runtime, often in edge cases that aren’t covered by basic testing.
1. Empty string equals NULL in Oracle.
Oracle VARCHAR2 treats an empty string ('') as NULL.
PostgreSQL treats them as different values.
Any SQL or application logic that relies on column IS NULL to catch both NULL and empty strings will break silently.
Any NOT NULL constraint on a VARCHAR2 column in Oracle does not prevent empty strings.
In PostgreSQL, it does prevent empty strings.
Audit every NOT NULL constraint and every IS NULL / IS NOT NULL check in your codebase before cutover.
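A sketch of the behavioural difference and one way to neutralise it — the customers table and middle_name column are illustrative:

```sql
-- Oracle (before): '' is stored as NULL, so IS NULL catches both
--   SELECT count(*) FROM customers WHERE middle_name IS NULL;

-- PostgreSQL (after): '' and NULL are distinct; catch both explicitly
SELECT count(*) FROM customers
WHERE middle_name IS NULL OR middle_name = '';

-- Or normalise once after data load so existing IS NULL checks keep working
UPDATE customers
SET middle_name = NULLIF(middle_name, '')
WHERE middle_name = '';
```

Normalising at load time preserves Oracle-era semantics for that column, but decide deliberately: some applications genuinely need '' and NULL to differ.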
2. VARCHAR2 byte vs character semantics.
VARCHAR2(20 BYTE) and VARCHAR2(20 CHAR) are different in Oracle.
PostgreSQL VARCHAR(n) always means characters.
A column defined as VARCHAR2(20 BYTE) in a single-byte Oracle database may hold up to 20 characters.
The same column migrated with AL32UTF8 encoding could hold fewer characters if multibyte characters are present.
When migrating from Oracle databases using WE8MSWIN1252 or similar single-byte character sets, verify column lengths explicitly after conversion.
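Oracle's data dictionary records which semantics each column uses, so the affected columns can be listed up front:

```sql
-- Oracle side: VARCHAR2 columns declared with BYTE semantics
-- (CHAR_USED is 'B' for byte semantics, 'C' for character semantics)
SELECT table_name, column_name, data_length
FROM user_tab_columns
WHERE data_type = 'VARCHAR2'
  AND char_used = 'B'
ORDER BY table_name, column_name;
```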
3. BOOLEAN before Oracle 23c.
Oracle had no native BOOLEAN type in SQL before version 23c (released in 2023).
Common workarounds were NUMBER(1) with a check constraint (0 for false, 1 for true) or CHAR(1) (‘Y’/‘N’).
Neither is automatically detected as a boolean candidate by ora2pg.
You need to identify these columns manually and configure MODIFY_TYPE to map them to PostgreSQL BOOLEAN, along with updating any application logic that inserts 0/1 or ‘Y’/‘N’ values.
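If a column has already landed in PostgreSQL as char(1), it can also be converted in place. A sketch, assuming a hypothetical orders.is_active column holding 'Y'/'N':

```sql
-- Rewrite every existing value while changing the column type
ALTER TABLE orders
  ALTER COLUMN is_active TYPE boolean
  USING CASE is_active
          WHEN 'Y' THEN true
          WHEN 'N' THEN false
          ELSE NULL
        END;
```

The USING clause is what makes the conversion explicit; without it, PostgreSQL has no implicit cast from char(1) to boolean for these encodings.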
4. ROWID and UROWID.
Oracle applications sometimes use ROWID for fast row re-fetch: store the ROWID of a row, then use it to retrieve that row quickly later.
PostgreSQL’s physical row identifier (ctid) is not stable.
It changes when a row is updated, when VACUUM FULL runs, or when a table is clustered.
Any application logic that stores and reuses ROWIDs must be redesigned with a proper primary key.
5. TIMESTAMP WITH TIME ZONE storage semantics.
Oracle TIMESTAMP WITH TIME ZONE stores the original offset (for example, 2024-03-15 14:30:00 +02:00).
PostgreSQL TIMESTAMPTZ always converts to UTC internally and applies the session timezone on display.
The value appears identical in output but the internal representation differs.
This matters when your application reads timezone offsets from the database directly, or when you’re comparing values across sessions with different timezone settings.
Test timezone-sensitive queries explicitly.
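A quick way to see the semantics: store one instant, then read it back under different session timezones. The displayed offset tracks the session, not the original literal:

```sql
CREATE TEMP TABLE tz_demo (ts timestamptz);
INSERT INTO tz_demo VALUES ('2024-03-15 14:30:00+02');

SET timezone = 'UTC';
SELECT ts FROM tz_demo;   -- 2024-03-15 12:30:00+00

SET timezone = 'Europe/Paris';
SELECT ts FROM tz_demo;   -- 2024-03-15 13:30:00+01
```

Both reads show the same UTC instant; the original +02:00 offset is not recoverable from the stored value. If your application needs the original offset, store it in a separate column.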
Not sure which of these traps apply to your schema?
I offer a fixed-fee migration assessment that reviews your data types, PL/SQL volume, and application SQL dependencies — and delivers a written risk register before any migration work starts.
See what the assessment covers
What Does ora2pg Handle Automatically?
ora2pg correctly maps DATE to TIMESTAMP, CLOB and NCLOB to TEXT, BLOB and RAW to BYTEA, BINARY_FLOAT to REAL, BINARY_DOUBLE to DOUBLE PRECISION, and XMLTYPE to XML.
What it does not handle: NUMBER optimisation to integer types, boolean column detection, VARCHAR2 encoding differences, and DBMS_LOB rewrites.
These require manual MODIFY_TYPE configuration.
What ora2pg does not handle automatically:
- NUMBER to integer types. Every NUMBER column defaults to NUMERIC. You must use the MODIFY_TYPE directive to override at the column level.
- Boolean detection. NUMBER(1) and CHAR(1) columns used as booleans are not auto-detected. Manual configuration required.
- VARCHAR2 byte vs character semantics. ora2pg does not adjust column lengths for encoding differences. Manual review required for non-UTF8 source databases.
- ROWID columns. ora2pg flags these but does not replace them. Manual redesign required.
- DBMS_LOB calls in PL/SQL. ora2pg converts PL/SQL structure but does not rewrite LOB API calls. These require manual replacement.
The official ora2pg documentation covers the full configuration reference.
The DATA_TYPE directive allows global type remapping.
MODIFY_TYPE allows column-level overrides.
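A minimal configuration fragment combining the two directives (syntax per the ora2pg documentation; the table and column names are illustrative, and commas inside a type are backslash-escaped because the value list is itself comma-separated):

```
# Global: remap an Oracle type everywhere it appears
DATA_TYPE      DATE:timestamp,CLOB:text,BLOB:bytea

# Per column: TABLE:COLUMN:postgresql_type
MODIFY_TYPE    ORDERS:TOTAL_AMOUNT:numeric(15\,2),ORDERS:STATUS_CODE:integer
```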
The Data Type Decision Checklist
Before running your schema conversion, work through this list:
- Identify every Oracle DATE column and confirm it maps to TIMESTAMP
- Query each NUMBER column to determine actual value range and scale
- Set MODIFY_TYPE overrides for integer-range NUMBER(p,0) columns
- Identify NUMBER(1) and CHAR(1) boolean columns and configure MODIFY_TYPE
- Check the Oracle source character set for VARCHAR2 byte vs character semantics
- Audit every IS NULL check in application code for empty string handling
- List every ROWID usage in application code and design primary key replacements
- Identify DBMS_LOB calls in PL/SQL and plan rewrites
- Test TIMESTAMP WITH TIME ZONE queries across different session timezone settings
Getting the type mapping right before you start saves significant rework after data migration.
A migration assessment catches most of these before the project begins.
Frequently asked questions
Does ora2pg handle all Oracle data type conversions automatically?
ora2pg handles most common conversions correctly, including DATE to TIMESTAMP, CLOB to TEXT, and BLOB to BYTEA.
It does not automatically optimise NUMBER to integer types, detect boolean columns from NUMBER(1) or CHAR(1) workarounds, or adjust VARCHAR2 lengths for byte vs character semantics.
These require manual configuration using the MODIFY_TYPE directive in ora2pg.conf.
What is the PostgreSQL equivalent of Oracle DATE?
The correct PostgreSQL equivalent of Oracle DATE is TIMESTAMP, not DATE.
Oracle DATE stores both date and time (year, month, day, hour, minute, second).
PostgreSQL DATE stores only the date.
Mapping Oracle DATE to PostgreSQL DATE silently discards the time component of every value.
This is one of the most common silent data corruption issues in Oracle to PostgreSQL migrations.
How do I convert Oracle NUMBER columns efficiently in PostgreSQL?
Analyse the actual data range and scale for each NUMBER column before mapping.
For columns with scale 0 (integers) that fit within 32 bits, use INTEGER.
For columns within 64-bit range, use BIGINT.
For fixed decimal precision, use NUMERIC(p,s).
Reserve NUMERIC without precision for columns that genuinely require arbitrary precision.
Use ora2pg’s MODIFY_TYPE directive to configure column-level overrides.
Defaulting everything to NUMERIC is correct but causes measurable performance degradation on integer workloads.
What happens to Oracle BOOLEAN in PostgreSQL?
Oracle had no native SQL BOOLEAN type before version 23c.
Pre-23c Oracle schemas typically used NUMBER(1) (0/1) or CHAR(1) (‘Y’/‘N’) as boolean workarounds.
These are not automatically converted to PostgreSQL BOOLEAN by migration tools.
You need to identify these columns manually, configure MODIFY_TYPE in ora2pg to map them to BOOLEAN, and update application code that inserts or reads 0/1 or ‘Y’/‘N’ values.
Oracle 23c databases with native BOOLEAN columns map directly to PostgreSQL BOOLEAN.
Can PostgreSQL handle Oracle CLOB data?
Yes.
PostgreSQL TEXT is the correct mapping for Oracle CLOB.
PostgreSQL TEXT has no hard size limit; values up to approximately 1 GB are handled automatically via TOAST storage.
The data migration itself is straightforward.
The work is in rewriting any application code that uses Oracle’s DBMS_LOB package (DBMS_LOB.READ, DBMS_LOB.WRITE, DBMS_LOB.GETLENGTH, etc.), as these Oracle-specific LOB APIs have no direct PostgreSQL equivalent and must be replaced with standard SQL string functions.
In summary
The Oracle to PostgreSQL data type mapping is not a one-to-one exercise.
Most types map cleanly.
A handful — DATE, NUMBER, empty strings, BOOLEAN workarounds, and ROWID — require deliberate decisions and manual configuration.
The two rules that prevent most post-migration data problems:
- Never map Oracle DATE to PostgreSQL DATE. Always use TIMESTAMP.
- Never accept NUMERIC for all NUMBER columns. Analyse ranges and use integer types where appropriate.
If you’re planning an Oracle to PostgreSQL migration and want to get the data type strategy right from the start, get in touch →
