
PolarDB:wal2json (decoding to JSON)

Last Updated: Jan 31, 2026

PolarDB for PostgreSQL provides the wal2json plugin, which outputs logical log files in JSON format.

Applicability

The supported minor engine versions of PolarDB for PostgreSQL are as follows:

  • PostgreSQL 18 (minor engine version 2.0.18.1.1.0 or later)

  • PostgreSQL 17 (minor engine version 2.0.17.7.5.0 or later)

  • PostgreSQL 16 (minor engine version 2.0.16.6.2.0 or later)

  • PostgreSQL 15 (minor engine version 2.0.15.12.4.0 or later)

  • PostgreSQL 14 (minor engine version 2.0.14.5.1.0 or later)

  • PostgreSQL 11 (minor engine version 2.0.11.9.29.0 or later)

Note

You can view the minor engine version in the console or by running the SHOW polardb_version; statement. If your version is not supported, you must upgrade the minor engine version.
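
For example, connect to the cluster and run the statement directly:

    SHOW polardb_version;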

Background information

wal2json is a logical decoding plugin with the following features:

  • It decodes tuples generated by INSERT and UPDATE operations.

  • It can retrieve previous row versions from UPDATE and DELETE operations, based on the configured replica identity.

  • It can consume changes using either the streaming protocol (logical replication slots) or a special SQL API.

The wal2json plugin generates one JSON object for each transaction. This JSON object includes all new and old tuples. You can use options to include properties such as the transaction timestamp, schema, data types, and transaction ID. For more information, see Retrieve JSON objects using SQL.
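
For example, options are passed as name/value pairs when you read changes from a slot. The following sketch assumes a slot named demo_slot that was already created with the wal2json plugin:

    -- Include the transaction ID and commit timestamp in each changeset
    SELECT data FROM pg_logical_slot_peek_changes(
        'demo_slot', NULL, NULL,
        'include-xids', '1',
        'include-timestamp', '1');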

Notes

  • Because PolarDB for PostgreSQL creates tables with REPLICA IDENTITY FULL by default, update and delete operations log the full row data, not just the data from changed columns. To log only data from changed columns, disable the polar_create_table_with_full_replica_identity parameter. This parameter cannot be modified in the console; contact us for assistance.

  • The wal2json plugin requires logical decoding. To enable this feature, set the wal_level parameter to logical.

    Note

    You can set the wal_level parameter in the console. For more information, see Set cluster parameters. Changing this parameter requires a cluster restart, so plan your workload accordingly. The queries after this list show how to verify both settings.
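
The following sketch is illustrative; my_table is a placeholder for one of your tables.

    -- Confirm that logical decoding is enabled
    SHOW wal_level;

    -- Check the replica identity of a table:
    -- f = full row, d = default (primary key), i = index, n = nothing
    SELECT relname, relreplident FROM pg_class WHERE relname = 'my_table';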

Retrieve JSON objects using SQL

The wal2json plugin is not created with CREATE EXTENSION. Instead, it is loaded automatically when you create a logical replication slot that specifies it as the output plugin.

  1. Create a logical replication slot that uses the wal2json plugin, generate some changes in a transaction, and then retrieve the resulting JSON objects from the WAL.

    -- Create tables with and without a primary key
    CREATE TABLE table2_with_pk (a SERIAL, b VARCHAR(30), c TIMESTAMP NOT NULL, PRIMARY KEY(a, c));
    CREATE TABLE table2_without_pk (a SERIAL, b NUMERIC(5,2), c TEXT);
    
    -- Create a logical replication slot of type wal2json
    SELECT 'init' FROM pg_create_logical_replication_slot('test_slot', 'wal2json');
    
    -- Run and commit a transaction so that its changes are written to WAL
    BEGIN;
    INSERT INTO table2_with_pk (b, c) VALUES('Backup and Restore', now());
    INSERT INTO table2_with_pk (b, c) VALUES('Tuning', now());
    INSERT INTO table2_with_pk (b, c) VALUES('Replication', now());
    DELETE FROM table2_with_pk WHERE a < 3;
    
    INSERT INTO table2_without_pk (b, c) VALUES(2.34, 'Tapir');
    UPDATE table2_without_pk SET c = 'Anta' WHERE c = 'Tapir';
    COMMIT;
    
    -- Retrieve JSON objects from WAL
    SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'pretty-print', '1');

    The output is as follows:

    {
        "change": [
            {
                "kind": "insert",
                "schema": "public",
                "table": "table2_with_pk",
                "columnnames": ["a", "b", "c"],
                "columntypes": ["integer", "character varying(30)", "timestamp without time zone"],
                "columnvalues": [1, "Backup and Restore", "2018-03-27 12:05:29.914496"]
            }
            ,{
                "kind": "insert",
                "schema": "public",
                "table": "table2_with_pk",
                "columnnames": ["a", "b", "c"],
                "columntypes": ["integer", "character varying(30)", "timestamp without time zone"],
                "columnvalues": [2, "Tuning", "2018-03-27 12:05:29.914496"]
            }
            ,{
                "kind": "insert",
                "schema": "public",
                "table": "table2_with_pk",
                "columnnames": ["a", "b", "c"],
                "columntypes": ["integer", "character varying(30)", "timestamp without time zone"],
                "columnvalues": [3, "Replication", "2018-03-27 12:05:29.914496"]
            }
            ,{
                "kind": "delete",
                "schema": "public",
                "table": "table2_with_pk",
                "oldkeys": {
                    "keynames": ["a", "c"],
                    "keytypes": ["integer", "timestamp without time zone"],
                    "keyvalues": [1, "2018-03-27 12:05:29.914496"]
                }
            }
            ,{
                "kind": "delete",
                "schema": "public",
                "table": "table2_with_pk",
                "oldkeys": {
                    "keynames": ["a", "c"],
                    "keytypes": ["integer", "timestamp without time zone"],
                    "keyvalues": [2, "2018-03-27 12:05:29.914496"]
                }
            }
            ,{
                "kind": "insert",
                "schema": "public",
                "table": "table2_without_pk",
                "columnnames": ["a", "b", "c"],
                "columntypes": ["integer", "numeric(5,2)", "text"],
                "columnvalues": [1, 2.34, "Tapir"]
            }
        ]
    }
  2. Run the following command to delete the test_slot replication slot. The command returns the string 'stop'.

    SELECT 'stop' FROM pg_drop_replication_slot('test_slot');
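
Note that pg_logical_slot_get_changes consumes the changes that it returns. To inspect pending changes without consuming them, for example while debugging, you can call pg_logical_slot_peek_changes with the same arguments before you drop the slot. A minimal sketch that reuses the test_slot slot from step 1:

    SELECT data FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL, 'pretty-print', '1');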

Parameter descriptions

The following list describes the wal2json output elements and parameters.

change

A WAL entry for each DML operation, such as INSERT, UPDATE, DELETE, or TRUNCATE.

changeset

A collection of multiple change entries.

include-xids

Controls whether to add xid to each changeset. Default: false.

  • true: Add xid to each changeset.

  • false (default): Do not add xid to each changeset.

include-timestamp

Controls whether to add a timestamp to each changeset. Default: false.

  • true: Add a timestamp to each changeset.

  • false (default): Do not add a timestamp to each changeset.

include-schemas

Controls whether to add schema names to each change. Default: true.

  • true (default): Add schema names to each change.

  • false: Do not add schema names to each change.

include-types

Controls whether to add data types to each change. Default: true.

  • true (default): Add data types to each change.

  • false: Do not add data types to each change.

include-typmod

Adds modifiers to types that have them (for example, varchar(20) instead of varchar). Default: true.

  • true (default): Add modifiers to types that have them.

  • false: Do not add modifiers to types that have them.

include-type-oids

Controls whether to add type OIDs. Default: false.

  • true: Add type OIDs.

  • false (default): Do not add type OIDs.

include-not-null

Controls whether to add not null information as columnoptionals. Default: false.

  • true: Add not null information as columnoptionals.

  • false (default): Do not add not null information as columnoptionals.

pretty-print

Controls whether to add whitespace and indentation to format the JSON structure. Default: false.

  • true: Add whitespace and indentation to format the JSON structure.

  • false (default): Do not add whitespace and indentation. No formatting is applied.

write-in-chunks

Controls whether to write after each change instead of after each changeset. Default: false.

  • true: Write after each change, not after each changeset.

  • false (default): Write after each changeset, not after each change.

include-lsn

Controls whether to add nextlsn to each changeset. Default: false.

  • true: Add nextlsn to each changeset.

  • false (default): Do not add nextlsn to each changeset.

filter-tables

Excludes specified tables. Default: empty (no tables filtered).

Note
  • Separate multiple tables with commas. Specify the schema for each table.

  • *.foo matches table foo in all schemas. bar.* matches all tables in schema bar.

  • Escape special characters (space, single quote, comma, period, asterisk) with a backslash (\).

  • Schemas and table names are case-sensitive.

  • For example, the table Foo bar in the public schema is written as public.Foo\ bar (the backslash escapes the space).

add-tables

Specifies the tables to decode. By default, all tables in all schemas are decoded. The syntax is the same as for filter-tables. A usage sketch follows this parameter list.

filter-msg-prefixes

Excludes logical decoding messages (written with the pg_logical_emit_message() function) whose prefixes match the specified values. Default: empty (no messages are filtered). Separate multiple prefixes with commas.

add-msg-prefixes

Includes only messages whose prefixes match the specified values. Default: all prefixes are included. Separate multiple prefixes with commas. filter-msg-prefixes is applied before this parameter.

format-version

Defines the output format version. Default: 1.

  • 1: Use output format version 1.

  • 2: Use output format version 2.

actions

Defines which operations to output. Default: all (INSERT, UPDATE, DELETE, and TRUNCATE). If you use format-version 1, TRUNCATE is not enabled.
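
As an illustration of the table-filtering and format options above, the following sketch is hypothetical: the slot demo_slot and the table public.orders are placeholders rather than objects created elsewhere in this topic.

    -- Exclude changes to public.orders and emit output format version 2
    SELECT data FROM pg_logical_slot_peek_changes(
        'demo_slot', NULL, NULL,
        'format-version', '2',
        'filter-tables', 'public.orders');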

Example

The following example shows how to use the include-xids parameter.

  1. Create a table and a logical replication slot, and then insert a row.

    DROP TABLE IF EXISTS tbl;
    CREATE TABLE tbl (id int);
    SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'wal2json');
    INSERT INTO tbl VALUES (1);
  2. Pass the parameter name and value to the function.

    -- Both expressions should return true: the transaction produced one
    -- changeset, and it carries a single distinct xid value
    SELECT
        count(*) = 1,
        count(DISTINCT ((data::json)->'xid')::text) = 1
    FROM pg_logical_slot_get_changes(
        'regression_slot', NULL, NULL,
        'format-version', '1',
        'include-xids', '1');
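
  3. When you finish testing, drop the regression_slot replication slot so that the cluster no longer retains WAL for it.

    SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');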

Design principles

For more information about the design principles of wal2json, see the official documentation.