Automatic DDL Replication with xCluster

One of the exciting new features in YugabyteDB 2025.1 is support for automatic DDL replication in xCluster.

In previous releases, xCluster replicated only data changes (DML). If you wanted schema changes (DDL) to take effect across clusters, you had to manually apply them on both source and target clusters. With this new feature, DDLs issued on the source are automatically replicated to the target, keeping your schema in sync with no extra steps.

We’ll use yugabyted to demo this feature. Its built-in xCluster commands make it simple to set up or tear down replication between two universes!

Enabling Automatic DDL Replication

This is a preview flag in 2025.1. To enable it, start each cluster with the following master flags:

./yugabyted start \
  --base_dir=~/yb01 \
  --advertise_address=127.0.0.1 \
  --master_flags="allowed_preview_flags_csv={xcluster_enable_ddl_replication},xcluster_enable_ddl_replication=true"

./yugabyted start \
  --base_dir=~/yb02 \
  --advertise_address=127.0.0.2 \
  --master_flags="allowed_preview_flags_csv={xcluster_enable_ddl_replication},xcluster_enable_ddl_replication=true"
Setting Up xCluster in Automatic Mode

Create the checkpoint and perform bootstrap if needed:

./yugabyted xcluster create_checkpoint \
  --base_dir=~/yb01 \
  --replication_id test \
  --databases yugabyte \
  --automatic_mode

Follow the backup/restore instructions if bootstrap is required, then complete the setup:

./yugabyted xcluster set_up \
  --base_dir=~/yb01 \
  --target_address 127.0.0.2 \
  --replication_id test \
  --bootstrap_done
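Once setup completes, it's worth confirming that the replication stream is healthy before moving on. A quick check from the source universe (the `status` subcommand is assumed here; consult `./yugabyted xcluster --help` on your build):

```shell
# Check xCluster replication health from the source universe.
# Requires the clusters started above; subcommand availability may vary by release.
./yugabyted xcluster status \
  --base_dir=~/yb01 \
  --replication_id test
```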
Example: Partitioned Tables with Foreign Keys

For our test case, we’ll use the schema from the YugabyteDB Tip: Foreign Key References on Partitioned Tables

~/fk_test.sql:

-- 1. Create a partitioned parent table
CREATE TABLE customer_metrics (
  customer_id INT,
  metric_date DATE,
  metric_value NUMERIC,
  PRIMARY KEY (customer_id, metric_date)
)
PARTITION BY RANGE (metric_date);

CREATE TABLE customer_metrics_2025_01 PARTITION OF customer_metrics
  FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

CREATE TABLE customer_metrics_2025_02 PARTITION OF customer_metrics
  FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');

-- 2. Create a child table referencing the partitioned parent
CREATE TABLE customer_reports (
  report_id SERIAL PRIMARY KEY,
  customer_id INT,
  metric_date DATE,
  report_text TEXT,
  FOREIGN KEY (customer_id, metric_date)
    REFERENCES customer_metrics (customer_id, metric_date)
);

Run the script on the source cluster:

ysqlsh -h 127.0.0.1 -f ~/fk_test.sql
Verifying on Source and Target

On the source (127.0.0.1):

[root@localhost bin]# ysqlsh -h 127.0.0.1 -c "\d"
                           List of relations
 Schema |              Name              |       Type        |  Owner
--------+--------------------------------+-------------------+----------
 public | customer_metrics               | partitioned table | yugabyte
 public | customer_metrics_2025_01       | table             | yugabyte
 public | customer_metrics_2025_02       | table             | yugabyte
 public | customer_reports               | table             | yugabyte
 public | customer_reports_report_id_seq | sequence          | yugabyte
(5 rows)

On the target (127.0.0.2), the exact same objects exist, even though we never applied the SQL there directly!

[root@localhost bin]# ysqlsh -h 127.0.0.2 -c "\d"
                           List of relations
 Schema |              Name              |       Type        |  Owner
--------+--------------------------------+-------------------+----------
 public | customer_metrics               | partitioned table | yugabyte
 public | customer_metrics_2025_01       | table             | yugabyte
 public | customer_metrics_2025_02       | table             | yugabyte
 public | customer_reports               | table             | yugabyte
 public | customer_reports_report_id_seq | sequence          | yugabyte
(5 rows)
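Schema is only half the story; DML flows too. As a quick sanity check (table names come from the example schema above; the short sleep is an arbitrary allowance for asynchronous replication lag):

```shell
# Write a row on the source...
ysqlsh -h 127.0.0.1 -c "INSERT INTO customer_metrics VALUES (1, '2025-01-15', 42.5);"

# ...give async replication a moment, then read it back on the target
sleep 5
ysqlsh -h 127.0.0.2 -c "SELECT * FROM customer_metrics WHERE customer_id = 1;"
```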
Why This Matters

With automatic DDL replication, you can now:

  • Keep schemas consistent across data centers automatically.

  • Eliminate the need for manual schema synchronization steps.

  • Safely use advanced features like foreign key references on partitioned tables across replicated environments.
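For instance, DDLs keep flowing after the initial setup. Adding a March partition on the source (a hypothetical follow-up to the example schema) should surface on the target without any manual step:

```shell
# Issue a new DDL on the source...
ysqlsh -h 127.0.0.1 -c "CREATE TABLE customer_metrics_2025_03 PARTITION OF customer_metrics
  FOR VALUES FROM ('2025-03-01') TO ('2025-04-01');"

# ...then confirm the new partition exists on the target
ysqlsh -h 127.0.0.2 -c "\d customer_metrics_2025_03"
```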

Limitations & Caveats

It’s important to know that automatic DDL replication doesn’t (yet) handle all possible schema changes. Below are a few key limitations, distilled from the official YugabyteDB docs:

  • Only the YSQL (PostgreSQL-compatible) API is supported; it does not apply to YCQL.

  • Not every type of DDL is supported yet; certain schema changes (e.g., altering column types or advanced object types) may fail or require manual intervention.

  • The target cluster is read-only; writes must happen only on the primary universe.

  • Because replication is asynchronous, reads on the target occur at a “safe time” (slightly in the past), so there may be visible lag for the most recent changes.

  • In the event of source failure, “torn transactions” (partial writes applied at the target) can occur and may require manual reconciliation.

For a comprehensive and up-to-date list of supported/unsupported DDLs, caveats, and behavioral notes, see the official xCluster Limitations documentation: docs.yugabyte.com — Async Replication / Transactional Automatic Mode
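The read-only restriction is easy to see for yourself: a write attempted against the target should be rejected while replication is active (the exact error text varies by release):

```shell
# Expected to fail: the target universe rejects writes during active replication
ysqlsh -h 127.0.0.2 -c "INSERT INTO customer_metrics VALUES (2, '2025-01-20', 10);"
```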

Summary

Automatic DDL replication in xCluster (2025.1 preview) is a major enhancement: schema changes (DDL) now flow from primary to standby without manual intervention. The example with partitioned tables + foreign keys shows that even nontrivial constructs are supported.

However, it’s still early: not every DDL is supported yet, and the target cluster remains read-only with replication lag considerations. Always test your intended schema patterns to ensure compatibility.

Have Fun!

Dragon Point, at the Black Canyon of the Gunnison National Park, lives up to its name: steep cliffs, epic views, and a spot my dragon-loving wife instantly claimed as her favorite.