Multi-region same-key writes: let the database arbitrate, let Kafka distribute

Introduction

When building multi-region systems, it’s tempting to think:

  • “We can just use a message bus like Kafka to handle concurrent writes … it scales, it’s fast, and we can resolve conflicts later.”

It sounds neat in theory… every region publishes its updates to Kafka, consumers process the messages, and eventually the system “catches up.”

But in practice, this idea falls apart the moment two regions write to the same key at roughly the same time.

Kafka is fantastic at moving events, but it’s not designed to arbitrate truth. It doesn’t know which update came first, can’t enforce constraints like “balance can’t go negative,” and can’t apply read-modify-write logic atomically. You’ll end up building your own version of distributed transaction handling, complete with version tracking, retries, deduplication, and the dreaded “reconciliation job.”

Databases like YugabyteDB, on the other hand, are built for this.

They can handle concurrent writers across regions using per-key Raft consensus, ACID transactions, and standard PostgreSQL isolation levels like READ COMMITTED and SERIALIZABLE. That means:

  • Only one writer “wins” a conflicting update.

  • Business rules (like balance ≥ 0) are enforced atomically.

  • Reads see a consistent snapshot of the world … no ambiguity, no races.

Then, once the database has established the single, durable order of events, you can use Change Data Capture (CDC) to publish the committed truth to Kafka.

Kafka does what it’s great at (distributing events), while YugabyteDB does what it’s great at (deciding which ones are valid).

In this YugabyteDB Tip, we’ll see both sides in action:

  1. How YugabyteDB cleanly handles simultaneous, same-key writes from multiple “regions.”

  2. How the committed truth is streamed to Kafka via CDC for downstream processing.

1️⃣ Setup: table + seed data
				
-- Session A
CREATE TABLE acct (
  id         BIGINT PRIMARY KEY,
  balance    NUMERIC(12,2) NOT NULL CHECK (balance >= 0),
  updated_at TIMESTAMPTZ   NOT NULL DEFAULT now()
);

INSERT INTO acct (id, balance) VALUES (1, 100.00);

SELECT * FROM acct WHERE id = 1;

Expected:

				
 id | balance |         updated_at
----+---------+----------------------------
  1 |  100.00 | 2025-10-08 16:20:00.000-04
2️⃣ Why the “Kafka-only” approach fails

This is what teams sometimes try with a bus:

  1. Consumer reads current balance.

  2. Applies a business rule in code.

  3. Emits an “update” or writes back later.

Two regions do this at nearly the same time and think there’s enough money:

  • Region-East: “Balance is 100, withdraw 70 → looks fine.”

  • Region-West: “Balance is 100, withdraw 60 → looks fine.”

If both updates go through, the account becomes −30! The bus happily carried both events; it didn’t arbitrate the conflict. You now need custom reconciliation logic and backfills.
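The failure mode above can be sketched in a few lines. This is a toy, single-process model (no real Kafka involved): two “regions” each read the balance, apply the check in code, and write back later, with the check and the write separated in time:

```python
# Toy model of the "read, check in code, write back later" pattern.
# Both regions read the balance before either write lands, so both
# checks pass and the account goes negative.

balance = 100

# Step 1: both regions read the current balance.
east_view = balance   # Region-East sees 100
west_view = balance   # Region-West sees 100

# Step 2: each region applies its business rule locally.
east_ok = east_view >= 70   # True
west_ok = west_view >= 60   # True

# Step 3: both "update" events are carried by the bus and applied.
if east_ok:
    balance -= 70
if west_ok:
    balance -= 60

print(balance)  # -30: the bus delivered both events, nobody arbitrated
```

The bug isn’t in the bus: it delivered everything it was given. The bug is that nothing re-checked the rule at the moment each write actually applied.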

3️⃣ The correct pattern: transactional updates in YugabyteDB

Instead, express your business rule as one atomic, read-modify-write statement.

Open two ysqlsh sessions (pretend they’re two regions). Paste the same update quickly in both.

Session A (Region-East)

				
BEGIN ISOLATION LEVEL READ COMMITTED;
UPDATE acct
   SET balance = balance - 70, updated_at = now()
 WHERE id = 1 AND balance >= 70
 RETURNING *;

Session B (Region-West)

				
BEGIN ISOLATION LEVEL READ COMMITTED;
UPDATE acct
   SET balance = balance - 60, updated_at = now()
 WHERE id = 1 AND balance >= 60
 RETURNING *;

What happens (and why this is great):

  • YugabyteDB establishes a single order of writes for the row’s shard (via Raft).

  • One of the updates locks the row, wins, and commits.

  • The other runs second; when it evaluates balance >= 60, the balance is now 30, so the predicate fails and it updates 0 rows … no overdraft, no ambiguity.

You’ll see exactly one session return a row from RETURNING; the other returns no rows. Commit both and verify:

				
-- Either session
COMMIT;
SELECT * FROM acct WHERE id = 1;

Expected final result:

				
 id | balance |         updated_at
----+---------+----------------------------
  1 |   30.00 | 2025-10-08 16:21:23.142-04

Key point: You didn’t write any reconciliation code.

The database arbitrated the conflict using ACID + row locking + predicate checks.
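The predicate-guarded UPDATE behaves like a compare-and-set: the condition is re-evaluated at the moment the write is serialized, not at the moment it was planned. A minimal in-memory sketch (a plain dict standing in for the row that YugabyteDB serializes via Raft):

```python
def withdraw(account: dict, amount: float) -> bool:
    """Model 'SET balance = balance - amount WHERE balance >= amount'.
    Returns True if the update matched a row, False otherwise."""
    if account["balance"] >= amount:
        account["balance"] -= amount
        return True
    return False

acct = {"id": 1, "balance": 100.00}

# The database puts the two conflicting writes into a single order;
# whichever applies second re-checks the predicate against the new balance.
print(withdraw(acct, 70))   # True  -> balance is now 30
print(withdraw(acct, 60))   # False -> 30 < 60, predicate fails, 0 rows
print(acct["balance"])      # 30.0, never negative
```

Because the check and the write are one atomic step, the ordering alone decides the winner; no reconciliation is ever needed.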

4️⃣ Try a stricter isolation level (SERIALIZABLE)

If you prefer belt-and-suspenders, rerun the same two updates under SERIALIZABLE.

YugabyteDB’s MVCC and conflict detection will ensure only one succeeds; the other transaction will fail with a serialization error, forcing the app to retry.

				
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- same UPDATE as above
COMMIT;

Takeaway: You get correctness out of the box … at the isolation level you choose.
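Under SERIALIZABLE the application must be prepared to retry the losing transaction. A sketch of a generic retry loop: the SerializationError class and the transaction function here are placeholders for illustration (with a real driver such as psycopg2 you would catch its serialization-failure error, SQLSTATE 40001):

```python
import time

class SerializationError(Exception):
    """Stand-in for the driver's serialization-failure error (SQLSTATE 40001)."""

def run_with_retry(txn, max_attempts=5, backoff=0.05):
    """Run a transaction function, retrying on serialization conflicts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationError:
            if attempt == max_attempts:
                raise
            time.sleep(backoff * attempt)  # simple linear backoff

# Example: a transaction that conflicts twice before succeeding.
attempts = {"n": 0}

def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationError()
    return "committed"

print(run_with_retry(flaky_txn))  # "committed" on the third attempt
```

The retry loop is boilerplate you write once; the database still does all the arbitration.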

5️⃣ Demonstrating the “lost update” trap

If you split the check from the update (a classic anti-pattern):

				
BEGIN;
SELECT balance FROM acct WHERE id = 1; -- both see 100
UPDATE acct SET balance = balance - 70 WHERE id = 1;
COMMIT;

Both regions might see the same balance and apply their own changes. Unless you add FOR UPDATE or use predicate logic, you’re racing yourself.

YugabyteDB provides the tools to prevent this … Kafka does not.
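In application terms, SELECT ... FOR UPDATE holds a lock across the check and the write, making them one critical section. A toy threading sketch, with a threading.Lock standing in for the row lock:

```python
import threading

balance = 100
row_lock = threading.Lock()  # stands in for the row lock FOR UPDATE takes

def withdraw(amount: int) -> None:
    global balance
    with row_lock:            # check and update are now one critical section
        if balance >= amount:
            balance -= amount

east = threading.Thread(target=withdraw, args=(70,))
west = threading.Thread(target=withdraw, args=(60,))
east.start(); west.start()
east.join(); west.join()

# Whichever thread ran second saw the already-updated balance, so at most
# one withdrawal applied: the final balance is 30 or 40, never -30.
print(balance)
```

Remove the lock and you are back to the lost-update race above, which is exactly what the split check-then-update SQL does.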

6️⃣ Stream the committed truth to Kafka (CDC)

Once the database has decided the winner, publish the resulting row change to Kafka.

Here’s a simple Debezium YugabyteDB (gRPC) connector config:

				
{
  "name": "yb-grpc-acct-cdc",
  "config": {
    "connector.class": "io.debezium.connector.yugabytedb.YugabyteDBgRPCConnector",
    "tasks.max": "1",
    "database.hostname": "yb1",
    "database.port": "5433",
    "database.user": "yugabyte",
    "database.password": "yugabyte",
    "database.dbname": "yugabyte",
    "table.include.list": "public.acct",
    "topic.prefix": "yb",
    "snapshot.mode": "initial",
    "poll.interval.ms": "500"
  }
}

Kafka topic: yb.public.acct

Example event (simplified):

				
{
  "op": "u",
  "before": {"id": 1, "balance": "100.00"},
  "after":  {"id": 1, "balance": "30.00"},
  "source": {"db": "yugabyte", "schema": "public", "table": "acct"}
}

That’s the truth, conflict-free, ready for any downstream consumers or analytics.
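A downstream consumer only needs the “after” image to know the committed state. A minimal sketch of handling the payload above (the Kafka consumer and deserialization wiring are omitted, and this assumes the simplified event shape shown here, not the full Debezium envelope):

```python
import json

# The simplified CDC event from the yb.public.acct topic.
event = json.loads("""
{
  "op": "u",
  "before": {"id": 1, "balance": "100.00"},
  "after":  {"id": 1, "balance": "30.00"},
  "source": {"db": "yugabyte", "schema": "public", "table": "acct"}
}
""")

if event["op"] in ("c", "u"):    # create or update: take the after image
    row = event["after"]
    print(f"acct {row['id']} -> balance {row['balance']}")
elif event["op"] == "d":         # delete: the before image is the last state
    row = event["before"]
```

Note the consumer never decides anything: it simply projects the state the database already committed.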

Closing Thoughts: Kafka is the Messenger, YugabyteDB is the Judge

When data races across regions, someone must decide who wins. Kafka can’t — it’s just the courier. It guarantees message delivery and ordering within partitions, but not across distributed writers. There’s no concept of a constraint, transaction, or isolation level inside the log.

A distributed database like YugabyteDB, however, can:

  • Enforce constraints and conditions at write time.

  • Serialize updates across regions using Raft consensus.

  • Guarantee ACID semantics for every transaction.

  • Expose standard PostgreSQL isolation levels to control concurrency behavior.

Then, once correctness is established, you can safely stream changes to Kafka via CDC … confident that you’re distributing the canonical truth.

So, next time someone says “Kafka can handle conflicting writes,” you can smile and say:

  • “Kafka can deliver the messages. YugabyteDB decides what’s true.”

Have Fun!
