Exploring Tablet Placement with yb_tablet_metadata

In a distributed SQL database like YugabyteDB, understanding where data actually lives is just as important as understanding the schema itself.

Once data is automatically sharded and replicated across nodes, operators and developers often find themselves asking practical questions such as:

  • Which node owns this row?

  • Where is the tablet leader?

  • Are replicas evenly distributed?

Historically, answering these questions meant jumping between system tables, node-local views such as yb_local_tablets, and administrative tooling.

Starting in YugabyteDB 2025.2, a new system view simplifies this dramatically: yb_tablet_metadata.

This view exposes cluster-wide tablet metadata directly in YSQL, making it easy to reason about tablet placement, leadership, and replica distribution without leaving your SQL client.

A quick note on yb_tablet_metadata vs yb_local_tablets

Prior to YugabyteDB 2025.2, tablet information was available via the yb_local_tablets view. That view is intentionally node-scoped: it only shows tablets hosted by the specific tablet server you are connected to. While useful for local inspection and debugging, it cannot provide a complete cluster-wide view on its own.

The new yb_tablet_metadata view complements this by providing a global perspective:

  • All tablets for all tables are visible from any node

  • Current tablet leaders and replicas are explicitly exposed

  • Tablet hash or range boundaries are included, enabling precise key-to-tablet mapping

In short, yb_local_tablets answers “what lives on this node?”, while yb_tablet_metadata answers “how is this table distributed across the entire universe?”

Demo overview: Which node owns this row?

In this demo, we’ll use yugabyted to spin up a small local cluster and then use yb_tablet_metadata to:

  1. Inspect tablet distribution for a table

  2. Identify tablet leaders and replicas

  3. Map a specific row key to its owning tablet and leader

All using standard YSQL queries.

Step 1: Start a 3-node local universe with yugabyted

On a single machine, start three YugabyteDB nodes and join them into one universe:

# Node 1
./bin/yugabyted start \
  --advertise_address=127.0.0.1 \
  --base_dir=$HOME/yb/node1

# Node 2
./bin/yugabyted start \
  --advertise_address=127.0.0.2 \
  --base_dir=$HOME/yb/node2 \
  --join=127.0.0.1

# Node 3
./bin/yugabyted start \
  --advertise_address=127.0.0.3 \
  --base_dir=$HOME/yb/node3 \
  --join=127.0.0.1

On macOS, you may need to configure additional loopback aliases for 127.0.0.2 and 127.0.0.3.
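If the extra addresses are not yet configured, a common way to add them on macOS is with loopback aliases (requires sudo; these do not persist across reboots):

```shell
# Add loopback aliases so nodes 2 and 3 can bind to their own addresses.
sudo ifconfig lo0 alias 127.0.0.2
sudo ifconfig lo0 alias 127.0.0.3
```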

Step 2: Create a table with multiple tablets

Create a simple hash-sharded table and explicitly split it into multiple tablets:

CREATE TABLE demo_tablet_map (
  a text PRIMARY KEY,
  payload text
) SPLIT INTO 6 TABLETS;

Insert some data:

INSERT INTO demo_tablet_map
SELECT 'k' || gs::text, repeat('x', 200)
FROM generate_series(1, 5000) gs;

Step 3: Inspect tablet distribution, leaders, and replicas

The following query provides a complete picture of how the table is distributed across the cluster:

SELECT
  tablet_id,
  start_hash_code,
  end_hash_code,
  leader,
  replicas
FROM yb_tablet_metadata
WHERE db_name = current_database()
  AND relname = 'demo_tablet_map'
ORDER BY start_hash_code;

From a single result set, you can see:

  • How many tablets the table has

  • The hash boundaries for each tablet

  • Which node is currently the leader

  • Where replicas are placed

Example:

yugabyte=# SELECT
yugabyte-#   tablet_id,
yugabyte-#   start_hash_code,
yugabyte-#   end_hash_code,
yugabyte-#   leader,
yugabyte-#   replicas
yugabyte-# FROM yb_tablet_metadata
yugabyte-# WHERE db_name = current_database()
yugabyte-#   AND relname = 'demo_tablet_map'
yugabyte-# ORDER BY start_hash_code;
            tablet_id             | start_hash_code | end_hash_code |     leader     |                    replicas
----------------------------------+-----------------+---------------+----------------+------------------------------------------------
 de90b06ca0f24f04bdb76fb9cdf4817a |               0 |         10922 | 127.0.0.2:5433 | {127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433}
 636ac37599ef46599e64e9bbe279fa03 |           10922 |         21845 | 127.0.0.1:5433 | {127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433}
 459e10d9c57b4ee19db6991e598e2be7 |           21845 |         32768 | 127.0.0.1:5433 | {127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433}
 02aec5ff5fd34f0aa4983f95c990feca |           32768 |         43690 | 127.0.0.3:5433 | {127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433}
 6288af3adc3b4ffd93e2f4ec2372ddc8 |           43690 |         54613 | 127.0.0.3:5433 | {127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433}
 4fda5c30974d4a9ea85ad0557881d1eb |           54613 |         65536 | 127.0.0.2:5433 | {127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433}
(6 rows)

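The start_hash_code and end_hash_code values above follow from YugabyteDB's 16-bit hash space (65536 hash codes) being divided evenly across the tablets. A small Python sketch reproduces the six boundaries shown in the output:

```python
# YugabyteDB hashes primary-key values into a 16-bit space: codes 0..65535.
HASH_SPACE = 65536

def tablet_boundaries(num_tablets):
    """Evenly split the hash space into [start, end) ranges, one per tablet."""
    edges = [i * HASH_SPACE // num_tablets for i in range(num_tablets + 1)]
    return list(zip(edges, edges[1:]))

for start, end in tablet_boundaries(6):
    print(start, end)
# 0 10922 / 10922 21845 / 21845 32768 / 32768 43690 / 43690 54613 / 54613 65536
```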
Tablet replicas are placed automatically by YugabyteDB, but that doesn’t guarantee a perfectly even distribution at all times.

Temporary imbalances can occur due to:

  • Recent node restarts or failures

  • Ongoing rebalancing

  • A small number of tablets relative to nodes

  • Leadership concentration on a subset of nodes

Using yb_tablet_metadata, we can quickly check how replicas are distributed across nodes:

SELECT
  replica,
  count(*) AS replica_count
FROM (
  SELECT unnest(replicas) AS replica
  FROM yb_tablet_metadata
  WHERE relname = 'demo_tablet_map'
) r
GROUP BY replica
ORDER BY replica_count DESC;

This query answers a simple but important question: How many tablet replicas does each node currently host?

In a perfectly even distribution, each node would host roughly the same number of replicas.
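For this demo the expected number is easy to compute: total replicas are tablets times the replication factor, spread over the nodes. A quick sketch with this cluster's numbers:

```python
# 6 tablets, each replicated 3 times (RF=3), spread across 3 nodes.
tablets = 6
replication_factor = 3
nodes = 3

replicas_per_node = tablets * replication_factor / nodes
print(replicas_per_node)  # 6.0 -- each node hosts 6 replicas when balanced
```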

Example:

yugabyte=# SELECT
yugabyte-#   replica,
yugabyte-#   count(*) AS replica_count
yugabyte-# FROM (
yugabyte(#   SELECT unnest(replicas) AS replica
yugabyte(#   FROM yb_tablet_metadata
yugabyte(#   WHERE relname = 'demo_tablet_map'
yugabyte(# ) r
yugabyte-# GROUP BY replica
yugabyte-# ORDER BY replica_count DESC;
    replica     | replica_count
----------------+---------------
 127.0.0.1:5433 |             6
 127.0.0.2:5433 |             6
 127.0.0.3:5433 |             6
(3 rows)

However, in practice you may see output like:

127.0.0.1:9100   | 8
127.0.0.2:9100   | 6
127.0.0.3:9100   | 4

This does not indicate a problem by itself: the cluster may still be healthy and fully replicated. It simply reflects the current state of placement and rebalancing.

Step 4: Map a specific key to its tablet and leader

One of the most practical uses of yb_tablet_metadata is troubleshooting access to a specific row.

To determine which tablet and which leader owns the row with key k123:

SELECT
  yb_hash_code('k123'::text) AS hash_code,
  t.tablet_id,
  t.start_hash_code,
  t.end_hash_code,
  t.leader
FROM yb_tablet_metadata t
WHERE t.relname = 'demo_tablet_map'
  AND yb_hash_code('k123'::text) >= t.start_hash_code
  AND yb_hash_code('k123'::text) <  t.end_hash_code;

This directly maps:

row key → hash code → tablet → leader node

All using nothing more than SQL.

Example:

yugabyte=# SELECT
yugabyte-#   yb_hash_code('k123'::text) AS hash_code,
yugabyte-#   t.tablet_id,
yugabyte-#   t.start_hash_code,
yugabyte-#   t.end_hash_code,
yugabyte-#   t.leader
yugabyte-# FROM yb_tablet_metadata t
yugabyte-# WHERE t.relname = 'demo_tablet_map'
yugabyte-#   AND yb_hash_code('k123'::text) >= t.start_hash_code
yugabyte-#   AND yb_hash_code('k123'::text) <  t.end_hash_code;
 hash_code |            tablet_id             | start_hash_code | end_hash_code |     leader
-----------+----------------------------------+-----------------+---------------+----------------
      9822 | de90b06ca0f24f04bdb76fb9cdf4817a |               0 |         10922 | 127.0.0.2:5433
(1 row)

In a previous YugabyteDB tip, Map a Table Row to the Tablet Leader Node, we were doing this same operation the hard way!
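The same lookup can also be done client-side once you have a key's hash code (from yb_hash_code) and the boundary list from yb_tablet_metadata. A sketch of that mapping in Python, using the start_hash_code values and (truncated) tablet ids from this demo:

```python
import bisect

# start_hash_code values and tablet ids taken from the
# yb_tablet_metadata output earlier (ids truncated for readability).
starts = [0, 10922, 21845, 32768, 43690, 54613]
tablet_ids = ["de90b06c", "636ac375", "459e10d9", "02aec5ff", "6288af3a", "4fda5c30"]

def owning_tablet(hash_code):
    """Return the tablet whose [start, end) hash range contains hash_code."""
    idx = bisect.bisect_right(starts, hash_code) - 1
    return tablet_ids[idx]

print(owning_tablet(9822))  # 'k123' hashes to 9822 -> tablet de90b06c...
```

This is exactly the predicate the SQL above evaluates: find the row where hash_code falls between start_hash_code (inclusive) and end_hash_code (exclusive).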

Step 5 (for fun): Observe leadership changes in real time

To see tablet leadership move automatically:

  1. Identify a tablet leader from the previous query

  2. Stop that node using yugabyted

  3. Re-run the tablet query

You’ll see leadership transition to another replica immediately, demonstrating YugabyteDB’s built-in fault tolerance!

Example:

yugabyte=# \! yugabyted stop --base_dir=~/yb02
Stopped yugabyted using config /root/yb02/conf/yugabyted.conf.

yugabyte=# SELECT
yugabyte-#   yb_hash_code('k123'::text) AS hash_code,
yugabyte-#   t.tablet_id,
yugabyte-#   t.start_hash_code,
yugabyte-#   t.end_hash_code,
yugabyte-#   t.leader
yugabyte-# FROM yb_tablet_metadata t
yugabyte-# WHERE t.relname = 'demo_tablet_map'
yugabyte-#   AND yb_hash_code('k123'::text) >= t.start_hash_code
yugabyte-#   AND yb_hash_code('k123'::text) <  t.end_hash_code;
 hash_code |            tablet_id             | start_hash_code | end_hash_code |     leader
-----------+----------------------------------+-----------------+---------------+----------------
      9822 | de90b06ca0f24f04bdb76fb9cdf4817a |               0 |         10922 | 127.0.0.1:5433
(1 row)

Note that the leader for the row with key k123 is now 127.0.0.1.

Conclusion

Introduced in YugabyteDB 2025.2, the yb_tablet_metadata view significantly improves day-to-day observability for distributed SQL workloads.

It provides:

  • A cluster-wide view of tablet placement

  • Clear visibility into tablet leaders and replicas

  • A straightforward way to map individual rows to their owning tablets

  • SQL-native insight into YugabyteDB’s distributed architecture

By bridging the gap between logical SQL operations and the physical reality of a distributed database, yb_tablet_metadata makes troubleshooting, learning, and system validation far simpler.

If you’ve ever wondered where your data actually lives, this view finally gives you the answer, directly from YSQL.

Have Fun!

One of my favorite backyard trees! ❄️🌲 Doesn’t she look beautiful all dressed in her snowy holiday coat?