Load-balance YSQL connections to YugabyteDB using vanilla C (libpq)

YugabyteDB provides cluster-aware Smart Drivers for Java, Go, and Node.js that automatically route queries to the right node and balance load with full topology awareness.

Starting with PostgreSQL 16, the upstream libpq C client supports load_balance_hosts=random. This option tells libpq to randomize the host list for each new connection, making it easy to distribute sessions across multiple database servers.

Because YugabyteDB’s YSQL API is wire-compatible with PostgreSQL, this feature just works with YugabyteDB.

If you’re writing in C, this is the simplest way to get basic load balancing today… until YugabyteDB’s own cluster-aware Smart Driver for C arrives (currently in the works 🎉).

Why this matters (even if you have a load balancer)

YugabyteDB’s YSQL layer runs on multiple tservers, all capable of serving SQL. If your app or tool can open multiple independent connections, you can shave off a load balancer hop and spread sessions directly at the client:

  • Lower latency (one fewer network hop)

  • Simpler infra (no L4/L7 in front, when that’s acceptable)

  • Easy to test: it’s just a libpq setting

The connection string you need

The magic lives in libpq’s connection parameters:

# Multiple hosts (and optional matching ports)
host=yb-tserver-1,yb-tserver-2,yb-tserver-3
port=5433,5433,5433

# Recommended goodies
dbname=yugabyte user=yugabyte sslmode=require connect_timeout=5

# Randomize host choice on each new connection (requires libpq 16+)
load_balance_hosts=random
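The same settings can also be written as a single multi-host libpq URI (the hostnames are the same placeholders as above), which is handy for tools that only accept a URL; any keyword, including load_balance_hosts, can go in the query string:

```
postgresql://yugabyte@yb-tserver-1:5433,yb-tserver-2:5433,yb-tserver-3:5433/yugabyte?sslmode=require&connect_timeout=5&load_balance_hosts=random
```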

Notes:

  • If you provide N hosts and M ports:

    ◦ If M == 1, that single port is used for all hosts.

    ◦ If M == N, each host[i] pairs with port[i].

    ◦ Otherwise (M != 1 and M != N), libpq rejects the connection string as invalid.

  • Without load_balance_hosts=random, libpq tries hosts in order and only moves on if an earlier one fails. With it, the host order is randomized, leading to a more balanced distribution.

Tiny C demo (vanilla libpq)

This program opens --count connections using the same conninfo string and prints where each connection landed via inet_server_addr() and inet_server_port().

c_lbpq_demo.c

// c_lbpq_demo.c
// Build: gcc c_lbpq_demo.c -o c_lbpq_demo -lpq
// Run:   ./c_lbpq_demo --count 12 --conn "host=h1,h2,h3 port=5433 dbname=yugabyte user=yugabyte sslmode=require load_balance_hosts=random"

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

static void check_ok(PGconn *c) {
  if (PQstatus(c) != CONNECTION_OK) {
    fprintf(stderr, "Connection failed: %s\n", PQerrorMessage(c));
    PQfinish(c);
    exit(EXIT_FAILURE);
  }
}

int main(int argc, char **argv) {
  const char *conninfo = NULL;
  int count = 6;

  // super simple flag parse
  for (int i = 1; i < argc; i++) {
    if (!strcmp(argv[i], "--count") && i+1 < argc) {
      count = atoi(argv[++i]);
    } else if (!strcmp(argv[i], "--conn") && i+1 < argc) {
      conninfo = argv[++i];
    }
  }

  if (!conninfo) {
    fprintf(stderr, "Usage: %s --count N --conn \"<libpq conn string>\"\n", argv[0]);
    fprintf(stderr, "Example conn: host=h1,h2,h3 port=5433 dbname=yugabyte user=yugabyte sslmode=require load_balance_hosts=random\n");
    return EXIT_FAILURE;
  }

  for (int i = 0; i < count; i++) {
    PGconn *c = PQconnectdb(conninfo);
    check_ok(c);

    // Show where we landed
    PGresult *r = PQexec(c, "select inet_server_addr(), inet_server_port(), pg_backend_pid()");
    if (PQresultStatus(r) != PGRES_TUPLES_OK) {
      fprintf(stderr, "Query failed: %s\n", PQerrorMessage(c));
      PQclear(r);
      PQfinish(c);
      return EXIT_FAILURE;
    }

    char *addr = PQgetvalue(r, 0, 0);
    char *port = PQgetvalue(r, 0, 1);
    char *pid  = PQgetvalue(r, 0, 2);

    printf("#%02d -> server=%s:%s backend_pid=%s\n", i+1, addr, port, pid);

    PQclear(r);
    PQfinish(c);
  }

  return EXIT_SUCCESS;
}
Build & run

AlmaLinux 9 / RHEL-like (PG16 client dev headers shown; pick your distro’s package):

# Enable CRB (needed for dependencies):
sudo dnf config-manager --set-enabled crb

# Install EPEL (for perl-IPC-Run):
sudo dnf install -y epel-release

# Add the PostgreSQL PGDG repo:
sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm

# Disable the built-in PostgreSQL module:
sudo dnf -qy module disable postgresql

# Install PostgreSQL 16 client + dev headers:
sudo dnf install -y perl-IPC-Run postgresql16 postgresql16-devel

# Build the demo program:
gcc c_lbpq_demo.c -o c_lbpq_demo -I/usr/pgsql-16/include -L/usr/pgsql-16/lib -lpq

# Run against your cluster:
./c_lbpq_demo --count 12 --conn \
"host=127.0.0.1,127.0.0.2,127.0.0.3 port=5433 dbname=yugabyte user=yugabyte sslmode=prefer load_balance_hosts=random connect_timeout=5"

Sample output:

[root@localhost ~]# ./c_lbpq_demo --count 12 --conn "host=127.0.0.1,127.0.0.2,127.0.0.3 port=5433 dbname=yugabyte user=yugabyte sslmode=prefer load_balance_hosts=random connect_timeout=5"
#01 -> server=127.0.0.3:5433 backend_pid=692138
#02 -> server=127.0.0.3:5433 backend_pid=692149
#03 -> server=127.0.0.1:5433 backend_pid=692160
#04 -> server=127.0.0.2:5433 backend_pid=692171
#05 -> server=127.0.0.1:5433 backend_pid=692182
#06 -> server=127.0.0.2:5433 backend_pid=692193
#07 -> server=127.0.0.2:5433 backend_pid=692204
#08 -> server=127.0.0.1:5433 backend_pid=692215
#09 -> server=127.0.0.2:5433 backend_pid=692226
#10 -> server=127.0.0.1:5433 backend_pid=692237
#11 -> server=127.0.0.3:5433 backend_pid=692247
#12 -> server=127.0.0.2:5433 backend_pid=692257
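To eyeball the spread without counting by hand, you can tally the demo’s output with a little awk (shown here against a captured sample of the lines above; feed it your real output instead):

```shell
# Count how many demo connections landed on each server.
# Input lines look like: "#01 -> server=127.0.0.3:5433 backend_pid=692138"
demo_output='#01 -> server=127.0.0.3:5433 backend_pid=692138
#02 -> server=127.0.0.3:5433 backend_pid=692149
#03 -> server=127.0.0.1:5433 backend_pid=692160
#04 -> server=127.0.0.2:5433 backend_pid=692171
#05 -> server=127.0.0.1:5433 backend_pid=692182
#06 -> server=127.0.0.2:5433 backend_pid=692193'

tally=$(printf '%s\n' "$demo_output" |
  awk -F'server=' '{ sub(/ .*/, "", $2); count[$2]++ }
      END { for (s in count) print s, count[s] }' | sort)
echo "$tally"
# -> each of the three servers shows a count of 2
```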
Experiments to try

  • Flip load_balance_hosts off

    Run with it omitted. Connections will prefer the first host unless it’s unavailable.

  • Mismatched port list

    If your port= list is a single value (e.g., 5433), it’s applied to all hosts. If you supply multiple ports (e.g., 5433,5433,6432), libpq pairs host[i] with port[i].

  • Fault injection

    Stop one tserver. With load_balance_hosts=random, new connections will distribute among the remaining healthy hosts. Keep connect_timeout small to avoid delays.

Practical tips

  • DNS vs IPs: Prefer DNS names that follow your tserver lifecycle (K8s headless Service, etc.).

  • SSL: If sslmode=require, make sure every host supports SSL. Otherwise, use sslmode=prefer for mixed setups.

  • Timeouts: Use connect_timeout=5 (or similar) so a down host doesn’t stall your app.

  • Pooling: You can still front this with a pool (HikariCP, pgbouncer, YB Connection Manager). Client-side host lists help even pools distribute their initial connections.

Wrapping things up…

With PostgreSQL 16+ client libraries, you get native client-side load balancing via load_balance_hosts=random. Since YugabyteDB’s YSQL is wire-compatible with Postgres, this feature works seamlessly.

It’s a practical workaround for C applications until YugabyteDB’s Smart Driver for C lands, giving you easy, effective distribution of new connections across YSQL servers with just one connection string parameter.

Have Fun!

My wife put out the seeds. Chippy, our backyard pet, put out the cheeks!