Simulating a Multi-Cluster YugabyteDB + Istio Architecture on a Single Linux VM

Tip #1 explains the architectural concepts (multi-cluster boundaries, service discovery, and mesh-based identity) that this lab puts into practice.

In this tip, we simulate a real multi-cluster YugabyteDB + Istio architecture using three Kubernetes clusters running on a single Linux virtual machine.

While this environment does not model geographic distance or cloud-specific networking, it exercises the same architectural primitives (separate control planes, distinct cluster identities, cross-cluster service discovery, and mesh-based security) that are required for multi-region and multi-cloud YugabyteDB deployments.

The goal of this lab is not realism for its own sake, but understanding: by keeping the environment small and reproducible, we can focus on how the architecture works and why each component exists.

If you understand this setup, you understand the foundation of every production multi-cluster YugabyteDB deployment on Kubernetes.

What you’ll learn

By the end of this lab, you will be able to:

  • Create multiple independent Kubernetes clusters on a single Linux VM

  • Design a non-overlapping Pod and Service CIDR plan for multi-cluster environments

  • Verify baseline cross-cluster network reachability before introducing a service mesh

  • Understand how Istio turns separate clusters into a single logical network

  • Validate the exact networking and discovery behaviors required by multi-cluster YugabyteDB deployments

0) Prereqs on AlmaLinux 9

This lab is demonstrated on AlmaLinux 9, but the same steps apply to most modern Linux distributions with minimal adjustments.

0.1 Install Docker

sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf -y install docker-ce docker-ce-cli containerd.io

sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker

Quick check:

docker ps

0.2 Install kubectl

Pick a version (this example uses a stable-ish pinned version; adjust as you like):

KUBECTL_VERSION="v1.29.6"
curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
rm -f kubectl

kubectl version --client

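A pinned version is good; verifying the download is better. The Kubernetes release mirror publishes a matching .sha256 file next to each binary, and the check follows the pattern below (sketched against a throwaway file so it is self-contained; for the real thing you would fetch kubectl.sha256 from the same dl.k8s.io path):

```shell
# Same sha256sum --check pattern the kubectl install docs use,
# demonstrated on a throwaway file instead of a real download.
printf 'fake-kubectl-binary' > /tmp/kubectl-demo
sha256sum /tmp/kubectl-demo | awk '{print $1}' > /tmp/kubectl-demo.sha256

# "<hash>  <file>" piped into sha256sum --check prints "<file>: OK" on a match.
echo "$(cat /tmp/kubectl-demo.sha256)  /tmp/kubectl-demo" | sha256sum --check -
```
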
0.3 Install kind

KIND_VERSION="v0.24.0"
curl -Lo kind "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64"
sudo install -m 0755 kind /usr/local/bin/kind
rm -f kind

kind version

Why kind?
kind lets us create multiple, fully independent Kubernetes clusters with explicit networking and minimal abstraction: perfect for understanding multi-cluster Istio behavior.

1) Cluster topology & CIDR plan

We’re going to create three independent Kubernetes clusters on one VM:

  • region-a

  • region-b

  • region-c

The only “region-like” thing we care about in this lab is separation:

  • separate control planes

  • separate pod networks (no overlap)

  • routable node-to-node connectivity (via the Docker network)

CIDR Plan (non-overlapping)

Use distinct Pod and Service CIDRs per cluster:

Cluster     Pod CIDR       Service CIDR
region-a    10.10.0.0/16   10.11.0.0/16
region-b    10.20.0.0/16   10.21.0.0/16
region-c    10.30.0.0/16   10.31.0.0/16

This avoids the #1 cause of multi-cluster pain: overlapping networks.
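You can sanity-check a plan like this mechanically before creating any clusters. A minimal bash sketch (the ip2int and overlaps helpers are ours, not part of kind or Istio):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# overlaps CIDR1 CIDR2 -> succeeds (exit 0) if the two ranges overlap.
# Two CIDRs overlap iff one network address falls inside the other's range.
overlaps() {
  local n1="${1%/*}" l1="${1#*/}" n2="${2%/*}" l2="${2#*/}"
  local m1=$(( (0xFFFFFFFF << (32 - l1)) & 0xFFFFFFFF ))
  local m2=$(( (0xFFFFFFFF << (32 - l2)) & 0xFFFFFFFF ))
  local i1=$(( $(ip2int "$n1") & m1 )) i2=$(( $(ip2int "$n2") & m2 ))
  (( (i1 & m2) == i2 || (i2 & m1) == i1 ))
}

overlaps 10.10.0.0/16 10.11.0.0/16 && echo overlap || echo ok     # prints "ok"
overlaps 10.10.0.0/16 10.10.128.0/17 && echo overlap || echo ok   # prints "overlap"
```

Feed every pair from the table above through overlaps; all six pairings come back clean.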

2) kind cluster creation YAML (3 clusters)

Create a working directory:

mkdir -p ~/yb-multicluster-lab/kind

We will create three YAML files. The key is using different Pod and Service CIDRs for each cluster, so that endpoints from different clusters can never collide once Istio aggregates them into one service registry.

# Region A
cat <<EOF > ~/yb-multicluster-lab/kind/region-a.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: region-a
networking:
  podSubnet: "10.10.0.0/16"
  serviceSubnet: "10.11.0.0/16"
nodes:
- role: control-plane
EOF

# Region B
cat <<EOF > ~/yb-multicluster-lab/kind/region-b.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: region-b
networking:
  podSubnet: "10.20.0.0/16"
  serviceSubnet: "10.21.0.0/16"
nodes:
- role: control-plane
EOF

# Region C
cat <<EOF > ~/yb-multicluster-lab/kind/region-c.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: region-c
networking:
  podSubnet: "10.30.0.0/16"
  serviceSubnet: "10.31.0.0/16"
nodes:
- role: control-plane
EOF

Create the clusters

kind create cluster --name region-a --config ~/yb-multicluster-lab/kind/region-a.yaml
kind create cluster --name region-b --config ~/yb-multicluster-lab/kind/region-b.yaml
kind create cluster --name region-c --config ~/yb-multicluster-lab/kind/region-c.yaml

Example:

[root@localhost kind]# kind create cluster --name region-a --config ~/yb-multicluster-lab/kind/region-a.yaml
Creating cluster "region-a" ...
 ✓ Ensuring node image (kindest/node:v1.31.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-region-a"
You can now use your cluster with:

kubectl cluster-info --context kind-region-a

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

[root@localhost kind]# kind create cluster --name region-b --config ~/yb-multicluster-lab/kind/region-b.yaml
Creating cluster "region-b" ...
 ✓ Ensuring node image (kindest/node:v1.31.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-region-b"
You can now use your cluster with:

kubectl cluster-info --context kind-region-b

Thanks for using kind! 😊

[root@localhost kind]# kind create cluster --name region-c --config ~/yb-multicluster-lab/kind/region-c.yaml
Creating cluster "region-c" ...
 ✓ Ensuring node image (kindest/node:v1.31.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-region-c"
You can now use your cluster with:

kubectl cluster-info --context kind-region-c

Thanks for using kind! 😊

Set convenience variables

# Define contexts for easy use
export CTX_A="kind-region-a"
export CTX_B="kind-region-b"
export CTX_C="kind-region-c"

🛡️ Why we are doing this

In a multi-cluster Istio setup, context naming is everything. If every context were simply named “kind”, the istioctl create-remote-secret commands would have no way to distinguish Cluster A from Cluster B. By naming them region-a, region-b, and region-c, we ensure the “Cross-Cluster Secret Link” step later on will work perfectly.
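The convention itself is mechanical: kind registers each cluster in your kubeconfig under the context name kind-<cluster-name>. A self-contained sketch of the mapping the CTX_* variables rely on:

```shell
# kind's kubeconfig context for a cluster is always "kind-" + the cluster name,
# which is why CTX_A/CTX_B/CTX_C are spelled the way they are.
for REGION in region-a region-b region-c; do
  CTX="kind-${REGION}"
  echo "cluster=${REGION} context=${CTX}"
done
```
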

3) Verify that all three clusters were created successfully

Run the following command to check which clusters kind actually sees:

kind get clusters

Expected output:

region-a
region-b
region-c

3.1 🚦 Check Kubernetes Contexts

Next, verify that your local kubeconfig has all three contexts available for your CTX variables to work:

kubectl config get-contexts

Expected output:

CURRENT   NAME            CLUSTER         AUTHINFO        NAMESPACE
          kind-region-a   kind-region-a   kind-region-a
          kind-region-b   kind-region-b   kind-region-b
*         kind-region-c   kind-region-c   kind-region-c

🛠️ Final Sanity Check: Reachability

Run this loop to make sure you can actually communicate with the API server of all three clusters:

for CTX in "${CTX_A}" "${CTX_B}" "${CTX_C}"; do
  echo "Checking $CTX..."
  kubectl --context="$CTX" get nodes
done

Expected output:

Checking kind-region-a...
NAME                     STATUS   ROLES           AGE   VERSION
region-a-control-plane   Ready    control-plane   15m   v1.31.0
Checking kind-region-b...
NAME                     STATUS   ROLES           AGE   VERSION
region-b-control-plane   Ready    control-plane   15m   v1.31.0
Checking kind-region-c...
NAME                     STATUS   ROLES           AGE   VERSION
region-c-control-plane   Ready    control-plane   15m   v1.31.0

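If you would rather fail fast than scroll through errors, test the exit status of each kubectl call and break on the first unreachable cluster. A self-contained sketch, with check_a/check_b/check_c standing in for the kubectl --context=... get nodes calls:

```shell
# Stand-ins for "kubectl --context=... get nodes"; check_c simulates
# an unreachable API server.
check_a() { true; }
check_b() { true; }
check_c() { false; }

for NAME in a b c; do
  if "check_$NAME"; then
    echo "cluster $NAME reachable"
  else
    echo "cluster $NAME UNREACHABLE, stopping" >&2
    break
  fi
done
```
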
4) Install Istio

export ISTIO_VERSION=1.24.2

cd ~/yb-multicluster-lab
curl -L https://istio.io/downloadIstio | TARGET_ARCH=x86_64 sh -
export PATH="$PWD/istio-${ISTIO_VERSION}/bin:$PATH"
istioctl version

Expected output (only the client version is available at this point; the control plane and data plane lines appear once Istio is installed into the clusters):

client version: 1.24.2

5) Establish Shared Trust

With all three clusters verified as Ready, we can now establish the cryptographic foundation. This is what allows pods in Cluster A to trust pods in Cluster C.

In this step, we are telling Istio:

  • Don’t generate your own random certificates. Use these specific ones so all clusters speak the same security language.

This is the “one mesh trust domain” concept from Tip #1, but now in practice.

cd ~/yb-multicluster-lab/istio-${ISTIO_VERSION}

for CTX in "${CTX_A}" "${CTX_B}" "${CTX_C}"; do
  echo "--- Installing Shared Root CA in $CTX ---"
  kubectl --context="$CTX" create namespace istio-system
  kubectl --context="$CTX" -n istio-system create secret generic cacerts \
    --from-file=ca-cert.pem=./samples/certs/ca-cert.pem \
    --from-file=ca-key.pem=./samples/certs/ca-key.pem \
    --from-file=root-cert.pem=./samples/certs/root-cert.pem \
    --from-file=cert-chain.pem=./samples/certs/cert-chain.pem
done

Expected output:

--- Installing Shared Root CA in kind-region-a ---
namespace/istio-system created
secret/cacerts created
--- Installing Shared Root CA in kind-region-b ---
namespace/istio-system created
secret/cacerts created
--- Installing Shared Root CA in kind-region-c ---
namespace/istio-system created
secret/cacerts created

6) Install Istio Discovery (Primary)

Now that the secrets are in place, install the Istio control plane into each cluster. Alongside the network name, we give each control plane a shared mesh ID and its own cluster name; the cluster name must match the name we use for the remote secrets in Section 8.2.

for NET in a b c; do
  CTX="kind-region-$NET"
  echo "--- Installing Istio in $CTX (Network: net-$NET) ---"
  istioctl install --context="$CTX" -y \
    --set values.global.meshID=mesh1 \
    --set values.global.multiCluster.clusterName=region-$NET \
    --set values.global.network=net-$NET
done

Expected output:

--- Installing Istio in kind-region-a (Network: net-a) ---
        |\
        | \
        |  \
        |   \
      /||    \
     / ||     \
    /  ||      \
   /   ||       \
  /    ||        \
 /     ||         \
/______||__________\
____________________
  \__       _____/
     \_____/

WARNING: Istio 1.24.0 may be out of support (EOL) already: see https://istio.io/latest/docs/releases/supported-releases/ for supported releases
✔ Istio core installed ⛵️
✔ Istiod installed 🧠
✔ Ingress gateways installed 🛬
✔ Installation complete
--- Installing Istio in kind-region-b (Network: net-b) ---
        |\
        | \
        |  \
        |   \
      /||    \
     / ||     \
    /  ||      \
   /   ||       \
  /    ||        \
 /     ||         \
/______||__________\
____________________
  \__       _____/
     \_____/

WARNING: Istio 1.24.0 may be out of support (EOL) already: see https://istio.io/latest/docs/releases/supported-releases/ for supported releases
✔ Istio core installed ⛵️
✔ Istiod installed 🧠
✔ Ingress gateways installed 🛬
✔ Installation complete
--- Installing Istio in kind-region-c (Network: net-c) ---
        |\
        | \
        |  \
        |   \
      /||    \
     / ||     \
    /  ||      \
   /   ||       \
  /    ||        \
 /     ||         \
/______||__________\
____________________
  \__       _____/
     \_____/

WARNING: Istio 1.24.0 may be out of support (EOL) already: see https://istio.io/latest/docs/releases/supported-releases/ for supported releases
✔ Istio core installed ⛵️
✔ Istiod installed 🧠
✔ Ingress gateways installed 🛬
✔ Installation complete

7) Cross-Network Connectivity (East-West Gateways)

Once the main installation finishes without errors, move on to the East-West gateways. These are the critical “bridges” for multi-cluster YugabyteDB traffic: they carry cross-cluster traffic between the kind nodes, which are just Docker containers on this VM.

for NET in a b c; do
  CTX="kind-region-$NET"
  samples/multicluster/gen-eastwest-gateway.sh --network net-$NET | \
    istioctl --context="$CTX" install -y -f -

  # Expose services
  kubectl --context="$CTX" apply -n istio-system -f samples/multicluster/expose-services.yaml
done

Verify the east-west gateways are ready everywhere:

for CTX in "${CTX_A}" "${CTX_B}" "${CTX_C}"; do
  echo "--- Gateways in $CTX ---"
  kubectl --context="$CTX" -n istio-system get pods -l istio=eastwestgateway
done

Expected output:

--- Gateways in kind-region-a ---
NAME                                     READY   STATUS    RESTARTS   AGE
istio-eastwestgateway-7b87d89698-9g92q   1/1     Running   0          17s
--- Gateways in kind-region-b ---
NAME                                     READY   STATUS    RESTARTS   AGE
istio-eastwestgateway-774ff99778-8jd2m   1/1     Running   0          14s
--- Gateways in kind-region-c ---
NAME                                    READY   STATUS    RESTARTS   AGE
istio-eastwestgateway-f4ccccdf4-2nt5r   1/1     Running   0          12s

8) MeshNetwork Topology Mapping
8.1 Capture Docker Networking Metadata

Because the kind nodes are Docker containers, we need their Docker-network IP addresses and the east-west gateways’ NodePorts to map the mesh network topology.

RA_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' region-a-control-plane)
RB_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' region-b-control-plane)
RC_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' region-c-control-plane)

RA_PORT=$(kubectl --context="${CTX_A}" -n istio-system get svc istio-eastwestgateway -o jsonpath='{.spec.ports[?(@.port==15443)].nodePort}')
RB_PORT=$(kubectl --context="${CTX_B}" -n istio-system get svc istio-eastwestgateway -o jsonpath='{.spec.ports[?(@.port==15443)].nodePort}')
RC_PORT=$(kubectl --context="${CTX_C}" -n istio-system get svc istio-eastwestgateway -o jsonpath='{.spec.ports[?(@.port==15443)].nodePort}')

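For reference: this lab never applies an explicit meshNetworks configuration (Istio’s automatic discovery plus the ServiceEntry in Section 9.2 cover us), but the values captured above are exactly what such a mapping would contain. A sketch with placeholder address and port values standing in for $RA_IP and $RA_PORT; illustrative only, not meant to be applied:

```shell
# Illustrative meshNetworks fragment; 172.18.0.2 and 30443 are placeholders
# for the $RA_IP / $RA_PORT values captured above.
cat <<'EOF' > /tmp/meshnetworks-sketch.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshNetworks:
        net-a:
          endpoints:
          - fromRegistry: region-a
          gateways:
          - address: 172.18.0.2
            port: 30443
EOF
echo "wrote /tmp/meshnetworks-sketch.yaml"
```
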
8.2 Link Control Planes (Remote Secrets)

Use the internal Docker IPs captured above for the --server addresses.

istioctl create-remote-secret --context="${CTX_A}" --name=region-a --server="https://${RA_IP}:6443" | kubectl apply -f - --context="${CTX_B}"
istioctl create-remote-secret --context="${CTX_A}" --name=region-a --server="https://${RA_IP}:6443" | kubectl apply -f - --context="${CTX_C}"

istioctl create-remote-secret --context="${CTX_B}" --name=region-b --server="https://${RB_IP}:6443" | kubectl apply -f - --context="${CTX_A}"
istioctl create-remote-secret --context="${CTX_B}" --name=region-b --server="https://${RB_IP}:6443" | kubectl apply -f - --context="${CTX_C}"

istioctl create-remote-secret --context="${CTX_C}" --name=region-c --server="https://${RC_IP}:6443" | kubectl apply -f - --context="${CTX_A}"
istioctl create-remote-secret --context="${CTX_C}" --name=region-c --server="https://${RC_IP}:6443" | kubectl apply -f - --context="${CTX_B}"

🔍 Architectural Note: The 10-Second Latency

If your curl tests are successful but show a 10.1s total time, don’t panic! You are seeing Istio’s Passive Health Checking in action.

The Cause: The remote secrets we created allow Cluster C to see the internal Pod IPs (10.x.x.x) of the other regions. Because everything runs on one local machine, those internal IPs are unreachable across clusters. The Istio proxy (Envoy) tries to connect to them, hits a TCP timeout, and then intelligently fails over to the working gateways we configure via a ServiceEntry in Section 9.2.


We will keep these secrets for now, as they are required for cross-cluster database management. We will eliminate the lag with a manual override in Section 9.2 and revisit the remaining discovery “noise” in Section 9.3.

8.3 🔍 Verification Checklist

After running the commands above, you should verify the status in each cluster.

Check Cluster A:

istioctl remote-clusters --context="${CTX_A}"

Expected: synced status for region-b and region-c.

NAME           SECRET                                        STATUS     ISTIOD
Kubernetes                                                   synced     istiod-6fbf849d94-t9w9x
region-b       istio-system/istio-remote-secret-region-b     synced     istiod-6fbf849d94-t9w9x
region-c       istio-system/istio-remote-secret-region-c     synced     istiod-6fbf849d94-t9w9x

Check Cluster B:

istioctl remote-clusters --context="${CTX_B}"

Expected: synced status for region-a and region-c.

NAME           SECRET                                        STATUS     ISTIOD
Kubernetes                                                   synced     istiod-6fbf849d94-ql6pq
region-a       istio-system/istio-remote-secret-region-a     synced     istiod-6fbf849d94-ql6pq
region-c       istio-system/istio-remote-secret-region-c     synced     istiod-6fbf849d94-ql6pq

Check Cluster C:

istioctl remote-clusters --context="${CTX_C}"

Expected: synced status for region-a and region-b.

NAME           SECRET                                        STATUS     ISTIOD
Kubernetes                                                   synced     istiod-6fbf849d94-2vr8w
region-a       istio-system/istio-remote-secret-region-a     synced     istiod-6fbf849d94-2vr8w
region-b       istio-system/istio-remote-secret-region-b     synced     istiod-6fbf849d94-2vr8w

9) Istio Multi-Cluster “Smoke Test”

Before committing to a complex stateful deployment like YugabyteDB, it is essential to validate the underlying “plumbing” of your service mesh. This verification phase uses a lightweight HelloWorld application to prove that three critical systems are functioning in harmony: Global Service Discovery, Cross-Network Routing, and Mutual TLS (mTLS) Security.

9.1 Deploy HelloWorld & Curl

for NET in a b c; do
  CTX="kind-region-$NET"
  kubectl --context="$CTX" create namespace sample
  kubectl --context="$CTX" label namespace sample istio-injection=enabled
done

# Deploy Service everywhere
for CTX in "${CTX_A}" "${CTX_B}" "${CTX_C}"; do
  kubectl --context="$CTX" -n sample apply -f samples/helloworld/helloworld.yaml -l service=helloworld
done

# Version 1 in A, Version 2 in B, Curl client in C
kubectl --context="${CTX_A}" -n sample apply -f samples/helloworld/helloworld.yaml -l version=v1
kubectl --context="${CTX_B}" -n sample apply -f samples/helloworld/helloworld.yaml -l version=v2
kubectl --context="${CTX_C}" -n sample apply -f samples/curl/curl.yaml

9.2 Solving Latency with a Manual Network Override (ServiceEntry)

While Istio’s automatic discovery is great, in our kind environment it picks up unreachable internal Pod IPs. To eliminate the 10-second connection delay, we will use a ServiceEntry to force-map the traffic directly to our working East-West gateways.

cat <<EOF | kubectl --context="${CTX_C}" -n sample apply -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: helloworld-manual-fix
spec:
  hosts:
  - helloworld.sample.svc.cluster.local
  location: MESH_INTERNAL
  ports:
  - number: 5000
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: $RA_IP
    network: net-a
    ports:
      http: $RA_PORT
  - address: $RB_IP
    network: net-b
    ports:
      http: $RB_PORT
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld-mtls-fix
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF

9.3 🔍 Verification

POD_C=$(kubectl --context="${CTX_C}" -n sample get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')

kubectl --context="${CTX_C}" -n sample exec -c curl "$POD_C" -- \
  sh -lc 'for i in $(seq 1 10); do curl -sS --connect-timeout 5 helloworld.sample:5000/hello; echo ""; sleep 0.5; done'

Expected output:

Hello version: v2, instance: helloworld-v2-7dcd9b496d-d8zwz
Hello version: v2, instance: helloworld-v2-7dcd9b496d-d8zwz
Hello version: v1, instance: helloworld-v1-6d65866976-7r98h
Hello version: v2, instance: helloworld-v2-7dcd9b496d-d8zwz
Hello version: v1, instance: helloworld-v1-6d65866976-7r98h
Hello version: v1, instance: helloworld-v1-6d65866976-7r98h
Hello version: v1, instance: helloworld-v1-6d65866976-7r98h
Hello version: v2, instance: helloworld-v2-7dcd9b496d-d8zwz
Hello version: v2, instance: helloworld-v2-7dcd9b496d-d8zwz
Hello version: v1, instance: helloworld-v1-6d65866976-7r98h

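If you would rather count the v1/v2 distribution than eyeball the toggling, pipe the responses through sort and uniq. Sketched against canned lines so it runs anywhere; in the lab you would pipe the kubectl exec output instead:

```shell
# Canned responses stand in for the live output of the curl loop above.
printf 'Hello version: v2, instance: a\nHello version: v1, instance: b\nHello version: v2, instance: a\n' \
  | grep -o 'version: v[0-9]' | sort | uniq -c
```
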
What to look for:

  • Load Balancing: Notice how the version toggles between v1 and v2.

  • Cross-Cluster: Cluster C runs no HelloWorld pods, so every single response is traveling over the virtual wire to Region A or B.

If you observe a consistent 10-second delay in your requests, you are witnessing Istio’s retry logic. Because we linked our clusters using create-remote-secret, Cluster C has discovered the internal Pod IPs (10.x.x.x) of Regions A and B. In our kind-on-Linux lab, these IPs are unreachable across clusters.

Can we fix this? Yes. Deleting the remote secrets on Cluster C would immediately remove these ‘Ghost IPs’ and eliminate the lag. However, we are not going to do that.

In the next part of this series, we will deploy YugabyteDB Anywhere. That management plane requires these secrets to communicate with the Kubernetes APIs in other regions. For now, we will accept the discovery “noise” in exchange for a fully functional multi-cluster control plane.

Conclusion: From Smoke Test to State

If you’ve followed along this far, you now have a fully functional, multi-network Service Mesh running on a single Linux VM. By verifying our connectivity with the HelloWorld app, we’ve confirmed that our certificates are trusted, our gateways are routing, and our manual overrides are bypassing the limitations of a local Docker environment.

We kept those “Ghost IP” secrets in Section 9.3 for a reason: YugabyteDB Anywhere.

In the third and final part of this series, we will move from stateless testing to stateful reality. We will deploy YugabyteDB across our three regions, leveraging our Istio fabric to ensure that a SQL query hitting Region C can seamlessly and securely retrieve data replicated across Regions A and B.

Stay tuned for Part 3: Deploying Global YugabyteDB on Istio! (Link coming soon!)

Have Fun!