Simulating Distributed SQL with YugabyteDB on Kubernetes (Part 1)

Part 1: Installing YugabyteDB on Kubernetes with Minikube

Running a local multi-node YugabyteDB cluster on Kubernetes is an excellent way to test out distributed SQL features on your own workstation. In this first part of our multi-part series, we’ll walk through deploying a 3-node YugabyteDB cluster using Helm on Minikube (running on AlmaLinux 9).

Prerequisites

Ensure the following are installed and configured on your AlmaLinux 9 system:

  1. Minikube
  2. kubectl
  3. Helm
  4. Virtualization support (KVM, for the kvm2 driver)

Check that the required components are installed:

[root@localhost ~]# minikube version
minikube version: v1.36.0
commit: f8f52f5de11fc6ad8244afac475e1d0f96841df1-dirty

[root@localhost ~]# kubectl version --client
Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

[root@localhost ~]# helm version
version.BuildInfo{Version:"v3.14.4", GitCommit:"81c902a123462fd4052bc5e9aa9c513c4c8fc142", GitTreeState:"clean", GoVersion:"go1.21.9"}

[root@localhost ~]# systemctl status virtqemud.socket
● virtqemud.socket - libvirt QEMU daemon socket
     Loaded: loaded (/usr/lib/systemd/system/virtqemud.socket; enabled; preset: enabled)
     Active: active (listening) since Fri 2025-07-11 14:30:45 UTC; 39min ago
      Until: Fri 2025-07-11 14:30:45 UTC; 39min ago
   Triggers: ● virtqemud.service
     Listen: /run/libvirt/virtqemud-sock (Stream)
     CGroup: /system.slice/virtqemud.socket

Jul 11 14:30:45 localhost systemd[1]: Listening on libvirt QEMU daemon socket.
Step 1: Start Minikube

Start a Minikube cluster with enough resources for a 3-node YugabyteDB deployment. I recommend at least 6 CPUs and 8GB of RAM:

minikube start --driver=kvm2 --cpus=6 --memory=8192 --disk-size=40g --force

Note: I used the --force option in the start command because it’s required when running Minikube as the root user with the kvm2 driver. While running as root isn’t considered best practice, I did so here to keep the examples simple.

Example:

[root@localhost ~]# minikube start --driver=kvm2 --cpus=6 --memory=8192 --disk-size=40g --force
😄  minikube v1.36.0 on Almalinux 9.6 (kvm/amd64)
❗  minikube skips various validations when --force is supplied; this may lead to unexpected behavior
✨  Using the kvm2 driver based on user configuration
🛑  The "kvm2" driver should not be used with root privileges. If you wish to continue as root, use --force.
💡  If you are running minikube within a VM, consider using --driver=none:
📘    https://minikube.sigs.k8s.io/docs/reference/drivers/none/
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🔥  Creating kvm2 VM (CPUs=6, Memory=8192MB, Disk=40960MB) ...
❗  Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.36.0
🐳  Preparing Kubernetes v1.33.1 on Docker 28.0.4 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

❗  /usr/local/bin/kubectl is version 1.30.1, which may have incompatibilities with Kubernetes 1.33.1.
    ▪ Want kubectl v1.33.1? Try 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
You can verify the node is up and ready with a kubectl command:
kubectl get nodes

Example:

[root@localhost ~]# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
minikube   Ready    control-plane   2m45s   v1.33.1

You can also view the total allocatable capacity of your Minikube node:

kubectl describe node minikube | grep -A5 "Allocatable"

Example:

[root@localhost ~]# kubectl describe node minikube | grep -A5 "Allocatable"
Allocatable:
  cpu:                6
  ephemeral-storage:  36372888Ki
  hugepages-2Mi:      0
  memory:             8132752Ki
  pods:               110
--
  Normal  NodeAllocatableEnforced  4m6s  kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  4m6s  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m6s  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m6s  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeReady                4m5s  kubelet          Node minikube status is now: NodeReady
  Normal  RegisteredNode           4m2s  node-controller  Node minikube event: Registered Node minikube in Controller
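Note that the allocatable memory (8132752Ki) comes out a bit below the 8192 MB given to the VM, because Kubernetes reserves some memory for system components. Converting the Ki figure to GiB makes the gap easy to see:

```shell
# Convert the allocatable memory reported above (in Ki) to GiB.
# 1 GiB = 1048576 Ki; the result is ~7.76 GiB, slightly under the
# 8192 MB (8 GiB) allocated to the Minikube VM.
awk 'BEGIN { printf "%.2f GiB\n", 8132752 / 1048576 }'
```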
Step 2: Add YugabyteDB Helm Repo

Helm is the preferred way to install YugabyteDB on Kubernetes. Add the official YugabyteDB chart repository:

helm repo add yugabytedb https://charts.yugabyte.com
helm repo update

Example:

[root@localhost ~]# helm repo add yugabytedb https://charts.yugabyte.com
"yugabytedb" has been added to your repositories

[root@localhost ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "yugabytedb" chart repository
Update Complete. ⎈Happy Helming!⎈
Step 3: Create Namespace and Install YugabyteDB

Create a dedicated namespace and install YugabyteDB with 3 masters and 3 tservers:

kubectl create namespace yb-demo

helm install yb-demo yugabytedb/yugabyte \
  --namespace yb-demo \
  --create-namespace \
  --set replicas.master=3 \
  --set replicas.tserver=3 \
  --set resource.master.requests.cpu=200m \
  --set resource.master.requests.memory=256Mi \
  --set resource.master.limits.cpu=400m \
  --set resource.master.limits.memory=512Mi \
  --set resource.tserver.requests.cpu=400m \
  --set resource.tserver.requests.memory=512Mi \
  --set resource.tserver.limits.cpu=600m \
  --set resource.tserver.limits.memory=768Mi
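If you prefer not to repeat a long list of --set flags, the same overrides can be kept in a values file instead (a sketch; the key paths mirror the --set paths used above, and the filename values-yb-demo.yaml is just an example):

```yaml
# values-yb-demo.yaml -- equivalent to the --set flags above
replicas:
  master: 3
  tserver: 3
resource:
  master:
    requests: { cpu: 200m, memory: 256Mi }
    limits:   { cpu: 400m, memory: 512Mi }
  tserver:
    requests: { cpu: 400m, memory: 512Mi }
    limits:   { cpu: 600m, memory: 768Mi }
```

You would then install with helm install yb-demo yugabytedb/yugabyte --namespace yb-demo --create-namespace -f values-yb-demo.yaml.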

This configuration keeps total CPU requests at 1.8 cores and total memory requests around 2.25 GiB across all 6 pods, leaving plenty of headroom on a single Minikube node.
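To double-check that summary, the per-pod requests add up as follows (plain shell arithmetic, using the values from the install command above):

```shell
# Aggregate resource *requests* for the 6 pods (3 masters + 3 tservers),
# using the values passed via --set above.
masters=3; tservers=3
master_cpu_m=200;  tserver_cpu_m=400     # CPU requests, millicores
master_mem_mi=256; tserver_mem_mi=512    # memory requests, MiB

cpu_m=$(( masters * master_cpu_m + tservers * tserver_cpu_m ))
mem_mi=$(( masters * master_mem_mi + tservers * tserver_mem_mi ))

echo "total CPU requests:    ${cpu_m}m"    # 1800m = 1.8 cores
echo "total memory requests: ${mem_mi}Mi"  # 2304Mi ~ 2.25 GiB
```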

Example:

[root@localhost ~]# kubectl create namespace yb-demo
namespace/yb-demo created

[root@localhost ~]# helm install yb-demo yugabytedb/yugabyte \
  --namespace yb-demo \
  --create-namespace \
  --set replicas.master=3 \
  --set replicas.tserver=3 \
  --set resource.master.requests.cpu=200m \
  --set resource.master.requests.memory=256Mi \
  --set resource.master.limits.cpu=400m \
  --set resource.master.limits.memory=512Mi \
  --set resource.tserver.requests.cpu=400m \
  --set resource.tserver.requests.memory=512Mi \
  --set resource.tserver.limits.cpu=600m \
  --set resource.tserver.limits.memory=768Mi
NAME: yb-demo
LAST DEPLOYED: Fri Jul 11 15:29:52 2025
NAMESPACE: yb-demo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get YugabyteDB Pods by running this command:
  kubectl --namespace yb-demo get pods

2. Get list of YugabyteDB services that are running:
  kubectl --namespace yb-demo get services

3. Get information about the load balancer services:
  kubectl get svc --namespace yb-demo

4. Connect to one of the tablet server:
  kubectl exec --namespace yb-demo -it yb-tserver-0 bash

5. Run YSQL shell from inside of a tablet server:
  kubectl exec --namespace yb-demo -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.yb-demo

6. Cleanup YugabyteDB Pods
  For helm 2:
  helm delete yb-demo --purge
  For helm 3:
  helm delete yb-demo -n yb-demo
  NOTE: You need to manually delete the persistent volume
  kubectl delete pvc --namespace yb-demo -l app=yb-master
  kubectl delete pvc --namespace yb-demo -l app=yb-tserver

NOTE: The yugabyted UI is now available and is enabled by default. It requires version 2.21.0 or greater.
If you are using a custom image of YugabyteDB that is older than 2.21.0, please disable the UI by setting yugabytedUi.enabled to false.

You can check the status of the pods:

kubectl get pods -n yb-demo

All pods should eventually report STATUS: Running.

Example:

[root@localhost ~]# kubectl get pods -n yb-demo
NAME           READY   STATUS    RESTARTS   AGE
yb-master-0    3/3     Running   0          49s
yb-master-1    3/3     Running   0          48s
yb-master-2    3/3     Running   0          48s
yb-tserver-0   3/3     Running   0          49s
yb-tserver-1   3/3     Running   0          48s
yb-tserver-2   3/3     Running   0          48s
Step 4: Port Forward and Connect

Expose the YSQL service locally:

kubectl port-forward svc/yb-tserver-service -n yb-demo 5433:5433

Open a new terminal and connect to the cluster using ysqlsh:

ysqlsh -h localhost -p 5433 -U yugabyte

Example:

[root@localhost ~]# ysqlsh -h localhost -p 5433 -U yugabyte
ysqlsh (15.12-YB-2.25.2.0-b0, server 11.2-YB-2024.2.3.2-b0)
Type "help" for help.

yugabyte=#

If ysqlsh is not installed, you can download the YugabyteDB client tools from the YugabyteDB Clients page.

Step 5: Verify Cluster Functionality

In ysqlsh, run:

SELECT host, cloud, region, zone FROM yb_servers() ORDER BY host;

You should see all 3 tservers listed.

Example:

yugabyte=# SELECT host, cloud, region, zone FROM yb_servers() ORDER BY host;
                        host                        | cloud  |   region    | zone
----------------------------------------------------+--------+-------------+-------
 yb-tserver-0.yb-tservers.yb-demo.svc.cluster.local | cloud1 | datacenter1 | rack1
 yb-tserver-1.yb-tservers.yb-demo.svc.cluster.local | cloud1 | datacenter1 | rack1
 yb-tserver-2.yb-tservers.yb-demo.svc.cluster.local | cloud1 | datacenter1 | rack1
(3 rows)
Conclusion

You’ve now successfully deployed a 3-node YugabyteDB cluster on Kubernetes using Minikube. This forms a solid local development and testing environment that mirrors real-world distributed database deployments.

In the next part of this series, we’ll simulate a multi-zone setup and configure each YugabyteDB node to appear as if it’s running in a separate availability zone.

For reference, here are the commands used to install the prerequisite software mentioned at the beginning.

## ## ## ## ## ## ## ## ## ## ## 

## Install Minikube ##
dnf clean all
dnf update -y

dnf install -y qemu-kvm libvirt libvirt-daemon libvirt-daemon-driver-qemu virt-install virt-viewer virt-top
systemctl daemon-reexec
systemctl daemon-reload
systemctl enable --now virtqemud.socket virtlogd.socket virtlockd.socket
systemctl status virtqemud.socket

systemctl enable --now libvirtd
virsh net-start default
virsh net-autostart default

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
rpm -Uvh minikube-latest.x86_64.rpm

## ## ## ## ## ## ## ## ## ## ## 

## Install kubectl ##
curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/
kubectl version --client

## ## ## ## ## ## ## ## ## ## ## 

## Install Helm ##
curl -LO https://get.helm.sh/helm-v3.14.4-linux-amd64.tar.gz
tar -zxvf helm-v3.14.4-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version

## ## ## ## ## ## ## ## ## ## ## 

Have Fun!

From the observation deck at New River Gorge National Park in West Virginia, you can catch a distant view of people riding the rapids! Though, if you look closely… most of them seem to be swimming past those big rocks instead!