To manage YugabyteDB, you can use yugabyted. yugabyted acts as a parent server across the YB-TServer and YB-Master servers. yugabyted also provides a UI similar to the YugabyteDB Anywhere UI, with a data placement map and a metrics dashboard.
yugabyted makes it quick and convenient to stand up a YugabyteDB cluster for some fast tests, like trying out a cool new feature that just landed in the latest release!
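For instance, bringing up a throwaway single-node cluster is just a couple of commands (assuming yugabyted is on your PATH; the yugabyted UI is served on port 15433 by default):
# Start a single-node cluster with all defaults
yugabyted start
# Show the status, connection endpoints, and UI URL
yugabyted status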
Example:
I create and destroy YugabyteDB clusters all of the time to try out new features. To save time, I have created several scripts that allow me to work with a simulated multi-node cluster on a single host.
For this example, I am looking to do some testing of the READ COMMITTED isolation level. Below is a script to create a multi-node cluster on a single host using the local loopback addresses.
Note the TServer gFlags (the ts_gf variable in the script) that will be applied to each node to enable READ COMMITTED.
#!/bin/bash
# Setup
# Cluster nodes
# Format: IP,cloud.region.zone,base directory path
all_nodes=(
"127.0.0.1,aws.us-east-1.us-east-1a,~/yb01"
"127.0.0.2,aws.us-east-2.us-east-2a,~/yb02"
"127.0.0.3,aws.us-west-2.us-west-2a,~/yb03"
"127.0.0.4,aws.us-east-1.us-east-1a,~/yb04"
"127.0.0.5,aws.us-east-2.us-east-2a,~/yb05"
"127.0.0.6,aws.us-west-2.us-west-2a,~/yb06"
"127.0.0.7,aws.us-east-1.us-east-1a,~/yb07"
"127.0.0.8,aws.us-east-2.us-east-2a,~/yb08"
"127.0.0.9,aws.us-west-2.us-west-2a,~/yb09"
)
# Fault Domain (region, zone or node)
fd="region"
# TSERVER gFlags
ts_gf="yb_enable_read_committed_isolation=true,enable_deadlock_detection=true,enable_wait_queues=true"
# MASTER gFlags
ms_gf=""
# Start first node
printf "Starting primary node: %s\n" $(echo ${all_nodes[0]} | awk -F',' '{ print $1 }')
yugabyted start --advertise_address=$(echo ${all_nodes[0]} | awk -F',' '{ print $1 }') --cloud_location $(echo ${all_nodes[0]} | awk -F',' '{ print $2 }') --base_dir=$(echo ${all_nodes[0]} | awk -F',' '{ print $3 }') --fault_tolerance=$fd --tserver_flags="$ts_gf" --master_flags="$ms_gf" > /dev/null
until pg_isready -h $(echo ${all_nodes[0]} | awk -F',' '{ print $1 }') -p 5433 ; do sleep 1 ; done
# Start remaining nodes in parallel
total_nodes=${#all_nodes[@]}
num_nodes_to_add=$(($total_nodes - 1))
for (( n=1; n<=$num_nodes_to_add; n++ ))
do
printf "Starting node: %s\n" $(echo ${all_nodes[$n]} | awk -F',' '{ print $1 }')
yugabyted start --advertise_address=$(echo ${all_nodes[$n]} | awk -F',' '{ print $1 }') --join=$(echo ${all_nodes[0]} | awk -F',' '{ print $1 }') --cloud_location $(echo ${all_nodes[$n]} | awk -F',' '{ print $2 }') --base_dir=$(echo ${all_nodes[$n]} | awk -F',' '{ print $3 }') --fault_tolerance=$fd --tserver_flags="$ts_gf" --master_flags="$ms_gf" > /dev/null &
sleep 2
done
# Wait for all nodes to report in (check every 5 seconds, up to ~5 minutes)
nc=`ysqlsh -h $(echo ${all_nodes[0]} | awk -F',' '{ print $1 }') -U yugabyte -Atc "SELECT COUNT(*) FROM yb_servers();"`
c=0
while [ $nc -lt $total_nodes ] && [ $c -le 60 ];
do
echo "Status: $nc of $total_nodes up..."
sleep 5
((c++))
nc=`ysqlsh -h $(echo ${all_nodes[0]} | awk -F',' '{ print $1 }') -U yugabyte -Atc "SELECT COUNT(*) FROM yb_servers();"`
done
if [ "$nc" -eq "$total_nodes" ]
then
echo "All $nc nodes are up!"
else
echo "Only $nc nodes have started successfully..."
fi
# Apply the data placement constraint for the chosen fault domain
yugabyted configure data_placement --base_dir=$(echo ${all_nodes[0]} | awk -F',' '{ print $3 }') --fault_tolerance=$fd
Running the script creates a nine-node cluster with three nodes per region, where the cluster can survive a region outage.
[root@localhost yb2]# ./create_all_simple.sh
Starting primary node: 127.0.0.1
127.0.0.1:5433 - no response
127.0.0.1:5433 - accepting connections
Starting node: 127.0.0.2
Starting node: 127.0.0.3
Starting node: 127.0.0.4
Starting node: 127.0.0.5
Starting node: 127.0.0.6
Starting node: 127.0.0.7
Starting node: 127.0.0.8
Starting node: 127.0.0.9
All 9 nodes are up!
+---------------------------------------------------------------------------------------------------+
| yugabyted |
+---------------------------------------------------------------------------------------------------+
| Status : Configuration successful. Primary data placement is geo-redundant. |
| Fault Tolerance : Primary Cluster can survive at most any 1 region failure. |
+---------------------------------------------------------------------------------------------------+
[root@localhost yb2]# ysqlsh -h 127.0.0.1 -c "SELECT host, cloud, region, zone FROM yb_servers() ORDER BY cloud, region, zone;"
host | cloud | region | zone
-----------+-------+-----------+------------
127.0.0.1 | aws | us-east-1 | us-east-1a
127.0.0.4 | aws | us-east-1 | us-east-1a
127.0.0.7 | aws | us-east-1 | us-east-1a
127.0.0.5 | aws | us-east-2 | us-east-2a
127.0.0.2 | aws | us-east-2 | us-east-2a
127.0.0.8 | aws | us-east-2 | us-east-2a
127.0.0.9 | aws | us-west-2 | us-west-2a
127.0.0.3 | aws | us-west-2 | us-west-2a
127.0.0.6 | aws | us-west-2 | us-west-2a
(9 rows)
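With all nine nodes up, here is a quick sanity check that the READ COMMITTED gFlag actually took effect; this is just a minimal sketch using the yb_effective_transaction_isolation_level setting, which reports the isolation level YugabyteDB is really using:
# With yb_enable_read_committed_isolation=true this should report "read committed";
# without it, READ COMMITTED requests fall back to the snapshot (repeatable read) level.
ysqlsh -h 127.0.0.1 -U yugabyte -c "SHOW yb_effective_transaction_isolation_level;"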
Keep in mind that this database is running on a single server and won’t survive a region outage… We’re just “simulating” the fault domains.
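That said, you can still simulate a region outage by stopping the three nodes that make up one region. Here is a sketch using us-west-2 and the base directories from the script above (yugabyted should pick each node's saved configuration back up from its base directory on restart):
# "Fail" the us-west-2 region by stopping its three nodes
yugabyted stop --base_dir=~/yb03
yugabyted stop --base_dir=~/yb06
yugabyted stop --base_dir=~/yb09
# Bring the region back when you're done
yugabyted start --base_dir=~/yb03
yugabyted start --base_dir=~/yb06
yugabyted start --base_dir=~/yb09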
And here is the script I use to tear down (i.e., destroy) the cluster:
#!/bin/bash
# Setup
# Cluster nodes
# Format: IP,cloud.region.zone,base directory path
all_nodes=(
"yugabytedb.tech,aws.us-east-2.us-east-2a,~/yb01"
"127.0.0.1,aws.us-east-2.us-east-2a,~/yb01"
"127.0.0.2,aws.us-east-2.us-east-2b,~/yb02"
"127.0.0.3,aws.us-east-2.us-east-2c,~/yb03"
"127.0.0.4,aws.us-east-1.us-east-1a,~/yb04"
"127.0.0.5,aws.us-east-2.us-east-1a,~/yb05"
"127.0.0.6,aws.us-west-1.us-west-1a,~/yb06"
"127.0.0.7,aws.us-east-1.us-east-1a,~/yb07"
"127.0.0.8,aws.us-east-2.us-east-1a,~/yb08"
"127.0.0.9,aws.us-west-1.us-west-1a,~/yb09"
)
# Destroy all nodes
total_nodes=${#all_nodes[@]}
for (( n=0; n<$total_nodes; n++ ))
do
printf "Destroying node: %s\n" $(echo ${all_nodes[$n]} | awk -F',' '{ print $1 }')
yugabyted destroy --base_dir=$(echo ${all_nodes[$n]} | awk -F',' '{ print $3 }') > /dev/null
done
[root@localhost yb2]# ./destroy_all.sh
Destroying node: 127.0.0.1
Destroying node: 127.0.0.2
Destroying node: 127.0.0.3
Destroying node: 127.0.0.4
Destroying node: 127.0.0.5
Destroying node: 127.0.0.6
Destroying node: 127.0.0.7
Destroying node: 127.0.0.8
Destroying node: 127.0.0.9
Have Fun!