Benchmarking Geospatial Queries in YugabyteDB

Validating p95 / p99 Latency Without PostGIS (Part 3)

In Part 1 of this series, Geospatial Queries in YugabyteDB Without PostGIS, we showed how common geospatial queries, such as radius (“near-me”) searches and polygon containment, can be implemented in YugabyteDB without PostGIS using a layered filtering strategy:

cells → bounding box → exact distance math

In Part 2, Generate H3 / Geohash Cells for Fast Geo Filtering, we focused on the mechanics behind that approach:

  • generating H3 / geohash cells,
  • expanding cells for radius queries, and
  • producing SQL-ready inputs that plug directly into YSQL.
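
As a quick refresher on that first point, the geohash half needs nothing beyond the standard library. Below is an illustrative pure-Python encoder, a sketch for intuition only (the benchmark in this post uses the h3 package for H3 cells): longer hashes denote smaller cells, and nearby points share a prefix, which is what makes cells useful as a coarse filter.

```python
# Geohash base32 alphabet (digits plus letters, excluding a, i, l, o)
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat: float, lon: float, precision: int = 9) -> str:
    """Encode a lat/lon into a geohash by interleaving lon/lat bisection bits."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    even = True  # geohash interleaves bits starting with longitude
    while len(bits) < precision * 5:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append(1)
                lon_lo = mid
            else:
                bits.append(0)
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append(1)
                lat_lo = mid
            else:
                bits.append(0)
                lat_hi = mid
        even = not even
    chars = []
    for i in range(0, len(bits), 5):  # each base32 character packs 5 bits
        idx = 0
        for b in bits[i:i + 5]:
            idx = (idx << 1) | b
        chars.append(_BASE32[idx])
    return "".join(chars)
```

Because bits are emitted most-significant first, truncating a geohash always yields the enclosing coarser cell, so prefix matching doubles as containment testing.
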

This final post answers the last (and most important) question:

  • Does this approach actually scale, and what happens to tail latency?

Rather than focusing on averages, this tip benchmarks p95 and p99 latency, which is where geospatial workloads typically fail if candidate filtering is not done correctly.

🎯 Why Tail Latency Matters for Geospatial Queries

Geospatial queries behave differently than many transactional queries:

  • Performance is often CPU-bound, not I/O-bound.
  • The expensive part is trigonometric distance math.
  • Average latency can look fine while tail latency explodes.

If distance calculations run across too many rows, p95 and p99 latency will grow rapidly as data volume increases.

This benchmark validates that cell-based pruning keeps tail latency under control.

🧪 Benchmark Overview

This benchmark compares two query strategies:

  • ❌ Baseline – run an exact distance calculation across every row.
  • ✅ Optimized – use:
    1. H3 cell filtering
    2. bounding box filtering
    3. exact distance math only on remaining candidates

This is the same strategy introduced in Parts 1 and 2.
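
To make the staging concrete, here is a toy end-to-end sketch of the three filters in plain Python. It substitutes a naive square grid for H3 cells (an illustration only, not the production approach) but preserves the key property: cheap filters run first, and the expensive trigonometry touches only the survivors.

```python
import math

CELL_DEG = 0.03  # toy square cell size in degrees -- a stand-in for H3/geohash cells

def toy_cell(lat: float, lon: float) -> tuple:
    """Snap a point to a coarse grid cell (illustration of cell bucketing)."""
    return (math.floor(lat / CELL_DEG), math.floor(lon / CELL_DEG))

def exact_distance_m(clat: float, clon: float, lat: float, lon: float) -> float:
    """Great-circle distance on a 6,371 km sphere; clamp guards acos() float error."""
    v = (math.cos(math.radians(clat)) * math.cos(math.radians(lat)) *
         math.cos(math.radians(lon) - math.radians(clon)) +
         math.sin(math.radians(clat)) * math.sin(math.radians(lat)))
    return 6_371_000 * math.acos(max(-1.0, min(1.0, v)))

def radius_query(points, clat, clon, radius_m):
    # Stage 1: cell filter -- keep only the center cell and its 8 neighbors.
    ci, cj = toy_cell(clat, clon)
    keep = {(ci + di, cj + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)}
    stage1 = [p for p in points if toy_cell(*p) in keep]
    # Stage 2: bounding box -- two cheap per-axis comparisons.
    dlat = radius_m / 111_320
    dlon = radius_m / (111_320 * math.cos(math.radians(clat)))
    stage2 = [(lat, lon) for lat, lon in stage1
              if abs(lat - clat) <= dlat and abs(lon - clon) <= dlon]
    # Stage 3: exact trigonometric distance, only on the survivors.
    return [(lat, lon) for lat, lon in stage2
            if exact_distance_m(clat, clon, lat, lon) <= radius_m]
```

Each stage only shrinks the candidate list, so correctness hinges on the cell set and the bounding box being at least as large as the true radius; the real H3-based version below has the same shape.
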

🛠️ Test Environment
  • VM: 6 vCPU / 11 GB RAM
  • YugabyteDB (YSQL)
  • Single node
  • Query centers randomized per execution

Absolute latencies will vary by hardware and cluster topology. The results below should be interpreted as relative improvements.

🗂️ Schema (recap)

CREATE TABLE geo_points (
  id   BIGSERIAL PRIMARY KEY,
  lat  DOUBLE PRECISION NOT NULL,
  lon  DOUBLE PRECISION NOT NULL,
  cell TEXT NOT NULL
);

CREATE INDEX geo_points_cell_idx
  ON geo_points (cell);
🔍 Queries Under Test

Baseline (exact distance only)

SELECT count(*)
FROM geo_points
WHERE (
  6371000 * acos(
    cos(radians(:lat)) * cos(radians(lat)) *
    cos(radians(lon) - radians(:lon)) +
    sin(radians(:lat)) * sin(radians(lat))
  )
) <= :radius_m;
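
For sanity-checking results client-side, the same math is easy to mirror in Python. One caveat worth noting: float error can push the acos() argument a hair outside [-1, 1] when two points nearly coincide, so the sketch below clamps it (the SQL above does not, which is fine in practice but can surface as a NaN for coincident points).

```python
import math

R_M = 6_371_000  # mean Earth radius in meters, matching the SQL constant

def distance_sql_mirror_m(qlat: float, qlon: float, lat: float, lon: float) -> float:
    """Same spherical law-of-cosines expression as the baseline query.
    The clamp guards acos() against tiny float excursions outside [-1, 1];
    a haversine formulation is a numerically steadier alternative for
    very small separations."""
    v = (math.cos(math.radians(qlat)) * math.cos(math.radians(lat)) *
         math.cos(math.radians(lon) - math.radians(qlon)) +
         math.sin(math.radians(qlat)) * math.sin(math.radians(lat)))
    return R_M * math.acos(max(-1.0, min(1.0, v)))
```

One degree of latitude works out to roughly 111.2 km on this sphere, which is a handy smoke test for any reimplementation.
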
				
			

Optimized (cells → bounding box → exact distance)

SELECT count(*)
FROM geo_points
WHERE cell = ANY(:cells)
  AND lat BETWEEN :min_lat AND :max_lat
  AND lon BETWEEN :min_lon AND :max_lon
  AND (
    6371000 * acos(
      cos(radians(:lat)) * cos(radians(lat)) *
      cos(radians(lon) - radians(:lon)) +
      sin(radians(:lat)) * sin(radians(lat))
    )
  ) <= :radius_m;
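
The :cells array comes from expanding the query point's H3 cell with grid_disk, as covered in Part 2. The bounding-box placeholders take only a few lines to derive; this sketch mirrors the bounding_box helper in the full script at the end of this post:

```python
import math

def radius_to_bbox(center_lat: float, center_lon: float, radius_m: float):
    """Derive :min_lat/:max_lat/:min_lon/:max_lon for a radius in meters.
    One degree of latitude is ~111,320 m; longitude degrees shrink by cos(lat)."""
    dlat = radius_m / 111_320
    dlon = radius_m / (111_320 * math.cos(math.radians(center_lat)))
    return (center_lat - dlat, center_lat + dlat,
            center_lon - dlon, center_lon + dlon)
```

Note the longitude span widens with latitude; a box that ignores cos(lat) would silently under-cover queries away from the equator.
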
				
			
📈 Benchmark Results
Candidate-set size (optimized stages 1+2 only)

Rows   Avg candidates   p50   p95   p99
100k   82.0             81    102   107
1M     842.5            850   904   912

Candidate counts scale with data density, not with table size directly: ten times the rows over the same area produced roughly ten times the candidates (82 → 842), while the baseline still had to scan every row.

Latency (p50 / p95 / p99)

Rows   Query                               p50 (ms)   p95 (ms)   p99 (ms)   Avg (ms)
100k   Baseline (exact distance only)      136.72     198.33     219.71     145.01
100k   Optimized (cells → bbox → exact)    1.87       2.28       2.37       1.91
1M     Baseline (exact distance only)      1339.41    1767.46    1940.50    1382.06
1M     Optimized (cells → bbox → exact)    10.88      13.30      13.53      11.02
🚀 Tail-Latency Improvement

At 100k rows:
  • p95: ~87× faster
  • p99: ~93× faster

At 1M rows:
  • p95: ~133× faster
  • p99: ~143× faster
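
These multipliers are simply ratios of the p95/p99 columns in the latency table:

```python
# p95/p99 latencies (ms) copied from the latency table above
baseline  = {"100k": (198.33, 219.71), "1M": (1767.46, 1940.50)}
optimized = {"100k": (2.28, 2.37),     "1M": (13.30, 13.53)}

# speedup = baseline latency / optimized latency, per percentile
speedups = {
    size: (baseline[size][0] / optimized[size][0],   # p95 ratio
           baseline[size][1] / optimized[size][1])   # p99 ratio
    for size in baseline
}
```

Also worth noticing: the gap widens as the table grows, because the baseline's cost tracks total rows while the optimized query's cost tracks candidates.
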
🧠 What This Demonstrates

  • The baseline query scales poorly because every row executes expensive math.
  • The optimized query scales predictably because:
    ○ cell filtering collapses millions of rows into hundreds,
    ○ bounding boxes remove obvious misses, and
    ○ distance math runs only where it matters.

Key takeaway: tail latency scales with candidate count, not table size, when you prune early.
🧪 Full Benchmark Script (Runnable)

Below is the complete benchmark script used to generate the results above.

What the script does:
  1. Creates the table and index
  2. Bulk-loads synthetic geo points with H3 cells
  3. Benchmarks the baseline and optimized queries
  4. Reports candidate-set size, p50 / p95 / p99 latency, and relative speedup
Save as geo_benchmark.py:

#!/usr/bin/env python3
"""
geo_benchmark.py

End-to-end benchmark for geospatial "near-me" radius queries in YugabyteDB WITHOUT PostGIS.

What it does:
  1) Creates geo_points table + index
  2) Loads synthetic lat/lon points + H3 cell (FAST bulk insert via execute_values)
  3) Benchmarks two queries:
       A) Baseline: exact distance check across all rows
       B) Optimized: cell filter -> bounding box -> exact distance
  4) Reports p50/p95/p99 latencies + candidate-set stats

Requirements:
  pip install h3 psycopg2-binary
"""

from __future__ import annotations

import argparse
import math
import random
import statistics
import time
from typing import List, Tuple

import h3
import psycopg2
from psycopg2.extras import execute_values


# ----------------------------
# H3 compatibility helpers
# ----------------------------
def h3_disk(cell: str, k: int):
    """Return the set of H3 cells within k steps of `cell` (h3 v4+: grid_disk; older: k_ring)."""
    if hasattr(h3, "grid_disk"):
        return h3.grid_disk(cell, k)
    if hasattr(h3, "k_ring"):
        return h3.k_ring(cell, k)
    raise RuntimeError("Unsupported h3 library: missing grid_disk/k_ring APIs")


def latlon_to_cell(lat: float, lon: float, res: int) -> str:
    """Return the H3 cell for a lat/lon (h3 v4+: latlng_to_cell; older: geo_to_h3)."""
    if hasattr(h3, "latlng_to_cell"):
        return h3.latlng_to_cell(lat, lon, res)
    if hasattr(h3, "geo_to_h3"):
        return h3.geo_to_h3(lat, lon, res)
    raise RuntimeError("Unsupported h3 library: missing latlng_to_cell/geo_to_h3 APIs")


# ----------------------------
# Geo helpers
# ----------------------------
def bounding_box(center_lat: float, center_lon: float, radius_m: float) -> Tuple[float, float, float, float]:
    """
    Approximate bounding box for a radius in meters.
    lat delta: meters / 111_320
    lon delta: meters / (111_320 * cos(lat))
    """
    lat_rad = math.radians(center_lat)
    dlat = radius_m / 111_320.0
    dlon = radius_m / (111_320.0 * max(math.cos(lat_rad), 1e-12))
    return center_lat - dlat, center_lat + dlat, center_lon - dlon, center_lon + dlon


def rand_point_near(center_lat: float, center_lon: float, spread_deg: float) -> Tuple[float, float]:
    """Uniform random point in a square around center (synthetic but reproducible)."""
    return (
        center_lat + (random.random() - 0.5) * spread_deg,
        center_lon + (random.random() - 0.5) * spread_deg,
    )


# ----------------------------
# Stats helpers
# ----------------------------
def percentile(sorted_vals: List[float], p: float) -> float:
    """Nearest-rank percentile (deterministic). Input must be sorted."""
    if not sorted_vals:
        return float("nan")
    if p <= 0:
        return sorted_vals[0]
    if p >= 100:
        return sorted_vals[-1]
    k = math.ceil((p / 100.0) * len(sorted_vals)) - 1
    k = max(0, min(k, len(sorted_vals) - 1))
    return sorted_vals[k]


def summarize_latencies_ms(latencies_ms: List[float]) -> str:
    vals = sorted(latencies_ms)
    return (
        f"n={len(vals)} "
        f"p50={percentile(vals, 50):.2f}ms "
        f"p95={percentile(vals, 95):.2f}ms "
        f"p99={percentile(vals, 99):.2f}ms "
        f"avg={statistics.fmean(vals):.2f}ms"
    )


# ----------------------------
# DB setup / load
# ----------------------------
DDL = """
CREATE TABLE IF NOT EXISTS geo_points (
  id   BIGSERIAL PRIMARY KEY,
  lat  DOUBLE PRECISION NOT NULL,
  lon  DOUBLE PRECISION NOT NULL,
  cell TEXT NOT NULL
);
"""

INDEX = "CREATE INDEX IF NOT EXISTS geo_points_cell_idx ON geo_points (cell);"


def connect(args) -> psycopg2.extensions.connection:
    kwargs = dict(
        host=args.host,
        port=args.port,
        dbname=args.dbname,
        user=args.user,
    )
    if args.password:
        kwargs["password"] = args.password
    return psycopg2.connect(**kwargs)


def prepare_schema(cur):
    cur.execute(DDL)
    cur.execute(INDEX)


def load_data(cur, *, rows: int, center_lat: float, center_lon: float, spread_deg: float,
              h3_res: int, truncate: bool, batch_size: int):
    """
    Fast bulk load using psycopg2.extras.execute_values().
    """
    if truncate:
        cur.execute("TRUNCATE TABLE geo_points;")

    inserted = 0
    while inserted < rows:
        n = min(batch_size, rows - inserted)
        batch = []
        for _ in range(n):
            lat, lon = rand_point_near(center_lat, center_lon, spread_deg)
            cell = latlon_to_cell(lat, lon, h3_res)
            batch.append((lat, lon, cell))

        execute_values(
            cur,
            "INSERT INTO geo_points (lat, lon, cell) VALUES %s",
            batch,
            page_size=len(batch),
        )

        inserted += n

    cur.execute("ANALYZE geo_points;")


# ----------------------------
# Queries
# ----------------------------
BASELINE_SQL = """
SELECT count(*)
FROM geo_points
WHERE (
  6371000 * acos(
    cos(radians(%s)) * cos(radians(lat)) *
    cos(radians(lon) - radians(%s)) +
    sin(radians(%s)) * sin(radians(lat))
  )
) <= %s;
"""

CANDIDATES_SQL = """
SELECT count(*)
FROM geo_points
WHERE cell = ANY(%s)
  AND lat BETWEEN %s AND %s
  AND lon BETWEEN %s AND %s;
"""

OPTIMIZED_SQL = """
SELECT count(*)
FROM geo_points
WHERE cell = ANY(%s)
  AND lat BETWEEN %s AND %s
  AND lon BETWEEN %s AND %s
  AND (
    6371000 * acos(
      cos(radians(%s)) * cos(radians(lat)) *
      cos(radians(lon) - radians(%s)) +
      sin(radians(%s)) * sin(radians(lat))
    )
  ) <= %s;
"""


def run_benchmark(cur, *, sql: str, param_fn, iterations: int, warmup: int) -> List[float]:
    for _ in range(warmup):
        params = param_fn()
        cur.execute(sql, params)
        _ = cur.fetchone()

    lat_ms: List[float] = []
    for _ in range(iterations):
        params = param_fn()
        t0 = time.perf_counter()
        cur.execute(sql, params)
        _ = cur.fetchone()
        t1 = time.perf_counter()
        lat_ms.append((t1 - t0) * 1000.0)
    return lat_ms


# ----------------------------
# Main
# ----------------------------
def main():
    ap = argparse.ArgumentParser(description="Benchmark geospatial near-me queries in YugabyteDB without PostGIS.")
    ap.add_argument("--host", default="127.0.0.1")
    ap.add_argument("--port", type=int, default=5433)
    ap.add_argument("--dbname", default="yugabyte")
    ap.add_argument("--user", default="yugabyte")
    ap.add_argument("--password", default="")

    ap.add_argument("--seed", type=int, default=42)

    # Data generation
    ap.add_argument("--rows", type=int, default=1_000_000)
    ap.add_argument("--truncate", action="store_true")
    ap.add_argument("--center-lat", type=float, default=40.4406)
    ap.add_argument("--center-lon", type=float, default=-79.9959)
    ap.add_argument("--spread-deg", type=float, default=1.0)
    ap.add_argument("--h3-res", type=int, default=9)
    ap.add_argument("--k", type=int, default=2)
    ap.add_argument("--batch-size", type=int, default=20_000)

    # Query workload
    ap.add_argument("--radius-m", type=float, default=2000.0)
    ap.add_argument("--iterations", type=int, default=300)
    ap.add_argument("--warmup", type=int, default=20)
    ap.add_argument("--candidate-samples", type=int, default=100)
    ap.add_argument("--random-centers", action="store_true")

    args = ap.parse_args()
    random.seed(args.seed)

    print("=== Config ===")
    print(f"DB: {args.host}:{args.port}/{args.dbname} user={args.user}")
    print(f"Load: rows={args.rows} truncate={args.truncate} center=({args.center_lat},{args.center_lon}) spread_deg={args.spread_deg}")
    print(f"H3: res={args.h3_res} k={args.k}")
    print(f"Workload: radius_m={args.radius_m} warmup={args.warmup} iterations={args.iterations} random_centers={args.random_centers}")
    print(f"Loader: batch_size={args.batch_size}")
    print()

    conn = connect(args)
    conn.autocommit = True

    with conn.cursor() as cur:
        prepare_schema(cur)

        print("=== Loading data ===")
        t0 = time.perf_counter()
        load_data(
            cur,
            rows=args.rows,
            center_lat=args.center_lat,
            center_lon=args.center_lon,
            spread_deg=args.spread_deg,
            h3_res=args.h3_res,
            truncate=args.truncate,
            batch_size=args.batch_size,
        )
        t1 = time.perf_counter()
        print(f"Loaded {args.rows} rows in {(t1 - t0):.2f}s")
        print()

        def pick_center():
            if not args.random_centers:
                return args.center_lat, args.center_lon
            return rand_point_near(args.center_lat, args.center_lon, args.spread_deg)

        def baseline_params():
            clat, clon = pick_center()
            return (clat, clon, clat, args.radius_m)

        def optimized_params():
            clat, clon = pick_center()
            center_cell = latlon_to_cell(clat, clon, args.h3_res)
            cells = sorted(h3_disk(center_cell, args.k))
            min_lat, max_lat, min_lon, max_lon = bounding_box(clat, clon, args.radius_m)
            return (cells, min_lat, max_lat, min_lon, max_lon, clat, clon, clat, args.radius_m)

        def candidates_params():
            clat, clon = pick_center()
            center_cell = latlon_to_cell(clat, clon, args.h3_res)
            cells = sorted(h3_disk(center_cell, args.k))
            min_lat, max_lat, min_lon, max_lon = bounding_box(clat, clon, args.radius_m)
            return (cells, min_lat, max_lat, min_lon, max_lon)

        print("=== Candidate-set sampling (optimized stage 1+2 only) ===")
        cand_counts = []
        for _ in range(args.candidate_samples):
            cur.execute(CANDIDATES_SQL, candidates_params())
            cand_counts.append(cur.fetchone()[0])

        cand_counts_sorted = sorted(cand_counts)
        print(
            f"samples={args.candidate_samples} "
            f"avg_candidates={statistics.fmean(cand_counts_sorted):.1f} "
            f"p50={percentile(cand_counts_sorted, 50):.0f} "
            f"p95={percentile(cand_counts_sorted, 95):.0f} "
            f"p99={percentile(cand_counts_sorted, 99):.0f}"
        )
        print()

        print("=== Benchmark A: Baseline (exact distance only) ===")
        baseline_lat = run_benchmark(
            cur,
            sql=BASELINE_SQL,
            param_fn=baseline_params,
            iterations=args.iterations,
            warmup=args.warmup,
        )
        print(summarize_latencies_ms(baseline_lat))
        print()

        print("=== Benchmark B: Optimized (cells → bbox → exact distance) ===")
        optimized_lat = run_benchmark(
            cur,
            sql=OPTIMIZED_SQL,
            param_fn=optimized_params,
            iterations=args.iterations,
            warmup=args.warmup,
        )
        print(summarize_latencies_ms(optimized_lat))
        print()

        b_p95 = percentile(sorted(baseline_lat), 95)
        o_p95 = percentile(sorted(optimized_lat), 95)
        b_p99 = percentile(sorted(baseline_lat), 99)
        o_p99 = percentile(sorted(optimized_lat), 99)

        print("=== Summary ===")
        print(f"Baseline   p95={b_p95:.2f}ms  p99={b_p99:.2f}ms")
        print(f"Optimized  p95={o_p95:.2f}ms  p99={o_p99:.2f}ms")
        if o_p95 > 0:
            print(f"p95 speedup: {b_p95 / o_p95:.1f}x")
        if o_p99 > 0:
            print(f"p99 speedup: {b_p99 / o_p99:.1f}x")
        print()

    conn.close()


if __name__ == "__main__":
    main()

▶️ How to Run the Benchmark
Install dependencies:

pip install h3 psycopg2-binary
Small test:

python geo_benchmark.py \
  --rows 100000 \
  --truncate \
  --iterations 100 \
  --warmup 20 \
  --spread-deg 0.5 \
  --random-centers
Larger run (used for this post):

python geo_benchmark.py \
  --rows 1000000 \
  --truncate \
  --iterations 50 \
  --warmup 10 \
  --spread-deg 0.5 \
  --random-centers
🧾 Summary: Closing the Loop

Across these three tips, we showed how YugabyteDB can support real-world geospatial workloads without PostGIS: Part 1 laid out the layered filtering strategy, Part 2 generated the H3 / geohash cell inputs, and this post validated p95 / p99 latency at scale.

The result is a geospatial strategy that is:

  • scalable,
  • predictable,
  • transparent, and
  • fully controllable using standard SQL.

Have Fun!