Understanding Smart-Driver Pooling with Multiple Connection Pools (and Important Known Issues)

In a [previous tip], we enforced a global max connection limit while load balancing Npgsql connections across a YugabyteDB cluster.

That earlier post focused on controlling total concurrency across the cluster.

In this tip, we go one level deeper.

Instead of a single pool, we explore what happens when an application uses multiple connection strings, multiple schemas, and multiple connection pools, all while using the YugabyteDB Npgsql Smart Driver.

This lets us answer questions like:

  ● How do multiple pools behave when load balancing is enabled?

  ● Can each pool have its own Maximum Pool Size?

  ● Does search_path remain isolated per connection?

  ● What happens when YSQL Connection Manager is enabled?

  ● How can we prove a pooling bug is fixed once a patch is released?

⚠️ Important

This tip intentionally documents current known issues in smart-driver pooling that customers have encountered in the field.

These issues are actively being worked on by Yugabyte engineering.

Once fixes are available, this blog will be updated, and you can simply re-run the program below to verify that the bug has been squashed.

🎯 Goal of This Demo

We want to simulate a realistic schema-based multi-tenant application, where:

  ● Each tenant uses:

    ○ A different schema

    ○ A different connection string

    ○ A different connection pool

  ● Each pool has its own:

    ○ Maximum Pool Size

    ○ search_path

  ● All pools:

    ○ Connect to the same YugabyteDB cluster

    ○ Use client-side load balancing via the smart driver

    ○ Are active in parallel

Example configuration:

  Pool    Schema     MaxPoolSize
  Conn1   tenant1    2
  Conn2   tenant2    4
  Conn3   tenant3    8

All pools query the same table name (tenant_data), but rely on search_path to resolve it to the correct schema.

We then intentionally overload the pools with many concurrent workers to observe:

  ● Whether pool limits are respected

  ● Whether connections stay isolated

  ● Whether search_path is honored

  ● How behavior changes when Connection Manager is enabled

❓Why the YugabyteDB Npgsql Smart Driver?

The YugabyteDB Npgsql Smart Driver (NpgsqlYugabyteDB) extends the standard PostgreSQL Npgsql driver with cluster awareness.

Specifically, it adds:

  ● Client-side load balancing

    ○ Connections are distributed across available YSQL nodes

    ○ No external load balancer required

  ● Topology awareness

    ○ The driver can adapt as nodes are added or removed

  ● Transparent integration

    ○ Works with existing Npgsql APIs

    ○ No application-level routing logic needed

This makes the smart driver ideal for:

  ● Horizontally scalable YugabyteDB clusters

  ● Applications that want simple, built-in load balancing

  ● Scenarios where connection behavior must be observable and testable
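For reference, both of these behaviors are driven purely by connection-string settings; no code changes are needed beyond using the smart driver's namespace. A minimal sketch (the host address and the Topology Keys value are placeholders for your environment; Topology Keys is optional and only needed if you want zone-aware routing):

```
Host=10.0.0.1;Port=5433;Database=yugabyte;Username=yugabyte;Password=yugabyte;Load Balance Hosts=true;Topology Keys=aws.us-east-1.us-east-1a
```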

However, as with any advanced client-side feature, it’s important to validate behavior under load… which is exactly what this tip is designed to do.

☑︎ Prerequisites

You’ll need:

  ● A running YugabyteDB cluster with YSQL and the YugabyteDB Connection Manager enabled

  ● Network access from your dev box to the cluster

  ● .NET 6+ SDK

  ● The YugabyteDB Npgsql Smart Driver (NpgsqlYugabyteDB package)

We’ll assume:

  ● Database: yugabyte

  ● User: yugabyte

  ● Password: yugabyte

  ● Port: 5433

(You can tweak all of these via command-line arguments.)

🏗 Step 1: Database Setup (Schemas + Same-Named Tables)

In ysqlsh, create three schemas and a tenant_data table in each one:

CREATE SCHEMA IF NOT EXISTS tenant1;
CREATE SCHEMA IF NOT EXISTS tenant2;
CREATE SCHEMA IF NOT EXISTS tenant3;

CREATE TABLE IF NOT EXISTS tenant1.tenant_data (id INT PRIMARY KEY, value TEXT);
CREATE TABLE IF NOT EXISTS tenant2.tenant_data (id INT PRIMARY KEY, value TEXT);
CREATE TABLE IF NOT EXISTS tenant3.tenant_data (id INT PRIMARY KEY, value TEXT);

INSERT INTO tenant1.tenant_data VALUES (1, 'Hello from tenant1');
INSERT INTO tenant2.tenant_data VALUES (1, 'Hello from tenant2');
INSERT INTO tenant3.tenant_data VALUES (1, 'Hello from tenant3');

This setup is critical because it allows us to prove correctness:

  ● If search_path is honored, each pool will read its own value.

  ● If connections leak across pools or schemas, we’ll see it immediately.

Quick check:

SET search_path TO tenant1, public;
SELECT current_schema(), * FROM tenant_data;

SET search_path TO tenant2, public;
SELECT current_schema(), * FROM tenant_data;

SET search_path TO tenant3, public;
SELECT current_schema(), * FROM tenant_data;

You should see three different value strings depending on the schema.
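A related detail worth keeping in mind: search_path only governs unqualified names. A schema-qualified name always resolves directly, no matter what search_path is set to, which is handy for sanity checks (standard PostgreSQL behavior, shown here with the schemas from this demo):

```sql
SET search_path TO tenant1, public;

-- Unqualified: resolved via search_path, so this reads tenant1.tenant_data
SELECT value FROM tenant_data WHERE id = 1;

-- Schema-qualified: ignores search_path, reads tenant3.tenant_data directly
SELECT value FROM tenant3.tenant_data WHERE id = 1;
```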

🤓 Step 2: Create the C# Project and Add the Smart Driver

Create a new console app and add the YugabyteDB Npgsql Smart Driver:

dotnet new console -n YbSmartDriverSearchPathDemo
cd YbSmartDriverSearchPathDemo

dotnet add package NpgsqlYugabyteDB

Per the docs, you add "using YBNpgsql;" to your source file and then use NpgsqlConnection from that namespace.

🧠 Step 3: How the Program Works (Why These Examples Matter)

This program is intentionally structured to surface pooling bugs clearly.

Key design choices:

  ● All pools run in parallel

    ○ We do not open Conn1, then Conn2, then Conn3

    ○ All workers are active at the same time

  ● Workers outnumber pool size

    ○ This forces contention and exposes pooling behavior

  ● Connections are held open

    ○ Each worker sleeps while holding the connection

    ○ Makes concurrency visible

  ● Explicit instrumentation

    ○ Tracks:

      ▪ Workers started / succeeded / failed

      ▪ Max concurrent active connections

      ▪ Schema matches vs mismatches

  ● Log file + summary

    ○ Log file shows per-worker behavior

    ○ Summary shows whether invariants were violated

📄 Step 4: Full Program (Program.cs)

Below is a complete Program.cs that you can drop into the project. Simply replace the existing Program.cs file in the YbSmartDriverSearchPathDemo directory with this code.

// Program.cs
// YugabyteDB Npgsql Smart Driver - Multi Pool / Search Path Demo
//
// Demonstrates:
//  1) YugabyteDB Npgsql Smart Driver load balancing (Load Balance Hosts=true)
//  2) Multiple connection strings => (ideally) multiple pools
//  3) Forced application_name AFTER OpenAsync (Option B - decisive)
//  4) Forced search_path per pool + verification by reading same table name from different schemas
//
// Logging behavior:
//  - Header + pool config: console + file
//  - Per-worker/per-iteration details: file only
//  - Summary: console only

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using YBNpgsql; // YugabyteDB Npgsql Smart Driver

internal enum PoolBehavior
{
    Block,
    Failfast
}

internal sealed class PoolConfig
{
    public string Name { get; init; } = "";
    public string Schema { get; init; } = "";
    public int MaxPoolSize { get; init; }
    public string AppName { get; init; } = "";
    public string ConnectionString { get; init; } = "";
}

internal sealed class PoolStats
{
    public int WorkersStarted;
    public int WorkersSucceeded;
    public int WorkersFailed;

    public long SchemaMatches;
    public long SchemaMismatches;

    private int _activeNow;
    public int MaxConcurrentActive;

    public void IncActive()
    {
        var now = Interlocked.Increment(ref _activeNow);
        UpdateMax(now);
    }

    public void DecActive()
    {
        Interlocked.Decrement(ref _activeNow);
    }

    public int ActiveNow => Volatile.Read(ref _activeNow);

    private void UpdateMax(int now)
    {
        while (true)
        {
            var max = Volatile.Read(ref MaxConcurrentActive);
            if (now <= max) return;
            if (Interlocked.CompareExchange(ref MaxConcurrentActive, now, max) == max) return;
        }
    }
}

internal sealed class DualLogger : IDisposable
{
    private readonly StreamWriter _file;
    private readonly object _lock = new();

    public DualLogger(string logFilePath)
    {
        _file = new StreamWriter(File.Open(logFilePath, FileMode.Create, FileAccess.Write, FileShare.Read))
        {
            AutoFlush = true,
            NewLine = "\n",
        };
    }

    // goes to console + file
    public void Info(string msg)
    {
        lock (_lock)
        {
            Console.WriteLine(msg);
            _file.WriteLine(msg);
        }
    }

    // goes to file only
    public void Verbose(string msg)
    {
        lock (_lock)
        {
            _file.WriteLine(msg);
        }
    }

    public void Dispose()
    {
        lock (_lock)
        {
            _file.Flush();
            _file.Dispose();
        }
    }
}

public static class Program
{
    public static async Task<int> Main(string[] args)
    {
        // Defaults
        string host = GetArg(args, "--host", "127.0.0.1");
        int port = GetArgInt(args, "--port", 5433);
        string database = GetArg(args, "--db", "yugabyte");
        string username = GetArg(args, "--user", "yugabyte");
        string password = GetArg(args, "--pass", "yugabyte");

        int workersPerPool = GetArgInt(args, "--workers-per-pool", 40);

        int max1 = GetArgInt(args, "--maxpool-tenant1", 2);
        int max2 = GetArgInt(args, "--maxpool-tenant2", 4);
        int max3 = GetArgInt(args, "--maxpool-tenant3", 8);

        int sleepSeconds = GetArgInt(args, "--sleep", 10);

        // Npgsql "Timeout" controls connect + pool wait time. 0 = wait indefinitely for pool.
        int timeoutSeconds = GetArgInt(args, "--timeout", 0);

        string behaviorRaw = GetArg(args, "--pool-behavior", "block").Trim().ToLowerInvariant();
        PoolBehavior behavior = behaviorRaw switch
        {
            "block" => PoolBehavior.Block,
            "failfast" => PoolBehavior.Failfast,
            _ => PoolBehavior.Block
        };

        int iterations = GetArgInt(args, "--iterations", 1);
        int iterSleepMs = GetArgInt(args, "--iter-sleep-ms", 0);

        string logFile = GetArg(args, "--log-file",
            $"yb_smart_driver_search_path_demo_{DateTime.Now:yyyyMMdd_HHmmss}.log");

        using var logger = new DualLogger(logFile);

        // Header
        logger.Info("=== YugabyteDB Npgsql Smart Driver - Multi Pool / Search Path Demo ===");
        logger.Info($"Args              : {string.Join(" ", args.Select(a => a.Contains(' ') ? $"\"{a}\"" : a))}");
        logger.Info($"Host              : {host}");
        logger.Info($"Port              : {port}");
        logger.Info($"Database          : {database}");
        logger.Info($"User              : {username}");
        logger.Info($"Workers per pool  : {workersPerPool}");
        logger.Info($"Sleep (seconds)   : {sleepSeconds}");
        logger.Info($"Timeout (seconds) : {timeoutSeconds}  (0 = wait indefinitely for a pooled connection)");
        logger.Info($"Pool behavior     : {behavior.ToString().ToLowerInvariant()}  (block | failfast)");
        logger.Info($"Iterations        : {iterations}");
        logger.Info($"Iter sleep (ms)   : {iterSleepMs}");
        logger.Info($"Log file          : {logFile}");
        logger.Info("");

        var pools = BuildPools(host, port, database, username, password, timeoutSeconds, max1, max2, max3);

        foreach (var p in pools)
            logger.Info($"Pool {p.Name}: schema={p.Schema}, MaxPoolSize={p.MaxPoolSize}, Application Name={p.AppName}");

        logger.Info("");
        logger.Info("Running workers across ALL pools in parallel...");
        logger.Info("");

        var statsByPool = pools.ToDictionary(p => p.Name, _ => new PoolStats());

        var tasks = new List<Task>();
        foreach (var pool in pools)
        {
            var stats = statsByPool[pool.Name];
            for (int w = 0; w < workersPerPool; w++)
            {
                int workerId = w;
                Interlocked.Increment(ref stats.WorkersStarted);
                tasks.Add(RunWorkerAsync(pool, stats, workerId, iterations, iterSleepMs, sleepSeconds, logger));
            }
        }

        await Task.WhenAll(tasks);

        PrintSummaryToConsoleOnly(pools, statsByPool);

        Console.WriteLine();
        Console.WriteLine("Demo complete.");
        Console.WriteLine($"See the log file for per-worker details:\n  {logFile}");
        return 0;
    }

    private static List<PoolConfig> BuildPools(
        string host,
        int port,
        string database,
        string username,
        string password,
        int timeoutSeconds,
        int max1,
        int max2,
        int max3)
    {
        // Use a single seed host; smart driver should discover topology if supported in this environment.
        string common =
            $"Host={host};" +
            $"Port={port};" +
            $"Database={database};" +
            $"Username={username};" +
            $"Password={EscapeConnStringValue(password)};" +
            $"Load Balance Hosts=true;" +
            $"Timeout={timeoutSeconds};" +
            $"No Reset On Close=true;"; // helps surface session-state reuse issues

        return new List<PoolConfig>
        {
            new PoolConfig
            {
                Name = "Conn1",
                Schema = "tenant1",
                MaxPoolSize = max1,
                AppName = "yb-smart-pool-tenant1",
                ConnectionString = common +
                                   $"Application Name=yb-smart-pool-tenant1;" +
                                   $"Maximum Pool Size={max1};"
            },
            new PoolConfig
            {
                Name = "Conn2",
                Schema = "tenant2",
                MaxPoolSize = max2,
                AppName = "yb-smart-pool-tenant2",
                ConnectionString = common +
                                   $"Application Name=yb-smart-pool-tenant2;" +
                                   $"Maximum Pool Size={max2};"
            },
            new PoolConfig
            {
                Name = "Conn3",
                Schema = "tenant3",
                MaxPoolSize = max3,
                AppName = "yb-smart-pool-tenant3",
                ConnectionString = common +
                                   $"Application Name=yb-smart-pool-tenant3;" +
                                   $"Maximum Pool Size={max3};"
            }
        };
    }

    private static async Task RunWorkerAsync(
        PoolConfig pool,
        PoolStats stats,
        int workerId,
        int iterations,
        int iterSleepMs,
        int sleepSeconds,
        DualLogger logger)
    {
        string ts() => DateTime.Now.ToString("HH:mm:ss.fff", CultureInfo.InvariantCulture);

        bool opened = false;

        try
        {
            await using var conn = new NpgsqlConnection(pool.ConnectionString);

            await conn.OpenAsync();

            opened = true;
            stats.IncActive();

            // OPTION B: force application_name explicitly after OpenAsync
            await using (var setApp = new NpgsqlCommand($"SET application_name TO '{SqlLiteral(pool.AppName)}';", conn))
                await setApp.ExecuteNonQueryAsync();

            // Force schema via search_path
            await using (var setPath = new NpgsqlCommand($"SET search_path TO {SqlIdent(pool.Schema)}, public;", conn))
                await setPath.ExecuteNonQueryAsync();

            // Verify server-side view (file only)
            string appNameNow = await ScalarStringAsync(conn, "SHOW application_name;");
            string schemaNow = await ScalarStringAsync(conn, "SELECT current_schema();");
            logger.Verbose($"{ts()} [POOL {pool.Name}] Worker {workerId:D2} OPEN ok => app='{appNameNow}', schema='{schemaNow}', activeNow={stats.ActiveNow}");

            long localMatches = 0;
            long localMismatches = 0;

            for (int i = 0; i < iterations; i++)
            {
                await using var cmd = new NpgsqlCommand("SELECT current_schema(), value FROM tenant_data WHERE id = 1;", conn);

                await using var reader = await cmd.ExecuteReaderAsync();
                if (await reader.ReadAsync())
                {
                    string gotSchema = reader.GetString(0);
                    string value = reader.IsDBNull(1) ? "<null>" : reader.GetString(1);

                    bool match = string.Equals(gotSchema, pool.Schema, StringComparison.OrdinalIgnoreCase);
                    if (match) localMatches++;
                    else localMismatches++;

                    logger.Verbose($"{ts()} [POOL {pool.Name}] Worker {workerId:D2} iter={i:D3} => schema={gotSchema}, expected={pool.Schema}, match={match}, value='{value}'");
                }
                else
                {
                    localMismatches++;
                    logger.Verbose($"{ts()} [POOL {pool.Name}] Worker {workerId:D2} iter={i:D3} => ERROR: no rows returned");
                }

                if (iterSleepMs > 0)
                    await Task.Delay(iterSleepMs);
            }

            if (sleepSeconds > 0)
            {
                logger.Verbose($"{ts()} [POOL {pool.Name}] Worker {workerId:D2} sleeping for {sleepSeconds}s with an open connection (activeNow={stats.ActiveNow}).");
                await Task.Delay(TimeSpan.FromSeconds(sleepSeconds));
            }

            Interlocked.Add(ref stats.SchemaMatches, localMatches);
            Interlocked.Add(ref stats.SchemaMismatches, localMismatches);

            Interlocked.Increment(ref stats.WorkersSucceeded);
        }
        catch (Exception ex)
        {
            Interlocked.Increment(ref stats.WorkersFailed);
            logger.Verbose($"[POOL {pool.Name}] Worker {workerId:D2} ERROR: {ex.Message}");
        }
        finally
        {
            // Accurate: decrement only if OpenAsync succeeded and we incremented.
            if (opened)
                stats.DecActive();
        }
    }

    private static async Task<string> ScalarStringAsync(NpgsqlConnection conn, string sql)
    {
        await using var cmd = new NpgsqlCommand(sql, conn);
        var obj = await cmd.ExecuteScalarAsync();
        return obj?.ToString() ?? "<null>";
    }

    private static void PrintSummaryToConsoleOnly(List<PoolConfig> pools, Dictionary<string, PoolStats> statsByPool)
    {
        Console.WriteLine();
        Console.WriteLine("=== Summary Per Pool (App-side) ===");

        foreach (var pool in pools)
        {
            var s = statsByPool[pool.Name];

            int maxActive = s.MaxConcurrentActive;
            bool exceeded = maxActive > pool.MaxPoolSize;

            long matches = Interlocked.Read(ref s.SchemaMatches);
            long mismatches = Interlocked.Read(ref s.SchemaMismatches);

            Console.WriteLine($"Pool {pool.Name} (schema={pool.Schema})");
            Console.WriteLine($"  MaxPoolSize configured          : {pool.MaxPoolSize}");
            Console.WriteLine($"  Workers started                 : {s.WorkersStarted}");
            Console.WriteLine($"  Workers succeeded               : {s.WorkersSucceeded}");
            Console.WriteLine($"  Workers failed (timeouts/etc.)  : {s.WorkersFailed}");
            Console.WriteLine($"  Max concurrent active workers   : {maxActive}");
            Console.WriteLine($"  Did MaxActive exceed MaxPool?   : {(exceeded ? "YES (unexpected!)" : "NO (respected)")}");
            Console.WriteLine($"  Schema matches (search_path OK) : {matches}");
            Console.WriteLine($"  Schema mismatches               : {mismatches}");
            Console.WriteLine($"  Did all successful reads use the expected schema? : {(mismatches == 0 && matches > 0 ? "YES" : "NO (check log file)")}");
            Console.WriteLine();
        }
    }

    // Args helpers
    private static string GetArg(string[] args, string name, string defaultValue)
    {
        for (int i = 0; i < args.Length; i++)
        {
            if (string.Equals(args[i], name, StringComparison.OrdinalIgnoreCase))
                return (i + 1 < args.Length) ? args[i + 1] : defaultValue;
        }
        return defaultValue;
    }

    private static int GetArgInt(string[] args, string name, int defaultValue)
    {
        var s = GetArg(args, name, defaultValue.ToString(CultureInfo.InvariantCulture));
        return int.TryParse(s, NumberStyles.Integer, CultureInfo.InvariantCulture, out var v) ? v : defaultValue;
    }

    // SQL helpers
    private static string SqlIdent(string ident) => "\"" + ident.Replace("\"", "\"\"") + "\"";
    private static string SqlLiteral(string s) => s.Replace("'", "''");

    private static string EscapeConnStringValue(string s)
    {
        bool needsQuotes =
            s.Contains(';') ||
            s.StartsWith(' ') ||
            s.EndsWith(' ') ||
            s.Contains('\'') ||
            s.Contains('"');

        if (!needsQuotes) return s;
        return $"'{s.Replace("'", "''")}'";
    }
}
▶️ Step 5: How to Run the Program

Run the program as shown below, modifying the parameters to match your environment:

dotnet run -- \
  --host 172.161.19.214 \
  --port 5433 \
  --db yugabyte \
  --user yugabyte \
  --pass 'Yugabyte123!' \
  --workers-per-pool 40 \
  --maxpool-tenant1 2 \
  --maxpool-tenant2 4 \
  --maxpool-tenant3 8 \
  --iterations 50 \
  --iter-sleep-ms 200 \
  --sleep 10 \
  --timeout 0 \
  --pool-behavior block \
  --log-file aws_block_mode_iterations.log

The total runtime is roughly:

(workers_per_pool / MaxPoolSize) × sleep_seconds
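To make that estimate concrete, here is a quick back-of-the-envelope calculation (plain Python, using this demo's default worker count and pool sizes) of how long each pool needs to cycle all of its workers through in waves:

```python
import math

workers_per_pool = 40
sleep_seconds = 10

# Each "wave" runs MaxPoolSize workers at once, and each wave
# holds its connections for sleep_seconds before releasing them.
for name, max_pool in [("Conn1", 2), ("Conn2", 4), ("Conn3", 8)]:
    waves = math.ceil(workers_per_pool / max_pool)
    print(f"{name}: {waves} waves x {sleep_seconds}s ~= {waves * sleep_seconds}s")
```

With the defaults above this works out to roughly 200s, 100s, and 50s for Conn1, Conn2, and Conn3. Since all pools run in parallel, the overall run is dominated by the smallest pool (Conn1, about 200 seconds here), plus the per-iteration query time.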

What this run is doing

This single run exercises everything we care about:

  ● Creates three different connection strings (three “tenants” / pools)

  ● Forces application_name after OpenAsync() so sessions are easy to observe

  ● Sets search_path per tenant and repeatedly reads a table named tenant_data

  ● Drives contention by running 40 workers per pool and holding connections open (--sleep 10)

  ● Logs per-worker details to aws_block_mode_iterations.log

  ● Prints a short per-pool summary to the console

Expected results (what “correct” looks like):

  ● Pool max limits should be honored per pool

    ○ In the summary, you should see:

      ▪ Conn1 Max concurrent active workers ≤ 2

      ▪ Conn2 Max concurrent active workers ≤ 4

      ▪ Conn3 Max concurrent active workers ≤ 8

  ● search_path should always be honored

    ○ You should see:

      ▪ Schema mismatches = 0

      ▪ And the final line should indicate all successful reads used the expected schema.
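Because the program forces a distinct application_name per pool, you can also observe the pools from the server side while the run is in progress. A sketch, run from ysqlsh:

```sql
SELECT application_name, count(*) AS connections
FROM pg_stat_activity
WHERE application_name LIKE 'yb-smart-pool-%'
GROUP BY application_name
ORDER BY application_name;
```

Note that pg_stat_activity reflects backends on the node you are connected to, so with load balancing spreading connections across the cluster you may need to run this on each node to see the cluster-wide totals.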

Actual run:

[ec2-user@ip-172-161-29-133 YbSmartDriverSearchPathDemo]$ dotnet run -- --host 172.161.19.214 --port 5433 --db yugabyte --user yugabyte --pass 'Yugabyte123!' --workers-per-pool 40 --maxpool-tenant1 2 --maxpool-tenant2 4 --maxpool-tenant3 8 --iterations 50 --iter-sleep-ms 200 --sleep 10 --timeout 0 --pool-behavior block --log-file aws_block_mode_iterations.log
=== YugabyteDB Npgsql Smart Driver - Multi Pool / Search Path Demo ===
Args              : --host 172.161.19.214 --port 5433 --db yugabyte --user yugabyte --pass Yugabyte123! --workers-per-pool 40 --maxpool-tenant1 2 --maxpool-tenant2 4 --maxpool-tenant3 8 --iterations 50 --iter-sleep-ms 200 --sleep 10 --timeout 0 --pool-behavior block --log-file aws_block_mode_iterations.log
Host              : 172.161.19.214
Port              : 5433
Database          : yugabyte
User              : yugabyte
Workers per pool  : 40
Sleep (seconds)   : 10
Timeout (seconds) : 0  (0 = wait indefinitely for a pooled connection)
Pool behavior     : block  (block | failfast)
Iterations        : 50
Iter sleep (ms)   : 200
Log file          : aws_block_mode_iterations.log

Pool Conn1: schema=tenant1, MaxPoolSize=2, Application Name=yb-smart-pool-tenant1
Pool Conn2: schema=tenant2, MaxPoolSize=4, Application Name=yb-smart-pool-tenant2
Pool Conn3: schema=tenant3, MaxPoolSize=8, Application Name=yb-smart-pool-tenant3

Running workers across ALL pools in parallel...


=== Summary Per Pool (App-side) ===
Pool Conn1 (schema=tenant1)
  MaxPoolSize configured          : 2
  Workers started                 : 40
  Workers succeeded               : 40
  Workers failed (timeouts/etc.)  : 0
  Max concurrent active workers   : 6
  Did MaxActive exceed MaxPool?   : YES (unexpected!)
  Schema matches (search_path OK) : 2000
  Schema mismatches               : 0
  Did all successful reads use the expected schema? : YES

Pool Conn2 (schema=tenant2)
  MaxPoolSize configured          : 4
  Workers started                 : 40
  Workers succeeded               : 40
  Workers failed (timeouts/etc.)  : 0
  Max concurrent active workers   : 6
  Did MaxActive exceed MaxPool?   : YES (unexpected!)
  Schema matches (search_path OK) : 2000
  Schema mismatches               : 0
  Did all successful reads use the expected schema? : YES

Pool Conn3 (schema=tenant3)
  MaxPoolSize configured          : 8
  Workers started                 : 40
  Workers succeeded               : 40
  Workers failed (timeouts/etc.)  : 0
  Max concurrent active workers   : 7
  Did MaxActive exceed MaxPool?   : NO (respected)
  Schema matches (search_path OK) : 2000
  Schema mismatches               : 0
  Did all successful reads use the expected schema? : YES


Demo complete.
See the log file for per-worker details:
  aws_block_mode_iterations.log

Takeaways

  ● Correctness: search_path isolation looks solid (0 mismatches, reads always came from the expected schema).

  ● Stability: all workers succeeded (no timeouts or failures).

  ● Bug reproduced: per-pool max sizing is not consistently honored (Conn1 and Conn2 exceeded their configured MaxPoolSize).

This aligns with a real, customer-reported issue and is actively being worked on. As soon as the fix is available and applied to your environment, you can rerun this exact command to verify the bug is squashed: your per-pool MaxActive values should then stay within 2/4/8.

🏁 Conclusion

This single example provides a clean, repeatable way to validate two key behaviors of the YugabyteDB Npgsql Smart Driver:

  ● Tenant correctness via search_path (working today), and

  ● Per-connection-string pool isolation and sizing (currently inconsistent, fix in progress).

Once the driver fix lands, rerun this command and confirm the summary shows:

  ● tenant1 MaxActive ≤ 2

  ● tenant2 MaxActive ≤ 4

  ● tenant3 MaxActive ≤ 8

That makes this demo both a great troubleshooting tool now and a ready-made regression test later.

Have Fun!

Somewhere between shovel push #3 and #47, I realized my wife has been right all along…

California is starting to sound very reasonable! ❄️➡️🌞