<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Alauddin Al Azad]]></title><description><![CDATA[Experienced software engineer with a passion for developing innovative programs that expedite the efficiency and effectiveness of organizational success. Well-v]]></description><link>https://helloazad.com</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 17:32:44 GMT</lastBuildDate><atom:link href="https://helloazad.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Scaling PostgreSQL Cloud SQL Connections w/ PgBouncer & Kubernetes]]></title><description><![CDATA[We have an ag tech service running on GCP and we have a microservice that we do maintain consisting of Cloud SQL, Cloud Run, Cloud Functions. When we are developing and scaling our services, from the beginning we have been facing issues with Cloud SQ...]]></description><link>https://helloazad.com/scaling-postgresql-cloud-sql-connections-w-pgbouncer-kubernetes</link><guid isPermaLink="true">https://helloazad.com/scaling-postgresql-cloud-sql-connections-w-pgbouncer-kubernetes</guid><category><![CDATA[PgBouncer]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[GCP]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Alauddin Al Azad]]></dc:creator><pubDate>Sun, 23 Oct 2022 20:15:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1666552198830/WcfO3xzvT.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We run an <a target="_blank" href="https://portal.ellingson.app/">ag tech service</a> on GCP, where we maintain microservices built on Cloud SQL, Cloud Run, and Cloud Functions.
As we developed and scaled these services, we kept running into Cloud SQL issues, mostly connection limits. Since we have a single Cloud SQL instance that many different services talk to, we were looking for a cheap, scalable solution that could act as a global connection pooler.</p>
<p>While researching, I found an exciting approach in a three-part article from <a target="_blank" href="https://medium.com/futuretech-industries">FutureTech Industries</a> about using Helm, Kubernetes, PgBouncer, and Cloud SQL to drastically increase the number of connections a PostgreSQL database can handle. It was very informative, but since I was new to Kubernetes and not well versed in Helm, I chose a simpler setup using only Kubernetes.</p>
<h3 id="heading-so-what-is-pgbouncer-actually">So, what is PgBouncer actually?</h3>
<p>PgBouncer is a lightweight connection pool manager for Greenplum and PostgreSQL. PgBouncer maintains a pool of connections for each database and user combination. It either creates a new database connection for a client or reuses an existing connection for the same user and database.
Source: <a target="_blank" href="https://www.pgbouncer.org/">https://www.pgbouncer.org/</a></p>
<h3 id="heading-setup-cloud-sql">Setup Cloud SQL</h3>
<ol>
<li>Create a Cloud SQL (PostgreSQL) instance. This is easy to do, as GCP has plenty of documentation on it. Be sure to place it in the same region as your Kubernetes cluster!</li>
<li>Create a DB user for PgBouncer.</li>
</ol>
<h2 id="heading-create-a-kubernetes-cluster">Create a kubernetes cluster</h2>
<p>Go to Kubernetes Engine, create a cluster, and select "Autopilot: Google manages your cluster (Recommended)". You are good to go, as Autopilot mode will take care of everything else.</p>
<h2 id="heading-connecting-to-kubernetes-cluster-that-you-have-created">Connecting to Kubernetes Cluster that you have created</h2>
<pre><code>gcloud container clusters get-credentials YOUR_CLUSTER_NAME --region us-central1
</code></pre><p>Now you can access your Kubernetes cluster with kubectl from your local machine.</p>
<h2 id="heading-lets-get-started">Let's get started!</h2>
<ul>
<li>Create namespace <pre><code>kubectl create namespace pgb-namespace
</code></pre></li>
<li><p>Set namespace </p>
<pre><code>kubectl config set-context --current --namespace=pgb-namespace
</code></pre></li>
<li><p>Store the service account JSON key file as a secret. Place the JSON file in your root folder, rename it to <strong>postgres-sql-credential.json</strong>, and run this command.</p>
</li>
</ul>
<pre><code>kubectl create secret generic cloudsql-instance-credentials \
   --<span class="hljs-keyword">from</span>-file=credentials.json=postgres-sql-credential.json
</code></pre><ul>
<li>Create a <strong>pgbouncer.ini</strong> file and paste in the following content.</li>
</ul>
<pre><code>[databases]
* = host=localhost port=<span class="hljs-number">5432</span> user=postgres password=YOUR_PASSWORD

[pgbouncer]
listen_port=<span class="hljs-number">6432</span>
listen_addr=<span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>
auth_file=<span class="hljs-regexp">/opt/</span>bitnami/pgbouncer/conf/userlist.txt
auth_type=md5
pidfile=<span class="hljs-regexp">/opt/</span>bitnami/pgbouncer/tmp/pgbouncer.pid
logfile=<span class="hljs-regexp">/opt/</span>bitnami/pgbouncer/logs/pgbouncer.log
admin_users=postgres
client_tls_sslmode=disable
server_tls_sslmode=disable
pool_mode=transaction
server_reset_query = DISCARD ALL
ignore_startup_parameters = extra_float_digits
application_name_add_host = <span class="hljs-number">1</span>
max_client_conn = <span class="hljs-number">10000</span>
autodb_idle_timeout = <span class="hljs-number">3600</span>
default_pool_size = <span class="hljs-number">20</span>
max_db_connections = <span class="hljs-number">80</span>
max_user_connections = <span class="hljs-number">80</span>
</code></pre><blockquote>
<p>Explanation: </p>
<p>Suppose your database allows a maximum of 100 connections. Leaving 20% of them in reserve for the superuser, set max_db_connections = 80 and max_user_connections = 80.</p>
<p>With max_client_conn = 10000, PgBouncer can accept up to 10000 incoming client connections!
With default_pool_size = 20 and, say, 4 databases in Cloud SQL, 4 * 20 = 80 = max_db_connections.</p>
</blockquote>
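<p>The sizing rule above can be sketched as a quick sanity check. (This is just arithmetic for intuition; the figures are the example values from this post, not universal defaults.)</p>

```python
# Connection budget for PgBouncer in front of one Cloud SQL instance,
# using the example figures from the explanation above.
max_instance_connections = 100   # Cloud SQL max_connections
superuser_reserve = max_instance_connections * 20 // 100  # keep 20% for superuser
max_db_connections = max_instance_connections - superuser_reserve

num_databases = 4        # databases matched by the wildcard [databases] entry
default_pool_size = 20   # server connections per database/user pair

# PgBouncer opens at most default_pool_size server connections per
# database, so the total must fit inside max_db_connections.
total_server_connections = num_databases * default_pool_size

print(max_db_connections)        # 80
print(total_server_connections)  # 80
```

If you add more databases behind the same instance, either shrink default_pool_size or raise the instance's connection limit so the product stays within budget.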
<ul>
<li><p>Create a <strong>userlist.txt</strong> file and paste in this content:</p>
<pre><code>"admin" "md545f2603610af569b6155c45067268c6b"
</code></pre><blockquote>
<p>Explanation: this entry follows the PgBouncer auth file format <code>"username" "md5-hash"</code>, where the hash is md5(password + username) prefixed with "md5".
For this hash, the <strong>username</strong> is <strong>admin</strong> and the <strong>password</strong> is <strong>1234</strong>.
 <a target="_blank" href="https://www.pgbouncer.org/config.html#authentication-file-format">Follow this doc to make your own</a></p>
</blockquote>
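<p>If you want to generate your own entry rather than copy mine, here is a minimal sketch. (The <code>userlist_entry</code> helper is my own name for illustration; the "md5" + md5(password + username) scheme is the format PgBouncer and PostgreSQL use for md5 auth.)</p>

```python
import hashlib

def userlist_entry(username: str, password: str) -> str:
    """Build a PgBouncer auth_file line: "username" "md5<hash>",
    where <hash> is md5(password + username)."""
    digest = hashlib.md5((password + username).encode()).hexdigest()
    return f'"{username}" "md5{digest}"'

print(userlist_entry("admin", "1234"))
```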
</li>
<li><p>Now store the newly created pgbouncer.ini and userlist.txt in a secret. Make sure both files are in your root folder.</p>
<pre><code>kubectl create secret generic pgb-configuration \
 --<span class="hljs-keyword">from</span>-file=pgbouncer.ini --<span class="hljs-keyword">from</span>-file=userlist.txt
</code></pre></li>
<li>Store your DB username and password as a secret.<pre><code>kubectl create secret generic db-credentials \
 --<span class="hljs-keyword">from</span>-literal=username=postgres --<span class="hljs-keyword">from</span>-literal=password=YOUR_DB_PASS
</code></pre></li>
<li>Create a <strong>kube_pgb_proxy.yaml</strong> file and paste in this content.</li>
</ul>
<pre><code>apiVersion: apps/v1
<span class="hljs-attr">kind</span>: Deployment
<span class="hljs-attr">metadata</span>:
  name: pgproxy
  <span class="hljs-attr">namespace</span>: pgb-namespace
<span class="hljs-attr">spec</span>:
  replicas: <span class="hljs-number">1</span>
  <span class="hljs-attr">selector</span>:
    matchLabels:
      app: pgproxy
  <span class="hljs-attr">revisionHistoryLimit</span>: <span class="hljs-number">1</span>
  <span class="hljs-attr">strategy</span>:
    type: RollingUpdate
  <span class="hljs-attr">template</span>:
    metadata:
      labels:
        app: pgproxy
        <span class="hljs-attr">tier</span>: backend
    <span class="hljs-attr">spec</span>:
      securityContext:
        runAsUser: <span class="hljs-number">0</span>
        <span class="hljs-attr">runAsNonRoot</span>: <span class="hljs-literal">false</span>
      <span class="hljs-attr">containers</span>:
        - name: cloudsql-proxy
          <span class="hljs-attr">resources</span>:
            requests:
              memory: <span class="hljs-string">"500Mi"</span>
              <span class="hljs-attr">cpu</span>: <span class="hljs-string">"500m"</span>
              ephemeral-storage: <span class="hljs-string">"1Gi"</span>
            <span class="hljs-attr">limits</span>:
              memory: <span class="hljs-string">"1000Mi"</span>
              <span class="hljs-attr">cpu</span>: <span class="hljs-string">"1000m"</span>
              ephemeral-storage: <span class="hljs-string">"1Gi"</span>
          <span class="hljs-attr">image</span>: gcr.io/cloudsql-docker/gce-proxy:<span class="hljs-number">1.11</span>
          <span class="hljs-attr">command</span>:
            [
              <span class="hljs-string">"/cloud_sql_proxy"</span>,
              <span class="hljs-string">"--dir=/cloudsql"</span>,
              <span class="hljs-string">"-instances=**YOUR_INSTANCE_NAME_STRING**=tcp:5432"</span>,
              <span class="hljs-string">"-credential_file=/secrets/cloudsql/credentials.json"</span>,
            ]
          <span class="hljs-attr">volumeMounts</span>:
            - name: cloudsql-instance-credentials
              <span class="hljs-attr">mountPath</span>: <span class="hljs-regexp">/secrets/</span>cloudsql
              <span class="hljs-attr">readOnly</span>: <span class="hljs-literal">true</span>
            - name: cloudsql
              <span class="hljs-attr">mountPath</span>: /cloudsql
        - name: pgproxy
          <span class="hljs-attr">env</span>:
            - name: POSTGRESQL_HOST
              <span class="hljs-attr">value</span>: localhost
            - name: POSTGRESQL_PASSWORD
              <span class="hljs-attr">valueFrom</span>:
                secretKeyRef:
                  name: db-credentials
                  <span class="hljs-attr">key</span>: password
            - name: POSTGRESQL_USERNAME
              <span class="hljs-attr">valueFrom</span>:
                secretKeyRef:
                  name: db-credentials
                  <span class="hljs-attr">key</span>: username
          <span class="hljs-attr">volumeMounts</span>:
            - name: pgb-configuration
              <span class="hljs-attr">mountPath</span>: <span class="hljs-regexp">/bitnami/</span>pgbouncer/conf
              <span class="hljs-attr">readOnly</span>: <span class="hljs-literal">true</span>
          <span class="hljs-attr">image</span>: bitnami/pgbouncer:latest
          <span class="hljs-attr">lifecycle</span>:
            preStop:
              exec:
                command:
                  - <span class="hljs-regexp">/bin/</span>sh
                  - -c
                  - killall -INT pgbouncer &amp;&amp; sleep <span class="hljs-number">120</span>
          <span class="hljs-attr">ports</span>:
            - containerPort: <span class="hljs-number">6432</span>
      <span class="hljs-attr">volumes</span>:
        - name: cloudsql-instance-credentials
          <span class="hljs-attr">secret</span>:
            secretName: cloudsql-instance-credentials
        - name: pgb-configuration
          <span class="hljs-attr">secret</span>:
            secretName: pgb-configuration
        - name: cloudsql
          <span class="hljs-attr">emptyDir</span>: {}

---
apiVersion: v1
<span class="hljs-attr">kind</span>: Service
<span class="hljs-attr">metadata</span>:
  name: pgproxy
  <span class="hljs-attr">namespace</span>: pgb-namespace
  <span class="hljs-attr">annotations</span>:
    cloud.google.com/load-balancer-type: <span class="hljs-string">"Internal"</span>
<span class="hljs-attr">spec</span>:
  type: LoadBalancer
  <span class="hljs-attr">selector</span>:
    app: pgproxy
  <span class="hljs-attr">ports</span>:
    - port: <span class="hljs-number">6432</span>
      <span class="hljs-attr">targetPort</span>: <span class="hljs-number">6432</span>
</code></pre><blockquote>
<p>Leave all the values as they are, except the DB instance connection string (YOUR_INSTANCE_NAME_STRING). You will find it in GCP Cloud SQL: </p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666551563963/uNOw0DgvN.png" alt="Screenshot 2022-10-24 at 12.58.43 AM.png" /></p>
<ul>
<li>Now apply the created yaml file.</li>
</ul>
<pre><code>kubectl apply -f kube_pgb_proxy.yaml
</code></pre><blockquote>
<p>If everything works well, you will find one workload and one service in your Kubernetes Engine console.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666551792267/0SwlygNXQ.png" alt="Screenshot 2022-10-24 at 1.00.47 AM.png" />
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666551801958/BjroQvyBn.png" alt="Screenshot 2022-10-24 at 1.01.06 AM.png" /></p>
<p>Congrats! Now you have your PgBouncer server running with: </p>
<p>db_host: 10.148.0.34</p>
<p>db_port: 6432</p>
<p>db_username: admin</p>
<p>db_pass: 1234</p>
</blockquote>
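<p>Your applications can now point at the internal load balancer instead of Cloud SQL directly. A minimal sketch of building the connection URL (the host, user, and password are the example values above; the <code>mydb</code> database name and the <code>pgbouncer_dsn</code> helper are mine, for illustration):</p>

```python
# Build a libpq-style DSN targeting PgBouncer's internal load balancer
# (port 6432) instead of Cloud SQL's port 5432. Any PostgreSQL client
# library (psycopg2, asyncpg, JDBC, ...) accepts an equivalent URL.
def pgbouncer_dsn(host: str, port: int, user: str, password: str, dbname: str) -> str:
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"

dsn = pgbouncer_dsn("10.148.0.34", 6432, "admin", "1234", "mydb")
print(dsn)  # postgresql://admin:1234@10.148.0.34:6432/mydb
```

Because pool_mode is set to transaction, avoid session-level features (prepared statements held across transactions, advisory locks, LISTEN/NOTIFY) in clients that connect through this DSN.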
<ul>
<li>Now that you have a running PgBouncer deployment, let's create a Horizontal Pod Autoscaler. Create an <strong>hpa.yaml</strong> file and paste this content there.</li>
</ul>
<pre><code>apiVersion: autoscaling/v1
<span class="hljs-attr">kind</span>: HorizontalPodAutoscaler
<span class="hljs-attr">metadata</span>:
  name: pgb-hpa
  <span class="hljs-attr">namespace</span>: pgb-namespace
<span class="hljs-attr">spec</span>:
  scaleTargetRef:
    apiVersion: apps/v1
    <span class="hljs-attr">kind</span>: Deployment
    <span class="hljs-attr">name</span>: pgproxy
  <span class="hljs-attr">minReplicas</span>: <span class="hljs-number">1</span>
  <span class="hljs-attr">maxReplicas</span>: <span class="hljs-number">10</span>
  <span class="hljs-attr">targetCPUUtilizationPercentage</span>: <span class="hljs-number">75</span>
</code></pre><blockquote>
<p>Explanation:</p>
<p>Here minReplicas is 1, maxReplicas is 10, and targetCPUUtilizationPercentage is 75. When average pod CPU utilization exceeds 75%, the autoscaler will create additional pods (up to 10) to handle the incoming connections.</p>
</blockquote>
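<p>For intuition, the HPA's core rule is desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped between minReplicas and maxReplicas. A small sketch with the values from this manifest (the <code>desired_replicas</code> helper is mine, for illustration):</p>

```python
import math

def desired_replicas(current: int, cpu_pct: float,
                     target_pct: float = 75, lo: int = 1, hi: int = 10) -> int:
    """autoscaling/v1 rule: scale proportionally to CPU utilization,
    clamped between minReplicas (lo) and maxReplicas (hi)."""
    desired = math.ceil(current * cpu_pct / target_pct)
    return max(lo, min(desired, hi))

print(desired_replicas(1, 150))  # 2  -> one pod at 150% CPU triggers a second
print(desired_replicas(2, 75))   # 2  -> at target utilization, no change
print(desired_replicas(8, 300))  # 10 -> capped at maxReplicas
```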
<ul>
<li>Now execute this command to apply.</li>
</ul>
<pre><code>kubectl apply -f hpa.yaml
</code></pre><h3 id="heading-conclusion">Conclusion</h3>
<p>I hope this article helps you!</p>
]]></content:encoded></item></channel></rss>