
Disaster Recovery for PGVector Langchain Application with Percona PostgreSQL

Prerequisites

  • AWS CLI
  • yq
  • kubectl
  • Nova Control Plane installed with 3 workload clusters connected

File paths are given relative to the try-nova root directory.

First, export the following environment variables so that the subsequent steps in this tutorial are easy to follow.

export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export NOVA_WORKLOAD_CLUSTER_1=wlc-1
export NOVA_WORKLOAD_CLUSTER_2=wlc-2

Export these additional environment variables if you installed Nova using the tarball.

export K8S_HOSTING_CLUSTER_CONTEXT=k8s-cluster-hosting-cp
export K8S_CLUSTER_CONTEXT_1=${NOVA_WORKLOAD_CLUSTER_1}
export K8S_CLUSTER_CONTEXT_2=${NOVA_WORKLOAD_CLUSTER_2}

Alternatively, export these environment variables if you installed Nova using the setup scripts provided in the try-nova repository.

export K8S_HOSTING_CLUSTER_CONTEXT=kind-hosting-cluster
export K8S_CLUSTER_CONTEXT_1=kind-wlc-1
export K8S_CLUSTER_CONTEXT_2=kind-wlc-2
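As an optional sanity check, you can confirm that the contexts you exported actually exist in your kubeconfig before proceeding:

kubectl config get-contexts -o name | grep -E "${NOVA_CONTROLPLANE_CONTEXT}|${K8S_CLUSTER_CONTEXT_1}|${K8S_CLUSTER_CONTEXT_2}"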

Setting Up S3 Access for Backups

Our first step involves setting up an S3 bucket for backups. Follow these commands to create a bucket and configure access:

  1. Create the S3 bucket:
REGION=eu-west-2

aws s3api create-bucket \
--bucket nova-postgresql-backup \
--region $REGION \
--create-bucket-configuration LocationConstraint=$REGION
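You can confirm the bucket exists before moving on:

aws s3api head-bucket --bucket nova-postgresql-backup --region $REGION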
  2. Create IAM Policy:
aws iam create-policy \
--policy-name read-write-list-s3-nova-postgresql-backup \
--policy-document file://examples/pgvector-disaster-recovery/s3-policy.json
  3. List Policies to Verify:
aws iam list-policies --query 'Policies[?PolicyName==`read-write-list-s3-nova-postgresql-backup`].Arn' --output text
  4. Create User and Attach Policy:
aws iam create-user --no-cli-pager --user-name s3-backup-service-account

POLICYARN=$(aws iam list-policies --query 'Policies[?PolicyName==`read-write-list-s3-nova-postgresql-backup`].Arn' --output text)
aws iam attach-user-policy \
--policy-arn $POLICYARN \
--user-name s3-backup-service-account

aws iam create-access-key --user-name s3-backup-service-account

The last command returns an access key for the new user, for example:

{
    "AccessKey": {
        "UserName": "s3-backup-service-account",
        "AccessKeyId": "AKIAXXXX",
        "Status": "Active",
        "SecretAccessKey": "VaC0xxxx",
        "CreateDate": "2023-12-13T13:59:34+00:00"
    }
}

NOTE: Before rerunning this tutorial, make sure that the bucket you use is empty.
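If you need to check or clean the bucket before a rerun, a minimal sketch using standard AWS CLI commands (the recursive rm deletes every object, so use it with care):

# List everything currently stored in the backup bucket.
aws s3 ls s3://nova-postgresql-backup --recursive

# Uncomment to remove all objects before rerunning the tutorial.
# aws s3 rm s3://nova-postgresql-backup --recursive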

Note down the AccessKeyId and SecretAccessKey values and substitute them into examples/pgvector-disaster-recovery/template-s3-bucket-access-key-secret.txt, then base64-encode the file:

base64 -i examples/pgvector-disaster-recovery/template-s3-bucket-access-key-secret.txt

Place the output in examples/pgvector-disaster-recovery/s3-access-secret.yaml
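For illustration, the substitution and encoding can be scripted. This is only a sketch: the placeholder names inside the template are assumptions, so check the actual template file in the repo before using it.

# Hypothetical placeholder names; adjust them to match the real template file.
ACCESS_KEY_ID="AKIAXXXX"          # AccessKeyId from the create-access-key output
SECRET_ACCESS_KEY="VaC0xxxx"      # SecretAccessKey from the create-access-key output

# Substitute the credentials and base64-encode the result for s3-access-secret.yaml.
sed -e "s|<ACCESS_KEY_ID>|${ACCESS_KEY_ID}|" \
    -e "s|<SECRET_ACCESS_KEY>|${SECRET_ACCESS_KEY}|" \
    examples/pgvector-disaster-recovery/template-s3-bucket-access-key-secret.txt | base64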

Uploading the pgvector Postgres extension

In order to use pgvector, we need to place it in S3 so that the Percona operator can find and install it. In this example we'll use the same S3 bucket as for backups, simply for convenience. To do that, run:

aws s3 cp --profile my-aws-profile --endpoint-url http://172.18.255.240:9000 examples/pgvector-disaster-recovery/pgvector-pg15-0.5.1.tar.gz s3://nova-postgresql-backup/pgvector-pg15-0.5.1.tar.gz
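To confirm the archive landed in the bucket, you can list it (drop --profile and --endpoint-url if you are talking to AWS S3 directly rather than a local MinIO endpoint):

aws s3 ls --profile my-aws-profile --endpoint-url http://172.18.255.240:9000 s3://nova-postgresql-backup/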

Installing Percona PostgreSQL Operator

Now let's install the Percona PostgreSQL Operator and set up the clusters:

  1. Create Schedule Policies: The policies below will schedule the PostgreSQL Operator to clusters 1 and 2, the primary PostgreSQL cluster to cluster 1, and the standby to cluster 2. HAProxy will also be scheduled to cluster 2.
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create -f examples/pgvector-disaster-recovery/schedule-policies.yaml
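As a quick sanity check, you can list the policies on the Nova Control Plane (assuming the SchedulePolicy CRD is registered with the plural schedulepolicies):

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get schedulepolicies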
  2. Clone the Percona PostgreSQL Repository:
REPO_DIR="percona-postgresql-operator"
REPO_URL="https://github.com/percona/percona-postgresql-operator"
REPO_BRANCH="v2.3.0"

if [ -d "$REPO_DIR" ]; then
rm -rf $REPO_DIR
fi

git clone -b $REPO_BRANCH $REPO_URL
  3. Proceed with installing the Percona PostgreSQL Operator:
echo "Creating operator namespace"
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create ns pgvector-operator --dry-run=client -o yaml | yq e ".metadata.labels.psql-cluster = \"all\"" | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f -

echo "Installing operator to cluster all"
cat percona-postgresql-operator/deploy/bundle.yaml | python3 add_labels.py namespace psql-cluster all | python3 add_labels.py cluster psql-cluster all | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} --namespace pgvector-operator create -f -
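Optionally, wait for the operator to come up on the workload clusters before moving on. The deployment name below is taken from the operator bundle and may differ in other versions, so treat this as a sketch:

kubectl --context=${K8S_CLUSTER_CONTEXT_1} -n pgvector-operator rollout status deploy/percona-postgresql-operator --timeout=180s
kubectl --context=${K8S_CLUSTER_CONTEXT_2} -n pgvector-operator rollout status deploy/percona-postgresql-operator --timeout=180s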

When running on AWS use:

# echo "Settting up s3 access"
cat examples/pgvector-disaster-recovery/s3-access-secret.yaml | python3 add_labels.py namespace psql-cluster all | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} --namespace pgvector-operator create -f -

and when running locally with Minio:

# echo "Settting up s3 access"
cat examples/pgvector-disaster-recovery/s3-access-secret-minio.yaml | python3 add_labels.py namespace psql-cluster all | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} --namespace pgvector-operator create -f -
  4. Configure the two PostgreSQL clusters:
cat examples/pgvector-disaster-recovery/cluster_1_cr.yaml | python3 add_labels.py namespace psql-cluster cluster-1 | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} --namespace pgvector-operator create -f -
cat examples/pgvector-disaster-recovery/cluster_2_cr.yaml | python3 add_labels.py namespace psql-cluster cluster-2 | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} --namespace pgvector-operator create -f -
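Once the operator picks up the custom resources, the clusters should appear on the Nova Control Plane:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} -n pgvector-operator get perconapgclusters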
  5. Set up a load balancer in front of our databases. The load balancer is needed to keep serving client connections after the recovery switch is made. For our example we'll use HAProxy. We'll need the address of our active PostgreSQL cluster; to get it, run:
kubectl wait perconapgcluster/cluster1 -n pgvector-operator --context=${K8S_CLUSTER_CONTEXT_1} '--for=jsonpath={.status.host}' --timeout=300s
DB_HOST=$(kubectl --context=${K8S_CLUSTER_CONTEXT_1} get perconapgcluster/cluster1 -n pgvector-operator -o jsonpath='{.status.host}')
envsubst < "examples/pgvector-disaster-recovery/haproxy.cfg" > "./haproxy.cfg"
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create configmap haproxy-config --from-file=haproxy.cfg=./haproxy.cfg --dry-run=client -o yaml | python3 add_labels.py namespace cluster cluster-ha-proxy | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f -

Then apply the actual HAProxy deployment and service:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create -f examples/pgvector-disaster-recovery/haproxy.yaml
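A quick check (a sketch; resource names come from haproxy.yaml, so adjust if yours differ) that HAProxy landed on cluster 2, where the cluster-ha-proxy policy places it:

kubectl --context=${K8S_CLUSTER_CONTEXT_2} -n default get all | grep -i haproxy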

Set up the RecoveryPlan

The RecoveryPlan below (examples/pgvector-disaster-recovery/recovery-plan.yaml) defines the failover steps: set cluster 1 to standby, promote cluster 2 to primary, read cluster 2's host, and repoint the HAProxy configuration at it.

apiVersion: recovery.elotl.co/v1alpha1
kind: RecoveryPlan
metadata:
  name: psql-primary-failover-plan
spec:
  alertLabels:
    app: percona-postgresql-cluster-1
  steps:
    - type: patch # set cluster 1 to standby
      patch:
        apiVersion: "pgv2.percona.com/v2"
        resource: "perconapgclusters"
        namespace: "pgvector-operator"
        name: "cluster1"
        override:
          fieldPath: "spec.standby.enabled"
          value:
            raw: true
        patchType: "application/merge-patch+json"
    - type: patch # set cluster 2 as new primary
      patch:
        apiVersion: "pgv2.percona.com/v2"
        resource: "perconapgclusters"
        namespace: "pgvector-operator"
        name: "cluster2"
        override:
          fieldPath: "spec.standby.enabled"
          value:
            raw: false
        patchType: "application/merge-patch+json"
    - type: readField # read cluster 2 host
      readField:
        apiVersion: "pgv2.percona.com/v2"
        resource: "perconapgclusters"
        namespace: "pgvector-operator"
        name: "cluster2"
        fieldPath: "status.host"
        outputKey: "Cluster2IP"
    - type: patch # update HAProxy to point to cluster 2
      patch:
        apiVersion: "v1"
        resource: "configmaps"
        namespace: "default"
        name: "haproxy-config"
        override:
          fieldPath: "data"
          value:
            raw: {"haproxy.cfg": "defaults\n mode tcp\n timeout connect 5000ms\n timeout client 50000ms\n timeout server 50000ms\n\nfrontend fe_main\n bind *:5432\n default_backend be_db_2\n\nbackend be_db_2\n server db2 {{ .Values.Cluster2IP }}:5432 check"}
        patchType: "application/merge-patch+json"

Let's run it

The recovery plan reads the host of the standby cluster, so we need to make sure it has been assigned before proceeding:

kubectl wait perconapgclusters/cluster2 -n pgvector-operator --context=${NOVA_CONTROLPLANE_CONTEXT} '--for=jsonpath={.status.host}' --timeout=300s

Add the recovery plan:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create -f examples/pgvector-disaster-recovery/recovery-plan.yaml
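You can confirm the plan was registered on the control plane (assuming the RecoveryPlan CRD plural is recoveryplans):

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get recoveryplans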

In production systems, alerts are sent to Nova through the recovery webhook by a metrics service such as Prometheus with Alertmanager. For the purposes of this tutorial we will simulate receiving an alert by adding it to Nova directly. When the alert is added, Nova looks for a recovery plan by matching the alert labels to the recovery plan labels; once it finds the plan, it executes it.

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create -f examples/pgvector-disaster-recovery/received-alert.yaml

Let's verify that the recovery succeeded

Check whether cluster 1 (which we assume has failed in this tutorial) is set to standby.

kubectl wait perconapgclusters/cluster1 -n pgvector-operator --context=${NOVA_CONTROLPLANE_CONTEXT} '--for=jsonpath={.spec.standby.enabled}'=true --timeout=180s

Check whether cluster 2 took over the role of primary (standby set to false).

kubectl wait perconapgclusters/cluster2 -n pgvector-operator --context=${NOVA_CONTROLPLANE_CONTEXT} '--for=jsonpath={.spec.standby.enabled}'=false --timeout=180s

Check whether HAProxy now points to the new primary cluster, cluster 2.

kubectl get cm/haproxy-config --context=${NOVA_CONTROLPLANE_CONTEXT} -n default -o jsonpath='{.data.haproxy\.cfg}' | grep 'server db2'
server db2 172.18.255.240:5432 check
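Finally, as an end-to-end sketch, you can connect through HAProxy and confirm that clients still reach a primary. The Service name haproxy and the postgres user below are placeholders; take the real values from haproxy.yaml and the cluster's user secret.

# Hypothetical Service name and credentials; adjust to your haproxy.yaml and user secret.
HAPROXY_IP=$(kubectl --context=${K8S_CLUSTER_CONTEXT_2} -n default get svc haproxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
PGPASSWORD='<password-from-user-secret>' psql -h "${HAPROXY_IP}" -p 5432 -U postgres -c 'SELECT 1;'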

Cleanup

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f examples/pgvector-disaster-recovery/received-alert.yaml
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f examples/pgvector-disaster-recovery/recovery-plan.yaml
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f examples/pgvector-disaster-recovery/haproxy.yaml
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create configmap haproxy-config --from-file=haproxy.cfg=examples/pgvector-disaster-recovery/haproxy.cfg --dry-run=client -o yaml | python3 add_labels.py namespace cluster cluster-ha-proxy | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f -
cat examples/pgvector-disaster-recovery/cluster_1_cr.yaml | python3 add_labels.py namespace psql-cluster cluster-1 | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} --namespace pgvector-operator delete -f -
cat examples/pgvector-disaster-recovery/cluster_2_cr.yaml | python3 add_labels.py namespace psql-cluster cluster-2 | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} --namespace pgvector-operator delete -f -
cat percona-postgresql-operator/deploy/bundle.yaml | python3 add_labels.py namespace psql-cluster all | python3 add_labels.py cluster psql-cluster all | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} --namespace pgvector-operator delete -f -
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create ns pgvector-operator --dry-run=client -o yaml | yq e ".metadata.labels.psql-cluster = \"all\"" | kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f -
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f examples/pgvector-disaster-recovery/schedule-policies.yaml