PostgreSQL Backup and Restore Using AWS S3 or MinIO
This guide explains how to manage PostgreSQL cluster backups and restores using AWS S3 or MinIO. It applies only to the Connect-specific connect-postgresql cluster, and not to the postgres-db-pg-cluster deployed with Foundation.
Prerequisites
- MinIO is deployed and running — see the installation configuration in the Foundation MinIO Operator and Foundation MinIO Tenant.
- Kubernetes secret for S3 access and TLS certificates:
  - If using AWS S3 → create a secret named aws-secret containing the access credentials.
  - If using MinIO → enable the code block below to copy the MinIO credentials, TLS certificates, and CA secrets from the foundation-zero-trust namespace into the foundation-env-default namespace where connect-postgresql is deployed.

secretManager:
  ## Enable SecretManager
  enabled: true
  ## Set external secret manager/provider (only required if secrets are stored in an external provider)
  secretProvider: ""
  ## Secrets. Entries can be defined as yaml or stringified yaml, helm templating is supported.
  ## Map key is used for id on merges and as documentation. To remove existing entries, set key to null.
  ## Each entry configures secret to be managed/copied to Release namespace.
  ## Source secret is defined in .name and .namespace.
  ## Destination secret is defined in .template.
  ## Reference to source secret key must be in .template.value field.
  ## Value field supports templating, managed by the operator (must be escaped to avoid conflicts with helm templates).
  secrets:
    minio-accesskey:
      name: minio1-secret
      namespace: '{{ .Values.global.foundation.zeroTrustNamespace }}'
      template:
        key: accesskey
        name: '{{ include "connect-postgresql.fullname" . }}-minio1-creds'
        namespace: '{{ .Release.Namespace }}'
        value: '{{`{{ index . "accesskey" }}`}}'
    minio-secretkey:
      name: minio1-secret
      namespace: '{{ .Values.global.foundation.zeroTrustNamespace }}'
      template:
        key: secretkey
        name: '{{ include "connect-postgresql.fullname" . }}-minio1-creds'
        namespace: '{{ .Release.Namespace }}'
        value: '{{`{{ index . "secretkey" }}`}}'
    minio-ca:
      name: minio1-tls
      namespace: '{{ .Values.global.foundation.zeroTrustNamespace }}'
      template:
        key: ca.crt
        name: '{{ include "connect-postgresql.fullname" . }}-minio1-tls'
        namespace: '{{ .Release.Namespace }}'
        value: '{{`{{ index . "ca.crt" }}`}}'
    minio-tls-key:
      name: minio1-tls
      namespace: '{{ .Values.global.foundation.zeroTrustNamespace }}'
      template:
        key: tls.key
        name: '{{ include "connect-postgresql.fullname" . }}-minio1-tls'
        namespace: '{{ .Release.Namespace }}'
        value: '{{`{{ index . "tls.key" }}`}}'
    minio-tls-crt:
      name: minio1-tls
      namespace: '{{ .Values.global.foundation.zeroTrustNamespace }}'
      template:
        key: tls.crt
        name: '{{ include "connect-postgresql.fullname" . }}-minio1-tls'
        namespace: '{{ .Release.Namespace }}'
        value: '{{`{{ index . "tls.crt" }}`}}'
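For the AWS S3 case, a minimal way to create the aws-secret is shown below; the key names accesskey and secretkey are assumptions and must match the values you later reference in s3CommonConfig.auth.accessKey and s3CommonConfig.auth.secretKey:

kubectl -n foundation-env-default create secret generic aws-secret \
  --from-literal=accesskey='<AWS_ACCESS_KEY_ID>' \
  --from-literal=secretkey='<AWS_SECRET_ACCESS_KEY>'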
Backup PostgreSQL Data
S3 Common Configuration
The following configuration applies to both backup and restore. All parameters must be set exactly as specified for S3/MinIO connectivity to work correctly.
- s3CommonConfig.endpoint: S3/MinIO endpoint URL.
- s3CommonConfig.region: S3/MinIO region.
- s3CommonConfig.bucket: Target S3/MinIO bucket for backups.
- s3CommonConfig.auth.accessKey: Access key for authentication.
- s3CommonConfig.auth.secretKey: Secret key for authentication.
- s3CommonConfig.externalCa.enabled: Set to true if using a custom CA certificate.
- s3CommonConfig.externalCa.cert: Certificate file name (e.g., tls.crt).
- s3CommonConfig.auth.secretName and s3CommonConfig.externalCa.secretName: Required if you provide your own Kubernetes secret (instead of relying on the SecretManager-generated one).
s3CommonConfig:
  endpoint: "https://minio.foundation-cluster-zerotrust:443" # S3/MinIO URL
  region: "us-east-1" # Region name
  bucket: "foundation-pf" # Bucket name
  auth:
    # secretName: minio1-creds # Kubernetes secret containing the access credentials. Leave this commented out if you are using the SecretManager block as specified in the prerequisites.
    accessKey: accesskey # Access key
    secretKey: secretkey # Secret key
  externalCa:
    enabled: true # Enable custom CA for TLS
    # secretName: minio1-tls # Kubernetes secret containing the TLS certificate. Leave this commented out if you are using the SecretManager block as specified in the prerequisites.
    cert: tls.crt # Certificate file name
S3 Backup Configuration
The following configuration is required to enable backups for a PostgreSQL cluster:
- s3BackupConfig.enabled: Set to true to enable backups.
- s3BackupConfig.walgBackup: Set to true to use WAL-G for backups.
- s3BackupConfig.schedule: The backup schedule as a cron expression (configure per your requirements).
- s3BackupConfig.prefix: The S3 bucket path where backups are stored. This value must be unique for each fresh deployment.
- s3BackupConfig.forceAwsStyle: Set to true when using MinIO.
- s3BackupConfig.retention: The number of days to retain backups (configure per your requirements).
s3BackupConfig:
  enabled: true ## Set to false to disable WAL archiving for this cluster.
  timeoutSeconds: 1800 ## postgresql.conf 'archive_timeout' value: https://www.postgresql.org/docs/current/runtime-config-wal.html
  walgBackup: true ## Use WAL-G for backups.
  schedule: "*/5 * * * *" ## Patroni 'BACKUP_SCHEDULE' value. See: https://github.com/zalando/spilo/blob/2.1-p6/ENVIRONMENT.rst
  prefix: '{{ printf "s3://%s/pg-backup/postgresql/backups/%s/initial" .Values.s3CommonConfig.bucket ( include "connect-postgresql.fullname" . ) }}' ## S3 path to the cluster backup. Use an absolute value or a Helm template. The default Helm template generates the following: 's3://foundation-pf/pg-backup/postgresql/backups/postgres-db-connect-postgresql/initial'
  forceAwsStyle: "true" ## Required only for S3 MinIO
  retention: "5" ## Number of days to retain backups. See: https://github.com/zalando/spilo/issues/1066
Add the S3 common configuration and backup configuration to the values.yaml file, then redeploy the connect-postgresql release to enable backups.
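For example, using the release name and chart reference that appear later in this guide:

helm upgrade -n foundation-env-default connect-postgresql connect-helm/connect-postgresql -f values.yaml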
After applying the backup configuration, verify that backups are stored correctly in S3/MinIO.
- MinIO Console UI: Browse the specified bucket in the S3/MinIO console. You should see both the base backups (in basebackups_005) and WAL segments (in wal_005).
- Optional: Verify via WAL-G commands (inside a PostgreSQL pod):
# List available base backups
kubectl -n foundation-env-default exec -it connect-postgresql-0 -- wal-g backup-list
Defaulted container "postgres" out of: postgres, exporter
INFO: 2025/09/11 18:39:14.114912 List backups from storages: [default]
backup_name modified wal_file_name storage_name
base_000000040000000000000028 2025-09-11T18:22:20Z 000000040000000000000028 default
base_00000004000000000000002A 2025-09-11T18:25:02Z 00000004000000000000002A default
base_00000004000000000000002C 2025-09-11T18:30:02Z 00000004000000000000002C default
base_00000004000000000000002E 2025-09-11T18:35:02Z 00000004000000000000002E default
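You can also list the bucket contents directly with the AWS CLI; the --ca-bundle flag is needed when a custom CA is in use, and the endpoint and path below are assumptions derived from the prefix template above:

aws s3 ls --endpoint-url https://minio.foundation-cluster-zerotrust:443 --ca-bundle ca.crt \
  s3://foundation-pf/pg-backup/postgresql/backups/connect-postgresql/initial/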
Restore PostgreSQL Data
There are two restore options: clone-based restore and in-place restore.
Clone-based Restore Procedure
Clone-based restore creates a new PostgreSQL cluster (a clone) and restores the backed-up data into it. It is deployed as a separate Helm release.
It is recommended to always test restores using a clone before attempting an in-place restore.
There are two clone-based restore modes:

- Clone with WAL-G (Point-In-Time Recovery):
  - Restores from S3/MinIO backups (base backup + WAL files).
  - Does not require the original cluster to be running.
  - Supports Point-In-Time Recovery (PITR) to restore up to a specific timestamp.
- Clone with Basebackup:
  - Streams data directly from the running primary PostgreSQL cluster using pg_basebackup.
  - Requires that the original cluster is up and running.
Example Configuration for Clone-based Restore
You need the following configuration for a clone-based restore:

- s3BackupConfig.enabled: Set to false to avoid pushing backups from the restored clone.
- s3RestoreConfig.enabled: Set to true.
- s3RestoreConfig.walgRestore: Set to true to enable WAL-G based restore.
- s3RestoreConfig.prefix: Defines the exact S3 path to the backup location.
- s3RestoreConfig.forceAwsStyle: Set to true when using MinIO.
- s3RestoreConfig.cluster: Specifies the cluster name whose S3 backup will bootstrap the clone.
- s3CommonConfig: Required; configure it as described in S3 Common Configuration above.
- Optional: s3RestoreConfig.timestamp: Specify the PITR timestamp to perform a Clone with WAL-G (PITR). Omitting this parameter performs a clone from the base backup instead.
File: values.yaml
replicaCount: 1
fullnameOverride: connect-postgresql-clone

s3BackupConfig:
  enabled: false

s3RestoreConfig:
  enabled: true
  walgRestore: true
  prefix: '{{ printf "s3://%s/pg-backup/postgresql/backups/%s/initial" .Values.s3CommonConfig.bucket .Values.s3RestoreConfig.cluster }}'
  forceAwsStyle: "true" ## Required 'true' only for S3 MinIO
  cluster: "connect-postgresql" # Name of the PostgreSQL cluster to clone from
  timestamp: "2025-09-11T18:35:02+00:00" # PITR target time.

s3CommonConfig:
  endpoint: "https://minio.foundation-cluster-zerotrust:443" # S3/MinIO URL
  region: "us-east-1" # Region name
  bucket: "foundation-pf" # Bucket name
  auth:
    secretName: minio1-creds # Kubernetes secret containing credentials
    accessKey: accesskey # Access key
    secretKey: secretkey # Secret key
  externalCa:
    enabled: true # Enable custom CA for TLS
    secretName: minio1-tls # Kubernetes secret containing TLS cert
    cert: tls.crt # Certificate file name
To perform a clone-based restore, deploy a new PostgreSQL cluster with the restore configuration applied, as shown above.
Deploy the clone using Helm:
helm install -n foundation-env-default connect-postgresql-clone connect-helm/connect-postgresql -f values.yaml
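To watch the clone come up, you can filter on the cluster-name label (the same label the in-place procedure below relies on; the value is assumed to match the fullnameOverride):

kubectl -n foundation-env-default get pods -l cluster-name=connect-postgresql-clone -w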
Verify the restored data:

- Once the deployment is complete, a new PostgreSQL cluster connect-postgresql-clone will be created.
- Verify that the restored cluster contains all the data from the backup. For testing purposes, you can create a test database in the original cluster and then confirm its presence and contents in the restored cluster (see the example below).
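A minimal spot check, assuming the clone pod follows the naming seen earlier (connect-postgresql-clone-0) and the default postgres superuser:

kubectl -n foundation-env-default exec -it connect-postgresql-clone-0 -- psql -U postgres -c '\l'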
Disable restore after verification:
Once verification is complete, update your values.yaml to disable the restore via the s3RestoreConfig block and redeploy the Helm chart.
helm upgrade -n foundation-env-default connect-postgresql-clone connect-helm/connect-postgresql -f values.yaml
In-place Restore Procedure
The in-place restore process restores a PostgreSQL cluster to its original state using the same cluster name. This is typically performed after a disaster (e.g., data loss or corruption).
Important: Before performing an in-place restore, first test your recovery using the clone-based restore approach. This allows you to validate the timestamp (PITR) and ensure that the data is restored correctly before overwriting the original cluster.
Example Configuration of In-place Restore
You need the following configuration for an in-place restore:

- s3BackupConfig.enabled: Set to false to avoid pushing backups from the restored cluster.
- s3RestoreConfig.enabled: Set to true.
- s3RestoreConfig.walgRestore: Set to true to enable WAL-G based restore.
- s3RestoreConfig.prefix: Defines the exact S3 path to the backup location.
- s3RestoreConfig.forceAwsStyle: Set to true when using MinIO.
- s3RestoreConfig.cluster: Specifies the cluster name whose S3 backup will bootstrap the restored cluster.
- s3RestoreConfig.timestamp: Specifies the PITR timestamp (the point in time to which you want to restore).
- s3CommonConfig: Required; configure it as described in S3 Common Configuration above.

Below is the configuration required for an in-place restore:
s3BackupConfig:
  enabled: false

s3RestoreConfig:
  enabled: true
  walgRestore: true
  prefix: '{{ printf "s3://%s/pg-backup/postgresql/backups/%s/initial" .Values.s3CommonConfig.bucket .Values.s3RestoreConfig.cluster }}'
  forceAwsStyle: "true"
  cluster: "connect-postgresql"
  timestamp: "2025-09-11T18:35:02+00:00"
Before restore, delete the original cluster:
kubectl delete postgresql connect-postgresql -n foundation-env-default
# or
helm delete connect-postgresql -n foundation-env-default
After deletion, verify that the persistent volumes (PVs) have been removed.
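One way to check, assuming the volumes carry the cluster name (as with the typical pgdata-<pod> PVC naming of the Zalando operator):

kubectl -n foundation-env-default get pvc | grep connect-postgresql
kubectl get pv | grep connect-postgresql

Then, redeploy the chart using the restore configuration: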
s3BackupConfig:
  enabled: false

s3RestoreConfig:
  enabled: true
  walgRestore: true
  prefix: '{{ printf "s3://%s/pg-backup/postgresql/backups/%s/initial" .Values.s3CommonConfig.bucket .Values.s3RestoreConfig.cluster }}'
  forceAwsStyle: "true" ## Required 'true' only for S3 MinIO
  cluster: "connect-postgresql" # Name of the PostgreSQL cluster to restore from
  timestamp: "2025-09-11T18:35:02+00:00" # PITR target time.
Verify the restored data:

After redeployment, the cluster connect-postgresql will be deployed and its data restored from the backup.

- Verify that the restored cluster contains all the data from the backup. For testing purposes, you can create a test database in the original cluster before the backup and then confirm its presence and contents in the restored cluster.

If the cluster is configured with 3 replicas, only one pod will become fully available (2/2) during the restore process, while the remaining pods may remain in a 1/2 state. Perform your verification on the pod that shows 2/2.
Disable restore after verification:
Once verification is complete, update your values.yaml to disable the restore via the s3RestoreConfig block and redeploy the Helm chart.
helm upgrade -n foundation-env-default connect-postgresql connect-helm/connect-postgresql -f values.yaml
kubectl delete pods -n foundation-env-default -l cluster-name=connect-postgresql
After disabling the restore via the s3RestoreConfig block, all pods should eventually reach a 2/2 state, unlike during the restore process, when only one pod is fully running.
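To confirm, filter on the same cluster-name label used by the pod-deletion command above:

kubectl -n foundation-env-default get pods -l cluster-name=connect-postgresql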
Remember to re-enable backups via the s3BackupConfig block in values.yaml if they were disabled during the restore process.
Fetching PITR Timestamp
To perform a PITR, you need the timestamp of the WAL file from which the restore should start. Follow these steps:

From the S3/MinIO Console:

- Connect to your S3/MinIO bucket.
- Navigate to the wal_005 (or equivalent WAL) directory.
- Identify the creation timestamp of the WAL file from which the PITR should start.
- Convert the timestamp to the following format: YYYY-MM-DDTHH:MM:SS±HH:MM.
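If the MinIO client (mc) is configured, the same timestamps can be read from the command line; the alias myminio and the exact path are assumptions derived from the backup prefix used in this guide:

mc ls myminio/foundation-pf/pg-backup/postgresql/backups/connect-postgresql/initial/wal_005/

For example, a WAL file last modified at 2025-09-11T18:35:02Z converts to 2025-09-11T18:35:02+00:00 in the required format.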