
Protect your PostgreSQL data with automated, secure backups to S3 using Ansible – a comprehensive guide for database administrators and DevOps engineers.

Why PostgreSQL Database Backup Automation is Critical

Data is the lifeblood of modern applications, and backing up your PostgreSQL databases is not just a best practice: it is a necessity for business continuity. According to industry reports, companies that suffer major data loss without adequate backup and restore procedures have a 43% chance of never reopening and a 51% chance of closing within two years.

Backing up a PostgreSQL database effectively takes more than occasional manual exports. Automation ensures:

  • Consistency: Regular backups without human intervention
  • Reliability: Reduced risk of human error in the backup process
  • Efficiency: Freeing up valuable engineering time
  • Compliance: Meeting regulatory requirements for data retention

In this comprehensive guide, we’ll implement a robust solution that uses Ansible to automate PostgreSQL backups to S3-compatible storage.

Why S3 for PostgreSQL Backups?

Amazon S3 (and S3-compatible services like Hetzner) provides an ideal destination for your PostgreSQL database backup strategy because of:

  • Durability: 99.999999999% (11 nines) object durability
  • Scalability: Virtually unlimited storage capacity
  • Security: Fine-grained access controls and encryption options
  • Cost-effectiveness: Pay only for what you use with tiered pricing
  • Lifecycle management: Automatically transition objects to cheaper storage classes

Prerequisites

Before implementing our PostgreSQL backup automation, ensure you have:

  1. A control machine with Ansible installed:

sudo apt update

sudo apt install ansible python3-pip

pip3 install boto3 psycopg2-binary

  2. Ansible Vault for secret management (the ansible-vault command ships with the ansible package installed above, so no separate package is needed)

  3. Access to S3-compatible storage (AWS S3 or Hetzner)

  4. SSH access to your PostgreSQL server

Project Structure for PostgreSQL Backup Automation

Organize your Ansible project as follows:

postgres-automation/
├── inventories/
│   └── production/
│       ├── hosts
│       └── group_vars/
│           └── all.yml
├── roles/
│   ├── postgres-install/
│   │   └── tasks/
│   │       └── main.yml
│   ├── postgres-users/
│   │   └── tasks/
│   │       └── main.yml
│   ├── postgres-databases/
│   │   └── tasks/
│   │       └── main.yml
│   ├── postgres-backup/
│   │   ├── tasks/
│   │   │   └── main.yml
│   │   └── templates/
│   │       └── s3_lifecycle.json.j2
│   └── postgres-restore/
│       └── tasks/
│           └── main.yml
├── playbooks/
│   ├── main.yml
│   ├── install_postgres.yml
│   ├── manage_users.yml
│   ├── manage_databases.yml
│   ├── backup.yml
│   └── restore.yml
└── secrets.yml

Setting Up Ansible Vault for Secure PostgreSQL Backup Credentials

Rather than an external secret manager such as HashiCorp Vault, we’ll use Ansible Vault to encrypt our secrets: a crucial step for protecting your PostgreSQL backup credentials.

Create an encrypted secrets file:

ansible-vault create secrets.yml

Add the following content (use your actual values):

# secrets.yml
postgres_root_password: "your_secure_password_here"
s3_access_key: "your_s3_access_key_here"
s3_secret_key: "your_s3_secret_key_here"
# One "<name>_password" entry per user defined in postgres_users
user1_password: "user1_password_here"
user2_password: "user2_password_here"

Ansible Variables for PostgreSQL Backup Configuration

Create inventories/production/group_vars/all.yml with these configuration variables for your PostgreSQL database backup system:

# PostgreSQL Configuration
postgresql:
  listen_address: '*'
  version: 14
  hba_conf_path: /etc/postgresql/14/main/pg_hba.conf
  postgresql_conf_path: /etc/postgresql/14/main/postgresql.conf
  port: 5432
  root_user: "postgres"
  host: "127.0.0.1"

# Database Configuration
databases:
  - name: database1
    encoding: UTF8
    locale: en_US.UTF-8
  - name: database2
    encoding: UTF8
    locale: en_US.UTF-8

# Backup Configuration
database_backup_directory: /backups
database_restore_directory: /tmp/postgres-restore

# List of databases to back up
database_backup:
  - database: database1
    backup_enabled: true
    cron_enabled: true
    cron_time: "0 0 * * *"  # Midnight daily
  - database: database2
    backup_enabled: true
    cron_enabled: true
    cron_time: "0 0 * * *"  # Midnight daily

# PostgreSQL Users Configuration
postgres_users:
  - name: user1
  - name: user2

# Database Access Management
postgres_access:
  - database: database1
    user: user1
    allow_remote_hosts:
      - 127.0.0.1
    db_privileges: "CONNECT"
    schema: public

  - database: database2
    user: user2
    allow_remote_hosts:
      - 127.0.0.1
    db_privileges: "CONNECT"
    schema: public

# Unwanted Users Removal
unwanted_users: []

# S3 Configuration for PostgreSQL backup database storage
s3:
  bucket: "<bucket_name>"
  region: "<region>"
  endpoint_url: "<endpoint_url>"
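The cron_time values above use standard five-field cron syntax (minute, hour, day-of-month, month, weekday), which the backup role later splits into the Ansible cron module’s individual fields. A minimal shell sketch of that split, for illustration only (not part of the playbooks):

```shell
# Split a five-field cron expression the way the cron task does;
# set -f stops the shell from glob-expanding the "*" fields
cron_time="0 0 * * *"
set -f
set -- $cron_time
echo "minute=$1 hour=$2 day=$3 month=$4 weekday=$5"
```

Any schedule you put in cron_time must have exactly five fields, or the split in the cron task will fail.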

Ansible Playbooks for PostgreSQL Backup and Restore

Main Playbook for Complete PostgreSQL Setup

Create playbooks/main.yml to run all tasks:

---
- name: Complete PostgreSQL setup with backup automation
  hosts: all
  become: yes
  vars_files:
    - ../secrets.yml
  roles:
    - role: ../roles/postgres-install
    - role: ../roles/postgres-users
    - role: ../roles/postgres-databases
    - role: ../roles/postgres-backup

PostgreSQL Installation Playbook

Create roles/postgres-install/tasks/main.yml for the initial PostgreSQL setup:

---
- name: Update packages
  apt:
    update_cache: yes

- name: Add PostgreSQL repository key
  apt_key:
    url: https://www.postgresql.org/media/keys/ACCC4CF8.asc
    state: present

- name: Add PostgreSQL APT repository
  apt_repository:
    repo: "deb http://apt.postgresql.org/pub/repos/apt {{ ansible_distribution_release }}-pgdg main"
    state: present

- name: Update apt cache
  apt:
    update_cache: yes

- name: Install PostgreSQL server, client, and Python driver
  apt:
    name:
      - python3-psycopg2
      - postgresql-client
      - "postgresql-{{ postgresql.version }}"
    state: present

- name: Ensure PostgreSQL service is started and enabled
  service:
    name: postgresql
    state: started
    enabled: yes

- name: Configure listen_addresses in postgresql.conf
  lineinfile:
    path: "{{ postgresql.postgresql_conf_path }}"
    regexp: "^#?listen_addresses =.*"
    line: "listen_addresses = '{{ postgresql.listen_address }}'"

- name: Set PostgreSQL root password
  become_user: postgres
  command:
    cmd: "psql -U postgres -c \"ALTER USER postgres PASSWORD '{{ postgres_root_password }}';\""

- name: Configure pg_hba.conf for postgres user (peer to md5)
  lineinfile:
    path: "{{ postgresql.hba_conf_path }}"
    regexp: '^local\s+all\s+postgres\s+peer'
    line: "local   all             postgres                                md5"
    state: present

- name: Configure pg_hba.conf for all users (peer to md5)
  lineinfile:
    path: "{{ postgresql.hba_conf_path }}"
    regexp: '^local\s+all\s+all\s+peer'
    line: "local   all             all                                     md5"
    state: present

- name: Restart PostgreSQL to apply configuration changes
  service:
    name: postgresql
    state: restarted

- name: Create .pgpass file for postgres user
  become_user: postgres
  template:
    src: "pgpass.j2"
    dest: "/var/lib/postgresql/.pgpass"
    owner: postgres
    group: postgres
    mode: '0600'

- name: Ensure PostgreSQL password authentication works
  become_user: postgres
  lineinfile:
    path: "/var/lib/postgresql/.bashrc"
    line: "export PGPASSFILE=/var/lib/postgresql/.pgpass"
    create: yes


Create roles/postgres-install/templates/pgpass.j2 for secure authentication:

{{ postgresql.host }}:{{ postgresql.port }}:*:{{ postgresql.root_user }}:{{ postgres_root_password }}
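For reference, each .pgpass line follows the format hostname:port:database:username:password (with * as a wildcard). A throwaway sketch of what the template above renders, using placeholder values rather than real credentials:

```shell
# Render a .pgpass-style line from placeholder values
host=127.0.0.1
port=5432
user=postgres
pass='example_only'   # placeholder; the real value comes from Ansible Vault
printf '%s:%s:*:%s:%s\n' "$host" "$port" "$user" "$pass"
```

The file must be mode 0600 (as the task sets it) or libpq will silently ignore it.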

User Management Playbook

Create roles/postgres-users/tasks/main.yml to manage database users:

---
- name: Create or update PostgreSQL users
  community.postgresql.postgresql_user:
    name: "{{ item.name }}"
    password: "{{ lookup('vars', item.name + '_password') }}"
    state: present
    login_user: "{{ postgresql.root_user }}"
    login_password: "{{ postgres_root_password }}"
    login_host: "{{ postgresql.host }}"
  loop: "{{ postgres_users }}"
  when: postgres_users is defined

- name: Revoke access and drop unwanted PostgreSQL users
  community.postgresql.postgresql_user:
    name: "{{ item }}"
    state: absent
    login_user: "{{ postgresql.root_user }}"
    login_password: "{{ postgres_root_password }}"
    login_host: "{{ postgresql.host }}"
  loop: "{{ unwanted_users }}"
  when: unwanted_users is defined and unwanted_users | length > 0
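The lookup('vars', item.name + '_password') pattern above means every entry in postgres_users must have a matching <name>_password variable defined in secrets.yml. A trivial sketch of the naming convention (user names here are examples):

```shell
# For each managed user, the users role expects a "<name>_password" vault key
for name in user1 user2; do
  echo "expected vault key: ${name}_password"
done
```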

Database Management Playbook

Create roles/postgres-databases/tasks/main.yml for database creation and management:

---
- name: Create PostgreSQL database
  community.postgresql.postgresql_db:
    name: "{{ item.name }}"
    encoding: "{{ item.encoding | default('UTF8') }}"
    lc_collate: "{{ item.locale | default('en_US.UTF-8') }}"
    lc_ctype: "{{ item.locale | default('en_US.UTF-8') }}"
    state: present
    login_user: "{{ postgresql.root_user }}"
    login_password: "{{ postgres_root_password }}"
    login_host: "{{ postgresql.host }}"
  loop: "{{ databases }}"
  when: databases is defined

- name: Revoke all database privileges from users
  community.postgresql.postgresql_privs:
    db: "{{ item.database }}"
    role: "{{ item.user }}"
    type: database
    privs: ALL
    state: absent
    login_user: "{{ postgresql.root_user }}"
    login_password: "{{ postgres_root_password }}"
    login_host: "{{ postgresql.host }}"
  loop: "{{ postgres_access }}"
  when: postgres_access is defined

- name: Grant database-level privileges
  community.postgresql.postgresql_privs:
    db: "{{ item.database }}"
    role: "{{ item.user }}"
    type: database
    privs: "{{ item.db_privileges }}"
    state: present
    login_user: "{{ postgresql.root_user }}"
    login_password: "{{ postgres_root_password }}"
    login_host: "{{ postgresql.host }}"
  loop: "{{ postgres_access }}"
  when: postgres_access is defined and item.db_privileges is defined and item.db_privileges != ''

- name: Configure pg_hba.conf for remote connections
  lineinfile:
    path: "{{ postgresql.hba_conf_path }}"
    insertafter: EOF
    line: "host    {{ item.0.database }}   {{ item.0.user }}   {{ item.1 }}/32   md5"
    state: present
  loop: "{{ postgres_access | subelements('allow_remote_hosts') }}"

- name: Restart PostgreSQL to apply changes
  service:
    name: postgresql
    state: restarted
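Each appended pg_hba.conf entry takes the form host database user address method; the /32 mask limits the rule to a single client IP. A sketch of the rendered line with example values:

```shell
# Build the pg_hba.conf host line the task above appends (example values)
db=database1
user=user1
ip=127.0.0.1
printf 'host    %s   %s   %s/32   md5\n' "$db" "$user" "$ip"
```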

Backup Automation Playbook – The Core of PostgreSQL Database Backup

Create roles/postgres-backup/tasks/main.yml – this is where we implement the actual PostgreSQL backup database automation:

---

- name: Ensure backup directory exists
  file:
    path: "{{ database_backup_directory }}"
    state: directory
    mode: '0755'
    owner: postgres
    group: postgres

- name: Install packages required for compression and S3 upload
  package:
    name:
      - pigz  # Parallel compression for faster backups
      - unzip
      - curl
    state: present

- name: Download AWS CLI v2 for S3 operations
  get_url:
    url: https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
    dest: /tmp/awscliv2.zip
    mode: '0644'

- name: Unzip AWS CLI v2
  unarchive:
    src: /tmp/awscliv2.zip
    dest: /tmp/
    remote_src: yes

- name: Install AWS CLI v2
  command: /tmp/aws/install --update
  args:
    creates: /usr/local/bin/aws

- name: Create AWS credentials directory
  file:
    path: /var/lib/postgresql/.aws
    state: directory
    owner: postgres
    group: postgres
    mode: '0700'

- name: Configure AWS CLI credentials for postgres user
  copy:
    dest: /var/lib/postgresql/.aws/credentials
    content: |
      [default]
      aws_access_key_id = {{ s3_access_key }}
      aws_secret_access_key = {{ s3_secret_key }}
    owner: postgres
    group: postgres
    mode: '0600'

- name: Configure AWS CLI config for postgres user
  copy:
    dest: /var/lib/postgresql/.aws/config
    content: |
      [default]
      region = {{ s3.region }}
    owner: postgres
    group: postgres
    mode: '0600'

- name: Create .pgpass file if not present
  copy:
    dest: /var/lib/postgresql/.pgpass
    content: |
      {{ postgresql.host }}:*:*:{{ postgresql.root_user }}:{{ postgres_root_password }}
    owner: postgres
    group: postgres
    mode: '0600'
    force: no

- name: Create backup log file
  file:
    path: /var/log/postgres_backups.log
    state: touch
    owner: postgres
    group: postgres
    mode: '0644'

- name: Perform PostgreSQL backup
  become: true
  become_user: postgres
  vars:
    pg_version: "{{ postgresql.version | default('14') }}"
    pg_bin: "/usr/lib/postgresql/{{ pg_version }}/bin"
  shell: |
    BACKUP_FILE="{{ database_backup_directory }}/{{ item.database }}-{{ ansible_date_time.iso8601 }}.backup.gz"
    VERSION_FILE="${BACKUP_FILE%.gz}.version"
    # Record the pg_dump version so a matching pg_restore can be chosen later
    {{ pg_bin }}/pg_dump --version > "$VERSION_FILE"
    # Create a custom-format dump, compressed in parallel with pigz
    {{ pg_bin }}/pg_dump -U {{ postgresql.root_user }}{% if postgresql.host|default('')|length > 0 %} -h {{ postgresql.host }}{% endif %} -d {{ item.database }} \
      -F custom -Z 0 --create --clean --if-exists \
      | pigz -p {{ ansible_processor_vcpus }} > "$BACKUP_FILE"
    echo "$BACKUP_FILE"
  environment:
    PGPASSWORD: "{{ postgres_root_password }}"
  register: backup_result
  when: item.backup_enabled | default(true)
  loop: "{{ database_backup }}"
  loop_control:
    label: "{{ item.database }}"

- name: Upload backup to S3
  become: true
  become_user: postgres
  shell: |
    aws s3 cp "{{ database_backup_directory }}/{{ item.database }}-{{ ansible_date_time.iso8601 }}.backup.gz" \
      s3://{{ s3.bucket }}/postgres-backups/{{ item.database }}/daily/{{ ansible_date_time.date }}/{{ item.database }}-{{ ansible_date_time.iso8601 }}.backup.gz \
      {% if s3.endpoint_url is defined %}--endpoint-url {{ s3.endpoint_url }}{% endif %} \
      --region {{ s3.region }}
  when: item.backup_enabled | default(true)
  loop: "{{ database_backup }}"
  loop_control:
    label: "{{ item.database }}"

- name: Remove old local backups (older than 3 days)
  become: true
  become_user: postgres
  find:
    paths: "{{ database_backup_directory }}"
    patterns: "*.backup.gz"
    age: 3d
  register: old_backups

- name: Delete old local backups
  become: true
  become_user: postgres
  file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ old_backups.files }}"
  when: old_backups.matched > 0

- name: Create backup script for cron
  copy:
    dest: "/usr/local/bin/db_backup_{{ item.database }}.sh"
    mode: '0755'
    owner: postgres
    group: postgres
    content: |
      #!/bin/bash
      set -e
      PG_BIN_PATH="/usr/lib/postgresql/{{ postgresql.version }}/bin"
      LOG_FILE="{{ database_backup_directory }}/{{ item.database }}-backup.log"
      exec >> "$LOG_FILE" 2>&1
      echo "=== Backup started at $(date) ==="
      PGPASSFILE='/var/lib/postgresql/.pgpass'
      DATE="$(date +%Y-%m-%d)"
      TIMESTAMP="$(date +%Y-%m-%dT%H:%M:%SZ)"
      YEAR=$(date +%Y)
      MONTH=$(date +%m)
      DAY=$(date +%d)
      BACKUP_FILE="{{ database_backup_directory }}/{{ item.database }}-${TIMESTAMP}.backup.gz"
      # Create a compressed custom-format dump with pg_dump and pigz
      "$PG_BIN_PATH"/pg_dump -U {{ postgresql.root_user }}{% if postgresql.host|default('')|length > 0 %} -h {{ postgresql.host }}{% endif %} -d {{ item.database }} \
       -F custom -Z 0 --create --clean --if-exists \
        | pigz -p {{ ansible_processor_vcpus }} > "$BACKUP_FILE"
      # Upload daily backup
      aws s3 cp "$BACKUP_FILE" \
        s3://{{ s3.bucket }}/postgres-backups/{{ item.database }}/daily/${DATE}/ \
        {% if s3.endpoint_url is defined %}--endpoint-url {{ s3.endpoint_url }}{% endif %} --region {{ s3.region }}
      # If it's the 1st of the month -> monthly backup
      if [ "$DAY" == "01" ]; then
        aws s3 cp "$BACKUP_FILE" \
          s3://{{ s3.bucket }}/postgres-backups/{{ item.database }}/monthly/${YEAR}-${MONTH}/ \
          {% if s3.endpoint_url is defined %}--endpoint-url {{ s3.endpoint_url }}{% endif %} --region {{ s3.region }}
      fi
      # If it's the 1st of Jan/Apr/Jul/Oct -> quarterly backup
      if [[ "$MONTH" =~ ^(01|04|07|10)$ && "$DAY" == "01" ]]; then
        aws s3 cp "$BACKUP_FILE" \
          s3://{{ s3.bucket }}/postgres-backups/{{ item.database }}/quarterly/${YEAR}-${MONTH}/ \
          {% if s3.endpoint_url is defined %}--endpoint-url {{ s3.endpoint_url }}{% endif %} --region {{ s3.region }}
      fi
      # Cleanup old local backups (keep 3 days)
      find {{ database_backup_directory }} -type f -name "*.backup.gz" -mtime +3 -delete
      echo "=== Backup finished at $(date) ==="
  loop: "{{ database_backup }}"
  loop_control:
    label: "{{ item.database }}"

- name: Configure backup cron job
  cron:
    name: "PostgreSQL Backup {{ item.database }}"
    minute: "{{ item.cron_time.split()[0] }}"
    hour: "{{ item.cron_time.split()[1] }}"
    day: "{{ item.cron_time.split()[2] }}"
    month: "{{ item.cron_time.split()[3] }}"
    weekday: "{{ item.cron_time.split()[4] }}"
    job: "/usr/local/bin/db_backup_{{ item.database }}.sh"
    user: postgres
    state: "{{ 'present' if item.cron_enabled else 'absent' }}"
  loop: "{{ database_backup }}"
  loop_control:
    label: "{{ item.database }}"

- name: Create lifecycle policy JSON for S3
  template:
    src: "s3_lifecycle.json.j2"
    dest: /tmp/s3_lifecycle.json
    owner: postgres
    group: postgres
    mode: '0644'

- name: Apply lifecycle policy to S3 bucket
  become: true
  become_user: postgres
  shell: |
    aws s3api put-bucket-lifecycle-configuration \
      --bucket {{ s3.bucket }} \
      {% if s3.endpoint_url is defined %}--endpoint-url {{ s3.endpoint_url }}{% endif %} \
      --region {{ s3.region }} \
      --lifecycle-configuration file:///tmp/s3_lifecycle.json
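A backup isn’t useful unless the archive is readable, so it is worth checking that each file decompresses cleanly after a run. A self-contained sketch using gzip -t (pigz produces ordinary gzip streams; the stand-in file below takes the place of a real path under /backups):

```shell
# Create a stand-in compressed "backup" and verify its integrity;
# in practice, point this at a real *.backup.gz under /backups
printf 'dummy dump' | gzip > /tmp/demo.backup.gz
gzip -t /tmp/demo.backup.gz && echo "archive OK"
```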




Create roles/postgres-backup/templates/s3_lifecycle.json.j2 for S3 lifecycle management:

{
  "Rules": [
    {% for item in database_backup %}
    {
      "ID": "DailyRetention-{{ item.database }}",
      "Status": "Enabled",
      "Filter": { "Prefix": "postgres-backups/{{ item.database }}/daily/" },
      "Expiration": { "Days": 35 }
    },
    {
      "ID": "MonthlyRetention-{{ item.database }}",
      "Status": "Enabled",
      "Filter": { "Prefix": "postgres-backups/{{ item.database }}/monthly/" },
      "Expiration": { "Days": 365 }
    },
    {
      "ID": "QuarterlyRetention-{{ item.database }}",
      "Status": "Enabled",
      "Filter": { "Prefix": "postgres-backups/{{ item.database }}/quarterly/" },
      "Expiration": { "Days": 1095 }
    }{% if not loop.last %},{% endif %}
    {% endfor %}
  ]
}
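Jinja loops like the one above can easily emit stray commas or whitespace, so validating the rendered JSON before the playbook applies it is cheap insurance. A sketch using python3 -m json.tool against a minimal stand-in document (on the server you would point it at /tmp/s3_lifecycle.json):

```shell
# Write a minimal stand-in lifecycle document and check it parses as JSON
cat > /tmp/s3_lifecycle_demo.json <<'EOF'
{"Rules": [{"ID": "DailyRetention-database1", "Status": "Enabled",
 "Filter": {"Prefix": "postgres-backups/database1/daily/"},
 "Expiration": {"Days": 35}}]}
EOF
python3 -m json.tool /tmp/s3_lifecycle_demo.json > /dev/null && echo "lifecycle JSON valid"
```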

Restoration Playbook – How to Restore PostgreSQL Database

Create roles/postgres-restore/tasks/main.yml for database restoration:

---
- name: Ensure restore directory exists
  file:
    path: "{{ database_restore_directory }}"
    state: directory
    owner: postgres
    group: postgres
    mode: '0750'

- name: Install packages required for restore
  package:
    name:
      - pigz
      - unzip
      - curl
    state: present

- name: Download AWS CLI v2
  get_url:
    url: https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
    dest: /tmp/awscliv2.zip
    mode: '0644'

- name: Unzip AWS CLI v2
  unarchive:
    src: /tmp/awscliv2.zip
    dest: /tmp/
    remote_src: yes

- name: Install AWS CLI v2
  command: /tmp/aws/install --update
  args:
    creates: /usr/local/bin/aws

- name: Create AWS credentials directory
  file:
    path: /var/lib/postgresql/.aws
    state: directory
    owner: postgres
    group: postgres
    mode: '0700'

- name: Configure AWS CLI credentials for postgres user
  copy:
    dest: /var/lib/postgresql/.aws/credentials
    content: |
      [default]
      aws_access_key_id = {{ s3_access_key }}
      aws_secret_access_key = {{ s3_secret_key }}
    owner: postgres
    group: postgres
    mode: '0600'

- name: Configure AWS CLI config for postgres user
  copy:
    dest: /var/lib/postgresql/.aws/config
    content: |
      [default]
      region = {{ s3.region }}
    owner: postgres
    group: postgres
    mode: '0600'

- name: Create .pgpass file if not present
  copy:
    dest: /var/lib/postgresql/.pgpass
    content: |
      {{ postgresql.host }}:*:*:{{ postgresql.root_user }}:{{ postgres_root_password }}
    owner: postgres
    group: postgres
    mode: '0600'
    force: no

- name: Download backup from S3
  become: true
  become_user: postgres
  command: >
    aws s3 cp
    s3://{{ s3.bucket }}/{{ item.database_path }}/{{ item.restore_type }}/{{ item.date }}/{{ item.restore_file }}
    {{ database_restore_directory }}/{{ item.restore_file }}
    {% if s3.endpoint_url is defined %}--endpoint-url {{ s3.endpoint_url }}{% endif %}
    --region {{ s3.region }}
  loop: "{{ database_restore }}"
  loop_control:
    label: "{{ item.database }}"
  when: item.restore_backup | default(false)

- name: Terminate existing connections before restore
  become: true
  become_user: postgres
  shell: |
    psql -U {{ postgresql.root_user }} -h {{ postgresql.host }} -d postgres -c "
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE datname = '{{ item.database }}'
    AND pid <> pg_backend_pid();"
  environment:
    PGPASSWORD: "{{ postgres_root_password }}"
  loop: "{{ database_restore }}"
  loop_control:
    label: "{{ item.database }}"
  when: item.restore_backup | default(false)

- name: Restore PostgreSQL database from custom-format dump
  become: true
  become_user: postgres
  vars:
    pg_version: "{{ postgresql.version | default('14') }}"
    pg_bin_path: "/usr/lib/postgresql/{{ pg_version }}/bin"
  shell: |
    pigz -dc {{ database_restore_directory }}/{{ item.restore_file }} | \
      {{ pg_bin_path }}/pg_restore \
        -U {{ postgresql.root_user }} \
        -h {{ postgresql.host }} \
        -d {{ item.database }} \
        --clean --if-exists --no-comments --disable-triggers --no-owner --no-acl
  environment:
    PGPASSWORD: "{{ postgres_root_password }}"
  loop: "{{ database_restore }}"
  loop_control:
    label: "{{ item.database }}"
  when: item.restore_backup | default(false)

- name: Remove downloaded backup file after restoration
  file:
    path: "{{ database_restore_directory }}/{{ item.restore_file }}"
    state: absent
  loop: "{{ database_restore }}"
  loop_control:
    label: "{{ item.database }}"
  when: item.restore_backup | default(false)

- name: Change ownership of all tables and sequences to app user
  community.postgresql.postgresql_query:
    db: "{{ item.database }}"
    query: |
      DO
      $$
      DECLARE r RECORD;
      BEGIN
        FOR r IN SELECT tablename FROM pg_tables WHERE schemaname='public'
        LOOP
          EXECUTE 'ALTER TABLE public.' || quote_ident(r.tablename) || ' OWNER TO {{ item.user }}';
        END LOOP;
        FOR r IN SELECT sequencename FROM pg_sequences WHERE schemaname='public'
        LOOP
          EXECUTE 'ALTER SEQUENCE public.' || quote_ident(r.sequencename) || ' OWNER TO {{ item.user }}';
        END LOOP;
      END;
      $$;
    login_user: "{{ postgresql.root_user }}"
    login_password: "{{ postgres_root_password }}"
    login_host: "{{ postgresql.host }}"
  loop: "{{ postgres_access }}"
  loop_control:
    label: "{{ item.database }}"
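The backup tasks write a .version file alongside each dump precisely so the restore side can pick a matching pg_restore. A small sketch of reading the recorded version (the path and version string here are stand-ins):

```shell
# Read the pg_dump version recorded at backup time (stand-in file and value)
printf 'pg_dump (PostgreSQL) 14.11\n' > /tmp/demo.version
backup_ver=$(awk '{print $NF}' /tmp/demo.version)
echo "backup was taken with pg_dump $backup_ver"
```

Comparing this against the local pg_restore --version before restoring avoids surprises from archive-format changes between major releases.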

Running the PostgreSQL Backup and Restore Playbooks

Setting Up the Inventory

Create an inventory file inventories/production/hosts:

[postgres_servers]
your-postgres-server ansible_host=your.server.ip.address ansible_user=your_ssh_user
[postgres_servers:vars]
ansible_python_interpreter=/usr/bin/python3

Executing the Playbooks

  1. Run the complete setup:

ansible-playbook -i inventories/production/hosts playbooks/main.yml --ask-vault-pass

  2. Run just the backup (after initial setup):

ansible-playbook -i inventories/production/hosts playbooks/backup.yml --ask-vault-pass

  3. Run a restoration when you need to recover a database from a backup:

ansible-playbook -i inventories/production/hosts playbooks/restore.yml --ask-vault-pass

Key Features of Our PostgreSQL Backup Automation Solution

1. Tiered Backup Retention Strategy

  • Daily backups: Kept for 35 days
  • Monthly backups: Kept for 1 year
  • Quarterly backups: Kept for 3 years

This tiered approach balances storage costs against recovery options.
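As a rough illustration of what the tiers mean in copies per database (assuming one backup file per run; actual sizes are workload-dependent):

```shell
# Count the backup copies retained per database under the tiered policy:
# 35 dailies, 12 monthlies (1 year), 12 quarterlies (3 years x 4 per year)
daily=35
monthly=12
quarterly=12
echo "copies retained per database: $((daily + monthly + quarterly))"
```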

2. Efficient Compression

Uses pigz (parallel gzip) for faster compression, which matters most when backing up large databases.

3. Enhanced Security

  • All secrets encrypted with Ansible Vault
  • Minimal privileges for database users
  • Secure S3 access with IAM-style credentials

4. Flexibility and Compatibility

  • Works with AWS S3 and S3-compatible services like Hetzner
  • Configurable backup schedules per database
  • Customizable retention policies

5. Comprehensive PostgreSQL Database Backup and Restore

  • Supports both full database backups and selective restoration
  • Includes proper connection handling during restoration
  • Maintains proper ownership and permissions after restoring

Best Practices for PostgreSQL Database Backup

  • Test Your Backups Regularly: A backup isn’t valid until you’ve successfully restored from it
  • Monitor Backup Operations: Implement logging and alerting for backup failures
  • Secure Your Backup Files: Encrypt backups both in transit and at rest
  • Follow the 3-2-1 Rule: 3 copies, 2 different media, 1 offsite
  • Document Your Procedures: Ensure multiple team members understand the backup and restore process

Troubleshooting Common PostgreSQL Backup Issues

  • Permission Errors: Ensure the postgres user has proper access rights
  • Insufficient Disk Space: Monitor your backup directory and S3 usage
  • Network Issues: Check connectivity to S3 endpoints
  • Version Mismatches: Keep your PostgreSQL and pg_dump versions consistent
  • Authentication Problems: Verify your .pgpass file and PostgreSQL authentication settings

Conclusion

Implementing a robust PostgreSQL backup strategy is essential for any organization that relies on PostgreSQL. This Ansible-based solution provides a comprehensive, automated approach to backup and restore with enterprise-grade features.

By leveraging this solution, you ensure:

  • Business continuity with reliable backups
  • Operational efficiency through automation
  • Cost optimization with tiered storage strategies
  • Security compliance with encrypted backups and proper access controls

Start automating your PostgreSQL backups today and sleep better knowing your data is protected!
