
Automated Database Backups for Strapi v5: AWS S3 Setup

Part 4 of "Building a Complete Deployment Environment for Strapi v5: A Practical Series"


Series Navigation:

New to the series? Each article works standalone, but if you haven't set up your DigitalOcean deployment yet, start with Parts 1-3.


Your Strapi backend is running smoothly, you've got HTTPS working, and everything looks professional. But there's one critical piece missing: what happens when something goes wrong?

Database corruption, accidental deletions, botched deployments, server failures: these scenarios happen more often than you'd think. Without proper backups, you're looking at hours of manual recovery work, or potentially losing data entirely.

In this article, we're setting up automated daily backups to AWS S3. The entire system costs about $0.001/month (less than a penny) for a small database, and provides a reliable backup strategy that's appropriate for staging environments and early-stage production. If you need more frequent backups, you can easily adjust the schedule to run every few hours or even hourly without changing the core setup.

Let's build the safety net.


Why Bother with Backups?

You might be thinking, "It's just a staging environment, do I really need backups?"

Here's what can go wrong without them:

  • Deploy a bad migration that corrupts your data

  • Accidentally drop the wrong table while testing

  • Server crashes and Docker volume gets corrupted

  • Need to test a restore procedure (you DO test your restores, right?)

  • Want to roll back to yesterday's data after finding a bug

Without backups, you're rebuilding everything from scratch. With backups, you're back online in 5 minutes.

Plus, setting this up now means you understand backup procedures before moving to production. Trust me, you don't want to learn this stuff during an actual emergency.


What We're Building

Here's what the complete backup system includes:

Automated Daily Backups:

  • Runs automatically at 2:00 AM every day via cron

  • Creates PostgreSQL database dump

  • Compresses the backup (typically 80-90% size reduction)

  • Uploads to AWS S3 with intelligent storage classes

  • Verifies backup integrity

  • Cleans up old local backups (7-day retention)

Smart Storage Management:

  • First 30 days: S3 Standard-IA (Infrequent Access)

  • After 30 days: Automatically moves to Glacier

  • After 120 days: Automatically deletes

  • Keeps 7 days of local backups for quick restores

Reliable Restore Process:

  • Can restore from either local or S3 backups

  • Handles database constraints properly (this was tricky to get right)

  • Creates safety backup before restoring

  • Verifies restoration worked

The Cost: For a typical small Strapi database (5-50MB):

  • Storage: ~$0.0005/month

  • Requests: ~$0.0005/month

  • Total: About $0.001/month (one-tenth of a penny)

Even if your database grows to 1GB, you're still under $0.10/month. This is essentially free disaster recovery.


Prerequisites

Before we start, make sure you have:

  • Parts 1-3 completed (Strapi running on DigitalOcean with PostgreSQL)

  • AWS account (free tier covers this easily)

  • SSH access to your droplet

  • Basic terminal skills

  • About 60-90 minutes for complete setup and testing

Don't have an AWS account yet? You'll need one for this. AWS offers a free tier that covers far more than we'll use. Sign up at aws.amazon.com.


Understanding the Backup Strategy

Before diving into setup, let's talk about what makes a good backup strategy.

The 3-2-1 Rule

The gold standard in backups is the 3-2-1 rule:

  • 3 copies of your data (original + 2 backups)

  • 2 different storage types (local disk + cloud)

  • 1 offsite copy (S3 in a different location than your droplet)

That's exactly what we're building here.

Why S3 Storage Classes Matter

AWS S3 has different storage tiers with different costs:

S3 Standard ($0.023/GB/month):

  • For frequently accessed data

  • Instant retrieval

  • Most expensive

S3 Standard-IA ($0.0125/GB/month):

  • For infrequent access (perfect for backups)

  • Instant retrieval

  • ~50% cheaper than Standard

S3 Glacier ($0.004/GB/month):

  • For archival storage

  • Takes 1-5 minutes to retrieve

  • ~80% cheaper than Standard

Our Strategy:

  • Store new backups in Standard-IA (instant access if needed)

  • After 30 days, move to Glacier (cheaper, rarely need old backups instantly)

  • After 120 days, delete (4 months of history is plenty for staging)

This gives you quick access to recent backups while keeping costs minimal for older ones.

Why 30 days before Glacier? Standard-IA carries a 30-day minimum storage charge, so transitioning to Glacier earlier means paying the remainder of that minimum anyway - it actually costs more, not less.
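If you want to sanity-check the cost claims yourself, here's a back-of-envelope sketch using the per-GB prices above. The 10 MB compressed backup size is an assumption; plug in your own:

```shell
# Steady-state monthly storage cost for daily 10 MB compressed backups
# under this lifecycle: newest 30 in Standard-IA, days 31-120 in Glacier.
awk 'BEGIN {
  gb      = 10 / 1024              # one backup, in GB
  ia      = 30 * gb * 0.0125       # 30 backups at Standard-IA pricing
  glacier = 90 * gb * 0.004        # 90 backups at Glacier pricing
  printf "Standard-IA: $%.4f  Glacier: $%.4f  Total: $%.4f/month\n", ia, glacier, ia + glacier
}'
```

Even with all 120 backups retained at once, the total lands well under a cent per month.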


Step 1: AWS Account Setup

Let's get your AWS account configured properly.

Create IAM Policy First

Before creating the user, we need to create a custom policy that defines exactly what permissions our backup user will have.

Steps:

  1. AWS Console → IAM → Policies → Create Policy

  2. Switch to JSON tab and paste this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket",
                "s3:PutObjectAcl",
                "s3:GetBucketLocation",
                "s3:PutLifecycleConfiguration",
                "s3:GetLifecycleConfiguration"
            ],
            "Resource": [
                "arn:aws:s3:::your-backup-bucket-*",
                "arn:aws:s3:::your-backup-bucket-*/*"
            ]
        }
    ]
}
  3. Click "Next"

  4. Policy name: StrapiBackupPolicy

  5. Description: "Allows backup operations to S3 buckets with lifecycle management"

  6. Click "Create policy"

Important: Notice the PutLifecycleConfiguration and GetLifecycleConfiguration permissions. These are needed for the lifecycle policy we'll set up later. Missing these permissions is a common gotcha.

Create IAM User

Now let's create the user and attach our policy.

Steps:

  1. AWS Console → IAM → Users → Create User

  2. User details:

    • Username: strapi-backup-user

    • Leave "Provide user access to the AWS Management Console" UNCHECKED

    • We only need programmatic access (API keys), not console access

  3. Set permissions:

    • Select: "Attach policies directly"

    • Search for: StrapiBackupPolicy

    • Check the box next to your policy

  4. Review and create user

  5. Create access key:

    • After user creation, click on the user

    • Go to "Security credentials" tab

    • Click "Create access key"

    • Choose: "Command Line Interface (CLI)"

    • Check the confirmation box

    • Click "Create access key"

  6. Save the credentials!

    • Access Key ID

    • Secret Access Key

Copy these somewhere safe. You won't see the secret key again.

Don't commit these keys to Git. Don't paste them in Slack. Don't email them. These are like passwords.

Why this approach?

We're attaching the policy directly to the user rather than creating a group. For a single backup user in a staging environment, this is simpler and perfectly appropriate. If you're setting up multiple users with similar permissions, AWS recommends using groups, but that's overkill for our use case.


Step 2: Create S3 Bucket

Now let's create the actual storage bucket for our backups.

Create the Bucket

  1. S3 → Create bucket

  2. Bucket settings:

Name: your-app-backups-staging-YYYYMMDD
Region: Choose one close to your users

About bucket names: They must be globally unique across all AWS accounts. That's why I suggest adding a date - strapi-backups-20251208 will probably be available even if strapi-backups isn't.

About regions: Pick a region close to your users (or your DigitalOcean droplet). This reduces latency and costs for data transfer. If you're in Europe, use eu-west-1. In US, use us-east-1 or us-west-2. In Asia, use ap-south-1 or ap-southeast-1.

  3. Object Ownership:

    • Select: ACLs disabled (recommended)

    • Bucket owner enforced

  4. Block Public Access:

    • Enable all four checkboxes

    • We definitely don't want public backups

  5. Versioning:

    • Enable versioning

    • This protects against accidental overwrites

Note: Versioning doesn't cost extra for our setup since each backup has a unique timestamp-based filename. It's just a safety net in case you manually overwrite something.

  6. Encryption:

    • Enable

    • Encryption type: SSE-S3 (Amazon S3 managed keys)

    • Bucket Key: Enable

Note: Bucket Key is an optimization for SSE-KMS encryption. Since we're using SSE-S3, this setting doesn't affect us, but it's fine to leave enabled.

  7. Object Lock:

    • Disable (not needed for our use case)

Click "Create bucket."

Why These Settings?

Let me explain what we just configured:

ACLs disabled: Modern S3 best practice. We control access through IAM policies, not bucket ACLs.

Block public access: Your database backups should never be publicly accessible. Ever.

Versioning: If you accidentally overwrite a backup, S3 keeps the old version. Extra safety.

Encryption: Your backups are encrypted at rest. If someone physically steals AWS's hard drives (unlikely, but still), they can't read your data.
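If you prefer the CLI over the console, the same bucket setup can be sketched with aws s3api. The bucket name and region below are placeholders, and these commands assume you've already run aws configure (covered in Step 3):

```shell
# Placeholders -- substitute your own bucket name and region
BUCKET="your-app-backups-staging-20251208"
REGION="us-east-1"

# Create the bucket (outside us-east-1, also pass:
#   --create-bucket-configuration LocationConstraint=$REGION)
aws s3api create-bucket --bucket "$BUCKET" --region "$REGION"

# Enforce bucket-owner ownership (ACLs disabled)
aws s3api put-bucket-ownership-controls --bucket "$BUCKET" \
    --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'

# Block all public access
aws s3api put-public-access-block --bucket "$BUCKET" \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Enable versioning
aws s3api put-bucket-versioning --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled

# Default SSE-S3 encryption with Bucket Key enabled
aws s3api put-bucket-encryption --bucket "$BUCKET" \
    --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"},"BucketKeyEnabled":true}]}'
```

The console walkthrough above is the easier path the first time; the CLI version is mainly useful if you later script this for multiple environments.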


Step 3: Install AWS CLI on Your Droplet

Now let's get your server set up to talk to AWS.

Connect to Your Droplet

ssh root@YOUR_DROPLET_IP

# Switch to deploy user
su - deploy
cd /opt/strapi-backend

Note: We're connecting as root first because we set up SSH keys for root in Part 2. If you want direct SSH access as the deploy user, you can add your SSH key to /home/deploy/.ssh/authorized_keys.

We'll do all the backup setup as the deploy user, not root.

Check Your Architecture

This step is important and easy to miss. The AWS CLI download is architecture-specific.

uname -m

You'll see either:

  • x86_64 - Standard Intel/AMD processors (most common)

  • aarch64 - ARM64 processors (some DigitalOcean droplets)

Install AWS CLI

For x86_64 (most common):

# Download installer
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

# Install unzip if needed
sudo apt install unzip -y

# Unzip
unzip awscliv2.zip

# Install (use --update flag if AWS CLI already exists)
sudo ./aws/install --update

# Verify installation
aws --version

For aarch64 (ARM64):

# Download ARM version
curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip"

# Rest is the same
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install --update
aws --version

You should see something like:

aws-cli/2.x.x Python/3.x.x Linux/x.x.x

Why the --update flag? Some DigitalOcean droplets come with an older AWS CLI version pre-installed. The --update flag safely replaces it with the latest version, or installs fresh if nothing exists. Without this flag, the installer fails if a previous version is detected.

Configure AWS Credentials

Now let's connect your server to your AWS account:

aws configure

You'll be prompted for:

AWS Access Key ID: [paste your key from Step 1]
AWS Secret Access Key: [paste your secret key from Step 1]
Default region name: [your bucket region, e.g., us-east-1]
Default output format: json

Test the connection:

aws s3 ls

You should see your bucket listed. If you get an error, double-check your access keys and region.
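Behind the scenes, aws configure just writes two small INI files in your home directory, which is handy to know when debugging credential problems. The key values below are placeholders:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-1
output = json
```

If aws s3 ls fails with an auth error, check these files first - a stray space or a pasted newline in the secret key is the usual culprit.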


Step 4: Create the Backup Script

Now for the main event - the actual backup automation.

Create Backup Directory

# As deploy user
mkdir -p /opt/strapi-backend/backups
chmod 755 /opt/strapi-backend/backups

Note: If you used a different directory for your Strapi project (like /var/www/strapi-backend or /home/deploy/strapi), adjust this path accordingly. Just make sure to use the same path consistently throughout the backup script.

This is where we'll store local copies of backups for 7 days.

Create the Backup Script

nano /opt/strapi-backend/backup-script.sh

Paste this complete script:

#!/bin/bash
# =============================================================================
# Daily PostgreSQL Backup Script for Strapi
# Automated backups to AWS S3 with intelligent storage management
# =============================================================================

# Configuration - UPDATE THESE VALUES TO MATCH YOUR SETUP
BACKUP_DIR="/opt/strapi-backend/backups"
S3_BUCKET="your-backup-bucket-name"  # UPDATE THIS!
COMPOSE_FILE="/opt/strapi-backend/docker-compose.stg.yml"
ENV_FILE="/opt/strapi-backend/.env.stg"
DATABASE_NAME="strapi_staging"  # UPDATE THIS!
DATABASE_USER="postgres"
DATABASE_CONTAINER="strapi-db"  # UPDATE THIS!
LOCAL_RETENTION_DAYS=7

# Create timestamp
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
BACKUP_FILE="strapi_backup_${TIMESTAMP}.sql"
BACKUP_PATH="${BACKUP_DIR}/${BACKUP_FILE}"

# Function to log messages
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a ${BACKUP_DIR}/backup.log
}

# Function to send notifications
send_notification() {
    log "$1: $2"
    # TODO: Add Slack/Discord notifications here if needed
}

log "Starting daily backup process..."

# Check if database container is running
if ! docker compose -f $COMPOSE_FILE ps $DATABASE_CONTAINER | grep -q "Up"; then
    log "ERROR: Database container is not running"
    send_notification "BACKUP FAILED" "Database container not running"
    exit 1
fi

# Create database backup
log "Creating database backup..."
if docker compose -f $COMPOSE_FILE --env-file $ENV_FILE exec -T $DATABASE_CONTAINER \
    pg_dump -U $DATABASE_USER -d $DATABASE_NAME > $BACKUP_PATH; then
    log "Database backup created: $BACKUP_FILE"
else
    log "ERROR: Failed to create database backup"
    send_notification "BACKUP FAILED" "pg_dump command failed"
    exit 1
fi

# Check if backup file is not empty
if [ ! -s "$BACKUP_PATH" ]; then
    log "ERROR: Backup file is empty"
    send_notification "BACKUP FAILED" "Empty backup file created"
    exit 1
fi

# Compress backup
log "Compressing backup..."
gzip $BACKUP_PATH
COMPRESSED_FILE="${BACKUP_PATH}.gz"

# Verify compression succeeded
if [ ! -f "$COMPRESSED_FILE" ]; then
    log "ERROR: Compression failed"
    send_notification "BACKUP FAILED" "Backup compression failed"
    exit 1
fi

# Upload to S3 with Standard-IA storage class
log "Uploading to S3 (Standard-IA storage)..."
S3_KEY="backups/$(date +%Y/%m)/${BACKUP_FILE}.gz"
if aws s3 cp $COMPRESSED_FILE s3://$S3_BUCKET/$S3_KEY --storage-class STANDARD_IA; then
    log "Backup uploaded to S3 successfully"
    send_notification "BACKUP SUCCESS" "Backup uploaded: $S3_KEY"
else
    log "ERROR: Failed to upload backup to S3"
    send_notification "BACKUP FAILED" "S3 upload failed"
    exit 1
fi

# Clean up old local backups (keep last 7 days)
log "Cleaning up old local backups (${LOCAL_RETENTION_DAYS} days)..."
find $BACKUP_DIR -name "strapi_backup_*.sql.gz" -mtime +$LOCAL_RETENTION_DAYS -delete
find $BACKUP_DIR -name "strapi_backup_*.sql" -mtime +$LOCAL_RETENTION_DAYS -delete

# Verify backup integrity
log "Verifying backup integrity..."
if gunzip -t $COMPRESSED_FILE; then
    log "Backup file integrity verified"
else
    log "WARNING: Backup file may be corrupted"
    send_notification "BACKUP WARNING" "Backup file integrity check failed"
fi

# Summary
BACKUP_SIZE=$(du -h $COMPRESSED_FILE | cut -f1)
log "=== Backup Summary ==="
log "File: $BACKUP_FILE.gz"
log "Size: $BACKUP_SIZE"
log "Local: $COMPRESSED_FILE"
log "S3: s3://$S3_BUCKET/$S3_KEY"
log "Backup process completed successfully"

Save and exit (Ctrl+X, Y, Enter).

Update the Configuration

Before the script will work, you need to update it with your actual values:

# Still in the same file (or reopen with nano)
nano /opt/strapi-backend/backup-script.sh

Find and update these lines:

S3_BUCKET="your-backup-bucket-name"  # Change to your actual bucket name
DATABASE_NAME="strapi_staging"       # Change to your database name
DATABASE_CONTAINER="strapi-db"        # Change to your service name (not container_name)

How to find your values:

# Check your docker-compose.stg.yml for service name and container name
grep -B1 "container_name" docker-compose.stg.yml

# This will show you both:
# genkiStrapi:                    <- This is the SERVICE NAME
#   container_name: strapi-backend <- This is the CONTAINER NAME

# Check your .env.stg for database name
grep "DATABASE_NAME" .env.stg

Save your changes.

Make the Script Executable

chmod +x /opt/strapi-backend/backup-script.sh

Important note about service names vs container names:

In Docker Compose, there's a difference between the service name (defined in services: section) and the container_name (the optional name for the actual container).

For the backup script, you should use the service name. If you get errors when running the script, try using the service name instead of the container_name.

Pro tip: Keep your service name and container_name the same (or similar) to avoid confusion. For example, a service named strapi-db with container_name: strapi-db leaves nothing to mix up.

Understanding the Backup Script

Let me walk through what this script actually does:

1. Configuration Section: Sets up all the variables we need - backup directory, S3 bucket, database details, etc.

2. Logging Function: Every action gets timestamped and logged to backup.log. When something breaks at 3 AM, these logs tell you exactly what happened.

3. Container Check: Before attempting a backup, verify the database container is actually running. No point trying to backup a stopped database.

4. Database Dump: Uses pg_dump to create a complete SQL dump of your database. The -T flag tells docker compose exec not to allocate a pseudo-TTY, which keeps the command working under cron (no interactive terminal).

5. Empty File Check: Verify the backup actually contains data. I've seen backups "succeed" but create 0-byte files due to permission issues.

6. Compression: Gzip typically reduces PostgreSQL dumps by 80-90%. A 50MB database becomes a 5MB backup.

7. S3 Upload: Upload directly to Standard-IA storage class (cheaper than Standard, same instant retrieval). We organize by year/month for easy browsing.

8. Local Cleanup: Delete backups older than 7 days from local storage. Keeps your disk from filling up.

9. Integrity Check: Verify the compressed file isn't corrupted. Better to know now than during an emergency restore.

Why local retention matters: S3 retrieval costs money (pennies, but still). Keeping a week of local backups means quick restores for recent issues without touching S3.
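If the find -mtime syntax looks cryptic, here's a throwaway demo of the retention logic using temp files. Nothing here touches your real backups:

```shell
# Simulate the 7-day cleanup in a scratch directory
DIR=$(mktemp -d)
touch -d "10 days ago" "$DIR/strapi_backup_old.sql.gz"   # pretend 10-day-old backup
touch "$DIR/strapi_backup_new.sql.gz"                    # fresh backup
find "$DIR" -name "strapi_backup_*.sql.gz" -mtime +7 -delete
ls "$DIR"    # only strapi_backup_new.sql.gz remains
rm -rf "$DIR"
```

The +7 means "modified more than 7 full 24-hour periods ago," so a backup from this morning is always safe.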


Step 5: Create the Restore Script

Having backups is great. Being able to actually restore them is better. This script handles the tricky parts of database restoration.

nano /opt/strapi-backend/restore-script.sh

Paste this complete script:

#!/bin/bash
# =============================================================================
# PostgreSQL Restore Script for Strapi
# Safely restores from local or S3 backups with constraint handling
# =============================================================================

BACKUP_DIR="/opt/strapi-backend/backups"
S3_BUCKET="your-backup-bucket-name"  # UPDATE THIS!
COMPOSE_FILE="/opt/strapi-backend/docker-compose.stg.yml"
ENV_FILE="/opt/strapi-backend/.env.stg"
DATABASE_NAME="strapi_staging"  # UPDATE THIS!
DATABASE_USER="postgres"
DATABASE_CONTAINER="strapi-db"  # UPDATE THIS!
STRAPI_CONTAINER="strapi-backend"  # UPDATE THIS!

# Function to log messages
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

# Function to show usage
show_usage() {
    echo "Usage: $0 [backup_file] [source]"
    echo ""
    echo "Examples:"
    echo "  $0 strapi_backup_20241214_120000.sql.gz local"
    echo "  $0 strapi_backup_20241214_120000.sql.gz s3"
    echo "  $0 list  # List available backups"
    echo ""
    exit 1
}

# List available backups
list_backups() {
    echo "=== Local Backups ==="
    ls -lh $BACKUP_DIR/strapi_backup_*.sql.gz 2>/dev/null || echo "No local backups found"

    echo ""
    echo "=== S3 Backups (Last 10) ==="
    aws s3 ls s3://$S3_BUCKET/backups/ --recursive | grep "\.sql\.gz$" | tail -10 || echo "No S3 backups found"
}

# Clean database function (handles constraints properly)
clean_database() {
    log "Preparing database for restore..."

    # Stop Strapi first
    log "Stopping Strapi container..."
    docker compose -f $COMPOSE_FILE stop $STRAPI_CONTAINER

    # Drop existing database
    log "Dropping existing database..."
    if docker compose -f $COMPOSE_FILE --env-file $ENV_FILE exec -T $DATABASE_CONTAINER \
        psql -U $DATABASE_USER -c "DROP DATABASE IF EXISTS \"$DATABASE_NAME\";"; then
        log "Database dropped successfully"
    else
        log "WARNING: Could not drop database (may not exist)"
    fi

    # Create fresh database
    log "Creating fresh database..."
    if docker compose -f $COMPOSE_FILE --env-file $ENV_FILE exec -T $DATABASE_CONTAINER \
        psql -U $DATABASE_USER -c "CREATE DATABASE \"$DATABASE_NAME\";"; then
        log "Database created successfully"
        return 0
    else
        log "ERROR: Failed to create database"
        return 1
    fi
}

# Main script
if [ "$1" = "list" ]; then
    list_backups
    exit 0
fi

if [ $# -ne 2 ]; then
    show_usage
fi

BACKUP_FILE="$1"
SOURCE="$2"
BACKUP_FILENAME=$(basename "$BACKUP_FILE")
RESTORE_PATH="$BACKUP_DIR/$BACKUP_FILENAME"

log "Starting restore process..."
log "Backup file: $BACKUP_FILENAME"
log "Source: $SOURCE"

# Download from S3 if needed
if [ "$SOURCE" = "s3" ]; then
    log "Downloading backup from S3..."

    # Find the file in S3
    S3_OBJECT=$(aws s3 ls s3://$S3_BUCKET/backups/ --recursive | grep "$BACKUP_FILENAME" | head -1 | awk '{print $4}')

    if [ -z "$S3_OBJECT" ]; then
        log "ERROR: Backup file not found in S3"
        exit 1
    fi

    log "Found S3 object: $S3_OBJECT"

    # Check if file is in Glacier
    STORAGE_CLASS=$(aws s3api head-object --bucket $S3_BUCKET --key "$S3_OBJECT" --query 'StorageClass' --output text 2>/dev/null)
    if [ "$STORAGE_CLASS" = "GLACIER" ]; then
        log "WARNING: Backup is in Glacier storage"
        log "Retrieval may take 1-5 minutes..."
        log "If this fails, you may need to initiate a restore first"
    fi

    if aws s3 cp s3://$S3_BUCKET/$S3_OBJECT $RESTORE_PATH; then
        log "Backup downloaded from S3"
    else
        log "ERROR: Failed to download backup from S3"
        exit 1
    fi
elif [ "$SOURCE" = "local" ]; then
    if [ ! -f "$RESTORE_PATH" ]; then
        log "ERROR: Local backup file not found: $RESTORE_PATH"
        exit 1
    fi
    log "Using local backup file"
else
    log "ERROR: Invalid source. Use 'local' or 's3'"
    exit 1
fi

# Create pre-restore backup for safety
log "Creating pre-restore safety backup..."
PRERESTORE_BACKUP="prerestore_$(date +%Y%m%d_%H%M%S).sql"
if docker compose -f $COMPOSE_FILE --env-file $ENV_FILE exec -T $DATABASE_CONTAINER \
    pg_dump -U $DATABASE_USER -d $DATABASE_NAME > "$BACKUP_DIR/$PRERESTORE_BACKUP" 2>/dev/null; then
    log "Pre-restore backup created: $PRERESTORE_BACKUP"
    gzip "$BACKUP_DIR/$PRERESTORE_BACKUP"
else
    log "WARNING: Could not create pre-restore backup"
fi

# Clean database to avoid constraint conflicts
if ! clean_database; then
    log "ERROR: Database cleaning failed"
    exit 1
fi

# Prepare restore file (decompress if needed)
FINAL_RESTORE_PATH="$RESTORE_PATH"
if [[ "$BACKUP_FILENAME" == *.gz ]]; then
    log "Decompressing backup..."
    FINAL_RESTORE_PATH="$BACKUP_DIR/temp_restore_$(date +%s).sql"
    if gunzip -c "$RESTORE_PATH" > "$FINAL_RESTORE_PATH"; then
        log "Backup decompressed successfully"
    else
        log "ERROR: Failed to decompress backup"
        exit 1
    fi
fi

# Restore database
log "Restoring database from backup..."
if docker compose -f $COMPOSE_FILE --env-file $ENV_FILE exec -T $DATABASE_CONTAINER \
    psql -U $DATABASE_USER -d $DATABASE_NAME < "$FINAL_RESTORE_PATH"; then
    log "Database restored successfully"
else
    log "ERROR: Database restore failed"
    exit 1
fi

# Cleanup temporary files
if [[ "$FINAL_RESTORE_PATH" == *temp_restore* ]]; then
    rm -f "$FINAL_RESTORE_PATH"
fi

# Start Strapi
log "Starting Strapi container..."
docker compose -f $COMPOSE_FILE --env-file $ENV_FILE up -d $STRAPI_CONTAINER

# Wait and verify
log "Waiting for Strapi to start (30 seconds)..."
sleep 30

if curl -f http://localhost:1337/admin > /dev/null 2>&1; then
    log "✅ Strapi is responding - restore completed successfully!"
else
    log "⚠️  WARNING: Strapi may not be responding yet"
    log "Give it another minute and check: curl http://localhost:1337/admin"
fi

log "Restore process completed"
log "Pre-restore backup available at: $BACKUP_DIR/$PRERESTORE_BACKUP.gz"

Save and exit.

Update Restore Script Configuration

Same as before, update your specific values:

nano /opt/strapi-backend/restore-script.sh

Update these lines to match your setup:

S3_BUCKET="your-backup-bucket-name"
DATABASE_NAME="strapi_staging"
DATABASE_CONTAINER="strapi-db"        # Use service name
STRAPI_CONTAINER="strapi-backend"    # Use service name

Make it executable:

chmod +x /opt/strapi-backend/restore-script.sh

Understanding the Restore Script

The restore process is more complex than backup because we need to handle database constraints properly.

Why we drop and recreate the database:

Restoring directly into an existing database often fails with errors like:

ERROR: duplicate key value violates unique constraint
ERROR: relation already exists

These errors occur because PostgreSQL's constraints (primary keys, foreign keys, unique indexes) conflict with existing data. The clean solution is to drop the entire database and restore to a fresh one. This ensures a conflict-free restore every time.

The restore flow:

  1. Create safety backup: Before touching anything, backup current state

  2. Stop Strapi: Prevent new data writes during restore

  3. Drop database: Remove all existing data and constraints

  4. Create fresh database: Start with a clean slate

  5. Restore backup: Import the SQL dump

  6. Start Strapi: Bring the application back online

Pre-restore safety backup:

This is the "oh crap" insurance. If the restore goes wrong, you can restore back to the state right before you started.


Step 6: Set Up Lifecycle Policy

Now let's configure S3 to automatically move old backups to cheaper storage and eventually delete them.

Create the Lifecycle Policy

nano /tmp/lifecycle-policy.json

Paste this configuration:

{
    "Rules": [
        {
            "ID": "BackupLifecycleRule",
            "Status": "Enabled",
            "Filter": {
                "Prefix": "backups/"
            },
            "Transitions": [
                {
                    "Days": 30,
                    "StorageClass": "GLACIER"
                }
            ],
            "Expiration": {
                "Days": 120
            }
        }
    ]
}

What this does:

  • First 30 days: Backups stay in Standard-IA (instant retrieval if needed)

  • After 30 days: Automatically move to Glacier (much cheaper, takes 1-5 minutes to retrieve)

  • After 120 days: Automatically delete (4 months of history is plenty for staging)

Apply the policy:

# Replace with your actual bucket name
aws s3api put-bucket-lifecycle-configuration \
    --bucket your-backup-bucket-name \
    --lifecycle-configuration file:///tmp/lifecycle-policy.json

Verify it worked:

aws s3api get-bucket-lifecycle-configuration --bucket your-backup-bucket-name

You should see your lifecycle rule in the output.

Clean up:

rm /tmp/lifecycle-policy.json

Why 30 Days Before Glacier?

This timing is based on S3's minimum storage duration charges.

S3 has minimum storage duration requirements:

  • Standard-IA: Minimum 30 days

  • Glacier: Minimum 90 days

If you delete or move data before these minimums, you're still charged for the full minimum period. Moving to Glacier after only 7 days would result in:

  • 7 days of actual Standard-IA storage

  • 23 days of "unused" Standard-IA minimum charges

  • Plus Glacier storage costs

By waiting the full 30 days before transitioning to Glacier, you avoid these early deletion penalties and optimize your storage costs.
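A quick worked example for a 1 GB backup makes the penalty concrete. This is a sketch using the list prices quoted earlier; AWS's exact billing may differ slightly:

```shell
# First-30-day storage cost of 1 GB: move to Glacier at day 7 vs day 30
awk 'BEGIN {
  # Day-7 move: the full 30-day Standard-IA minimum is billed anyway,
  # plus Glacier storage for the remaining 23 days
  early = 0.0125 + (23/30) * 0.004
  # Day-30 move: just the 30 days of Standard-IA
  wait  = 0.0125
  printf "move at day 7: $%.4f   move at day 30: $%.4f\n", early, wait
}'
```

The early move costs strictly more for the same month, which is why the lifecycle rule waits the full 30 days.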


Step 7: Test the Backup System

Time to verify everything works before setting up automation. This step is crucial - don't skip it.

Test Manual Backup

Let's run a backup manually and watch what happens:

cd /opt/strapi-backend
./backup-script.sh

What you should see:

[2025-12-08 14:30:01] Starting daily backup process...
[2025-12-08 14:30:02] Creating database backup...
[2025-12-08 14:30:03] Database backup created: strapi_backup_20251208_143001.sql
[2025-12-08 14:30:03] Compressing backup...
[2025-12-08 14:30:04] Uploading to S3 (Standard-IA storage)...
[2025-12-08 14:30:06] Backup uploaded to S3 successfully
[2025-12-08 14:30:06] Cleaning up old local backups (7 days)...
[2025-12-08 14:30:06] Verifying backup integrity...
[2025-12-08 14:30:06] Backup file integrity verified
[2025-12-08 14:30:06] === Backup Summary ===
[2025-12-08 14:30:06] File: strapi_backup_20251208_143001.sql.gz
[2025-12-08 14:30:06] Size: 4.2M
[2025-12-08 14:30:06] Local: /opt/strapi-backend/backups/strapi_backup_20251208_143001.sql.gz
[2025-12-08 14:30:06] S3: s3://your-bucket/backups/2025/12/strapi_backup_20251208_143001.sql.gz
[2025-12-08 14:30:06] Backup process completed successfully

If you see errors, we'll troubleshoot in a moment.

Verify the backup file exists locally:

ls -lh /opt/strapi-backend/backups/

You should see your compressed backup file.

Verify it uploaded to S3:

aws s3 ls s3://your-backup-bucket-name/backups/$(date +%Y/%m)/

You should see your backup listed with its size.

Tip: You can also check this visually - just go to your S3 bucket in the AWS console and browse to the backups/2025/12/ folder to see your backup file.

Test Backup Contains Real Data

Let's make sure the backup actually has your data, not just schema:

# Find your latest backup
LATEST_BACKUP=$(ls -t /opt/strapi-backend/backups/strapi_backup_*.sql.gz | head -1)

# Check for data (COPY format)
zcat "$LATEST_BACKUP" | grep -c "COPY.*FROM stdin"

If you see a number greater than 0, your backup contains data. PostgreSQL uses COPY statements to efficiently bulk-insert data.

Some people expect to see INSERT statements. PostgreSQL's pg_dump uses COPY by default because it's much faster. Both are valid backup formats.
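You can see both checks in action on a tiny fake dump without touching your real backups. The file contents below are made up for illustration:

```shell
# Build a miniature pg_dump-style file, compress it, then run both checks
TMP=$(mktemp)
printf 'COPY public.articles (id, title) FROM stdin;\n1\tHello\n\\.\n' > "$TMP"
gzip "$TMP"
gunzip -t "$TMP.gz" && echo "integrity OK"     # same check the backup script runs
zcat "$TMP.gz" | grep -c "COPY.*FROM stdin"    # prints 1: one table with data
rm -f "$TMP.gz"
```

A count of 0 on a real backup would mean the dump captured schema only - exactly the failure mode this check catches.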


Step 8: Test the Restore Process

This is the most important test. A backup system that can't restore is useless.

Important: We're about to restore your database. This will replace all current data with the backup. Make sure you're okay with that, or test on a Friday afternoon when you can rebuild if needed.

List Available Backups

./restore-script.sh list

You should see both your local backup and the S3 backup you just created.

Test Local Restore

Let's test restoring from the local backup first (faster than S3):

Before restoring, let's create verification data:

  1. Open your Strapi admin panel: https://api.yourdomain.com/admin

  2. Go to Content Manager

  3. Create a test entry in any collection (like adding a test article or post)

  4. Note what you created - we'll check if it's gone after restore

This test entry should disappear after restore, proving we've successfully reverted to the backup state.

# Get the backup filename from the list command
./restore-script.sh strapi_backup_20251208_143001.sql.gz local

What you should see:

[2025-12-08 14:35:01] Starting restore process...
[2025-12-08 14:35:01] Backup file: strapi_backup_20251208_143001.sql.gz
[2025-12-08 14:35:01] Source: local
[2025-12-08 14:35:01] Using local backup file
[2025-12-08 14:35:01] Creating pre-restore safety backup...
[2025-12-08 14:35:02] Pre-restore backup created: prerestore_20251208_143501.sql
[2025-12-08 14:35:02] Preparing database for restore...
[2025-12-08 14:35:02] Stopping Strapi container...
[2025-12-08 14:35:03] Dropping existing database...
[2025-12-08 14:35:03] Database dropped successfully
[2025-12-08 14:35:03] Creating fresh database...
[2025-12-08 14:35:03] Database created successfully
[2025-12-08 14:35:03] Decompressing backup...
[2025-12-08 14:35:04] Backup decompressed successfully
[2025-12-08 14:35:04] Restoring database from backup...
[2025-12-08 14:35:06] Database restored successfully
[2025-12-08 14:35:06] Starting Strapi container...
[2025-12-08 14:35:07] Waiting for Strapi to start (30 seconds)...
[2025-12-08 14:35:37] ✅ Strapi is responding - restore completed successfully!
[2025-12-08 14:35:37] Restore process completed
[2025-12-08 14:35:37] Pre-restore backup available at: /opt/strapi-backend/backups/prerestore_20251208_143501.sql.gz

Verify Strapi is actually working:

# Test from the server
curl http://localhost:1337/admin

# Or open in your browser
# https://api.yourdomain.com/admin

You should see your Strapi admin panel, and your data should be intact.

Verify the restore actually worked:

  1. Log into your Strapi admin panel

  2. Go to Content Manager

  3. Look for the test entry you created earlier

  4. It should be gone - this confirms we successfully restored to the backup state before you created that entry

If the test entry is missing, congratulations! Your restore process works correctly.

Test S3 Restore

Now let's test restoring from S3:

./restore-script.sh strapi_backup_20251208_143001.sql.gz s3

The process is the same, but it downloads from S3 first. This tests that your S3 credentials and permissions are working correctly.

Why test S3 restore?

Local backups can get wiped out if your server crashes. S3 is your real safety net. You need to know S3 restores work BEFORE an emergency.
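One more cheap safety habit: whichever source you restore from, run a one-second integrity check on the archive before the script drops your database. `gzip -t` reads the whole file and exits non-zero if it's truncated or corrupt (the filename below is the example from earlier, so substitute your own):

```shell
# Verify the archive is intact before restoring; gzip -t reads the whole
# file and exits non-zero on corruption or truncation
if gzip -t strapi_backup_20251208_143001.sql.gz 2>/dev/null; then
  echo "archive OK"
else
  echo "archive is corrupt - do NOT restore from it" >&2
fi
```

A partially uploaded or interrupted download from S3 will fail this check immediately, which is much better than discovering the problem after your database has been dropped.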


Step 9: Schedule Automated Backups

Everything works manually. Now let's make it automatic.

Set Up Cron Job

We'll use cron to run backups daily at 2:00 AM:

# Edit cron jobs as deploy user
crontab -e

If this is your first time running crontab, it might ask you to choose an editor. Pick nano (usually option 1) if you're unsure.

Add this line:

0 2 * * * /opt/strapi-backend/backup-script.sh >> /opt/strapi-backend/backups/backup.log 2>&1

What this does:

  • 0 2 * * * - Run at 2:00 AM every day

  • /opt/strapi-backend/backup-script.sh - Run this script

  • >> /opt/strapi-backend/backups/backup.log - Append output to log file

  • 2>&1 - Redirect errors to the same log file

Save and exit (Ctrl+X, Y, Enter).
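If you later switch to more frequent backups, consider wrapping the job in `flock` so a slow run can never overlap the next scheduled one. This is the same cron line with a lock added (`/tmp/strapi-backup.lock` is just an arbitrary lock-file path, not something from earlier in the series):

```
# -n = don't wait: if a backup is still running, skip this run entirely
0 2 * * * flock -n /tmp/strapi-backup.lock /opt/strapi-backend/backup-script.sh >> /opt/strapi-backend/backups/backup.log 2>&1
```

For a once-daily backup of a small database this is belt-and-suspenders, but it costs nothing and removes a whole class of "two backups wrote to the same file" bugs.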

Verify Cron Job

# List your cron jobs
crontab -l

You should see the backup job listed.

Check if cron service is running:

sudo systemctl status cron

Should show "active (running)".

Why 2:00 AM?

  • Low traffic time (fewer chances of conflicting with user activity)

  • After midnight (clean date rollover)

  • Before business hours (if something breaks, you'll notice during your workday)

Feel free to adjust to whatever time works for your timezone and usage patterns.
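One gotcha when adjusting: cron fires according to the server's clock, and cloud droplets are almost always set to UTC, so "2:00 AM" in your crontab may be mid-morning where you live. Check before picking a time:

```shell
# Cron fires according to the server clock, not yours -
# print the server's current time and timezone abbreviation
date +"%H:%M %Z"
```

On Ubuntu you can also run `timedatectl` for the full timezone configuration.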


Step 10: Create Monitoring Script

Let's add a simple script to check backup health:

nano /opt/strapi-backend/check-backups.sh

Paste this:

#!/bin/bash
# Quick backup status check script

BACKUP_DIR="/opt/strapi-backend/backups"
S3_BUCKET="your-backup-bucket-name"  # UPDATE THIS!

echo "=== Backup Status Check ==="
echo "Date: $(date)"
echo ""

# Recent local backups
echo "Recent Local Backups (last 7 days):"
find "$BACKUP_DIR" -name "strapi_backup_*.sql.gz" -mtime -7 -exec ls -lh {} \; | tail -7
echo ""

# Recent S3 backups
echo "Recent S3 Backups (last 5):"
aws s3 ls "s3://$S3_BUCKET/backups/" --recursive | grep "\.sql\.gz$" | tail -5
echo ""

# Check backup log for recent activity
echo "Recent Backup Log (last 20 lines):"
tail -20 "$BACKUP_DIR/backup.log"
echo ""

# Check for errors in recent logs
# (a "grep | tail" pipeline exits 0 even with no matches, so capture the
# output and test it instead of relying on "|| echo")
echo "Recent Errors (if any):"
ERRORS=$(grep -iE "error|failed|warning" "$BACKUP_DIR/backup.log" | tail -5)
if [ -n "$ERRORS" ]; then
  echo "$ERRORS"
else
  echo "No recent errors found"
fi

Update the bucket name and make it executable:

# Update S3_BUCKET in the script
nano /opt/strapi-backend/check-backups.sh

# Make executable
chmod +x /opt/strapi-backend/check-backups.sh

Run it:

cd /opt/strapi-backend
./check-backups.sh

This gives you a quick health check of your backup system. Run it occasionally to make sure backups are happening.
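If you want something you can wire into alerting later, a freshness check beats eyeballing logs: report whether a backup has landed recently. A minimal sketch, assuming the backup directory from this series (1,560 minutes is roughly 26 hours, giving the 2:00 AM job some slack):

```shell
# Print OK if a backup newer than ~26 hours exists, an ALERT otherwise
BACKUP_DIR="/opt/strapi-backend/backups"

if find "$BACKUP_DIR" -name "strapi_backup_*.sql.gz" -mmin -1560 2>/dev/null | grep -q .; then
  echo "OK: recent backup found"
else
  echo "ALERT: no backup in the last 26 hours"
fi
```

To turn it into a real alert, add `exit 1` in the else branch and point a cron job with `MAILTO` set, or an uptime monitor, at the script.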


Step 11: Verify Automated Backup (Tomorrow)

The real test is whether the cron job actually runs.

Tomorrow morning, check these things:

# Did the backup run last night?
tail -30 /opt/strapi-backend/backups/backup.log

# Are new backups appearing?
ls -lt /opt/strapi-backend/backups/ | head -5

# Did it upload to S3?
aws s3 ls s3://your-backup-bucket-name/backups/$(date +%Y/%m)/

# Or check manually in AWS console (S3 → your bucket → backups/2025/12/)

# Quick status check
./check-backups.sh

If you see a fresh backup from 2:00 AM, you're golden. The system is running automatically.


Understanding Your Costs

Reality check: A typical Strapi site with 1,000 blog posts, 500 users, and normal metadata would be around 10-20MB. If your database is approaching 100MB, you've either got a very successful site with tons of content, or you might be storing things in PostgreSQL that should live in file storage.

Cost Breakdown (20MB Database Example)

Let's use a typical 20MB database as our example:

Database size: 20MB raw → ~2-4MB compressed (gzip achieves 80-90% compression)

Using 3MB compressed:

Storage costs (steady state, once the full 120-day retention window fills up):

  • Standard-IA (newest ~30 daily backups, ~90MB total): 0.09GB × $0.0125/GB ≈ $0.0011/month

  • Glacier (older ~90 backups, ~270MB total): 0.27GB × $0.004/GB ≈ $0.0011/month

  • Total storage: ≈ $0.002/month

Request costs:

  • ~30 PUT requests (daily uploads): ≈ $0.0003/month

  • A few GET requests (occasional restores): ≈ $0.0002/month

  • Total requests: ≈ $0.0005/month

Monthly total: About $0.003 (still a fraction of a penny)

Annual total: About $0.03 (a few cents per year)

Yeah, it's basically free.
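The arithmetic above is easy to re-run for your own database size. A quick awk sketch (the per-GB prices match the figures used in this article; check the AWS pricing page for current rates):

```shell
# Steady-state S3 storage cost: 30 newest backups in Standard-IA,
# 90 older ones in Glacier (matching the lifecycle rules)
SIZE_MB=3   # your compressed backup size

awk -v mb="$SIZE_MB" 'BEGIN {
  gb      = mb / 1024
  ia      = 30 * gb * 0.0125   # Standard-IA: $0.0125/GB-month
  glacier = 90 * gb * 0.004    # Glacier:     $0.004/GB-month
  printf "storage/month: $%.4f\n", ia + glacier
}'
```

For a 3MB compressed backup this prints `storage/month: $0.0022` - change `SIZE_MB` to see how little even a much larger database would cost.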

Database Size Reference

Here's what typical database sizes look like for Strapi:

  • 5-20MB: Staging environment with test/demo content

  • 20-50MB: Active staging or small production with real content

  • 50MB+: Successful production site with substantial traffic

Real talk: If your database is consistently over 50MB, you've probably got real users and real traffic. At that point, stop being cheap and get a managed database service like DigitalOcean Managed Database ($15/month) or AWS RDS. The automated backups, monitoring, and not debugging OOM errors at 2 AM are worth way more than the $9/month difference. Your site's success justifies better infrastructure.


What We Built

Let's recap what your backup system now includes:

Automated Protection:

  • Daily backups at 2:00 AM

  • 7 days of local backups (instant access)

  • 120 days of cloud backups (4 months of history)

  • Automatic storage optimization (moves to cheaper Glacier after 30 days)

Reliable Recovery:

  • Tested restore procedures that actually work

  • Handles database constraints properly

  • Safety backups before every restore

  • Can restore from either local or S3

Monitoring and Maintenance:

  • Comprehensive logging of all operations

  • Quick status check script

  • Automatic cleanup of old local backups

  • Integrity verification for every backup

Professional Backup Practices at Budget Cost:

  • Offsite backups (different location than your droplet)

  • Encrypted storage (S3 server-side encryption)

  • Versioning enabled (protects against accidental overwrites)

  • Lifecycle management (automatic cost optimization)

All for a fraction of a cent up to about $0.20/month, depending on your database size.


When to Upgrade

This backup strategy works great for staging and small production environments. Here's when you might want something more robust:

Upgrade triggers:

  • Database larger than 50MB

  • Compliance requirements (need point-in-time recovery)

  • Multiple databases to back up (the current script handles one)

  • Need faster restore times (replication instead of backups)

  • Team needs automated restore testing

Next-level solutions:

  • AWS RDS with automated backups (more expensive but easier)

  • Continuous replication to a standby database

  • Backup validation automation (restore and test automatically)

  • Cross-region replication for disaster recovery

But for a $6/month staging environment? Our current setup is perfect.


What's Next?

We've got automated backups protecting your data. That's the safety net in place.

In Part 5 (the final article), we're building a complete CI/CD pipeline with GitHub Actions:

  • Automatic builds when you push code

  • Security scanning before deployment

  • Manual approval workflow (no accidental deploys)

  • Automatic deployment to staging

  • Rollback procedures if something breaks

  • Integration with our backup system

The CI/CD setup ties everything together. Push to your staging branch, approve the deployment, and watch your staging environment update automatically. It's the polish that makes this whole system feel professional.

After Part 5, you'll have:

  • Containerized Strapi (Part 1)

  • DigitalOcean deployment (Part 2)

  • Production web server (Part 3)

  • Automated backups (Part 4)

  • CI/CD pipeline (Part 5)

That's a complete deployment environment that rivals setups costing 10x more.


Quick Reference

Here are the commands you'll use most often:

Manual Operations:

# Run backup manually
./backup-script.sh

# List available backups
./restore-script.sh list

# Restore from local backup
./restore-script.sh backup_file.sql.gz local

# Restore from S3
./restore-script.sh backup_file.sql.gz s3

# Check backup status
./check-backups.sh

# View backup logs
tail -50 /opt/strapi-backend/backups/backup.log

Monitoring:

# Check recent backups
ls -lh /opt/strapi-backend/backups/

# Check S3 backups
aws s3 ls s3://your-bucket-name/backups/ --recursive

# View cron jobs
crontab -l

# Check cron logs
grep CRON /var/log/syslog | tail -20

Troubleshooting:

# Test AWS credentials
aws s3 ls

# Test backup script manually
./backup-script.sh

# Check container status
docker compose -f docker-compose.stg.yml ps

# Check database access
docker compose -f docker-compose.stg.yml exec strapi-db psql -U postgres -l

Final File Structure:

/opt/strapi-backend/
├── backup-script.sh            # Automated backup to S3
├── restore-script.sh           # Safe restore from local/S3
├── check-backups.sh            # Status monitoring
├── docker-compose.stg.yml      # Your existing setup
├── .env.stg                    # Your existing config
└── backups/
    ├── backup.log              # All backup operations
    ├── strapi_backup_*.sql.gz  # Local backups (7 days)
    └── prerestore_*.sql.gz     # Safety backups

Hit any issues setting up backups? Drop a comment with the error message and I'll help you troubleshoot. Next week, we're wrapping up the series with the CI/CD pipeline - the final piece of the puzzle!
