
CI/CD Pipeline Part 2: Automated Deployment with GitHub Actions

Part 5b of "Building a Complete Deployment Environment for Strapi v5: A Practical Series"


Series Navigation:

New to the series? Start with Part 5a to get the CI pipeline working first - this article builds directly on that foundation.


Alright, we've got automated validation working from Part 5a. Every time you push code, GitHub Actions gives you that green checkmark telling you everything's good to go. Feels nice, right?
But here's what's still manual: actually deploying that validated code to your staging server.

Right now, even if you've pushed a locally built Docker image to GHCR, you're still:

  1. SSHing into your server

  2. Pulling the latest image from GHCR

  3. Updating docker-compose files

  4. Restarting containers

  5. Checking if everything works

  6. Manually rolling back if something breaks

That's fine for deploying once a week. But when you're iterating quickly? This becomes a bottleneck fast. And if you deploy monthly or even less frequently? Good luck remembering all these steps without checking your notes every single time. "Wait, did I update the compose file first or pull the image first? And what was that health check command again?"

In this article, we're completing the automation by building the CD (Continuous Deployment) part of the pipeline. We'll create TWO workflow options so you can choose the approach that fits your team:

Option 1 - Auto-Deploy on Merge (Recommended for teams):

  • Triggers automatically when code is merged to dev

  • Builds Docker image and pushes to GHCR

  • Requires manual approval before deploying

  • Perfect for teams that want safety gates

  • Prevents accidental deployments

Option 2 - Manual-Dispatch Workflow (Great for small teams):

  • Trigger deployment from ANY branch via GitHub workflow UI

  • Trigger deployment from ANY Git tag via GitHub workflow UI

  • No approval needed (you're already being intentional)

  • Perfect for testing feature branches in staging

  • Great for solo developers or tight-knit teams

  • Ideal for emergency hotfixes

By the end, you'll have a complete CI/CD pipeline where validated code automatically (or manually) deploys to your staging environment with health checks, rollback capabilities, and clear visibility into what's happening.

Let's build this.


What We're Building

Here's the complete deployment flow for both options:

Auto-Deploy Workflow (staging-deploy.yml):

Merge to dev → Security scan → Build & push to GHCR → Wait for approval
→ Deploy to server → Health check → Success or auto-rollback

Perfect for:

  • Teams with multiple developers

  • When you want review gates before deployment

  • Preventing accidental staging updates

  • Learning proper DevOps practices

Manual-Dispatch Workflow (staging-deploy-manual.yml):

Click "Run workflow" → Select branch → Security scan → Build & push to GHCR
→ Deploy to server → Health check → Success or auto-rollback

Perfect for:

  • Solo developers or small teams (2-3 people)

  • Testing feature branches in staging before merging

  • Emergency hotfixes that need speed

  • When you want full control without approval gates

Both workflows deploy to the same staging environment (your DigitalOcean droplet from Parts 1-4). The only difference is the trigger mechanism and approval process.

Future Extensibility:

While this article focuses on deploying to a single staging environment, these workflows are highly extensible:

  • Multi-environment support: You can create separate workflows for staging, UAT, and production environments

  • Environment selector: The manual workflow can be extended to show a dropdown where you select which environment to deploy to

  • Different approval requirements: Production might require 2 approvers, staging might require 1 or none

  • Environment-specific configurations: Each environment can have different resource limits, environment variables, etc.

We'll stick with a single staging environment for this article to keep things clear, but I'll show you where to add these features when you're ready to scale up.


Prerequisites

Before we start, make sure you have:

  • Part 5a completed (CI pipeline running and working)

  • Parts 1-4 completed (Strapi deployed on DigitalOcean)

  • SSH access to your staging server

  • Admin access to your GitHub repository

  • About 90-120 minutes for complete setup and testing

Quick check - Is your CI working?

If you push code to a feature branch, do you see the workflow run in GitHub Actions? If yes, you're ready. If not, go back to Part 5a and get that working first.


Understanding the Deployment Architecture

Before we dive into configuration, let's understand what happens during deployment:

Current State (Manual Deployment):

Local Machine → SSH to server → Pull image → Update compose → Restart → Hope

After Part 5b (Automated Deployment):

GitHub Actions → Builds image → Pushes to GHCR → SSH to server
→ Server pulls image → Creates backup → Updates compose → Deploys
→ Health check → Success or auto-rollback

Key Components:

  1. GitHub Container Registry (GHCR): Where we store production Docker images

  2. GitHub Secrets: Secure storage for SSH keys and server credentials

  3. GitHub Environments: Approval gates and environment-specific settings

  4. Deployment Script: Lives on your server, handles the actual deployment

  5. Health Checks: Verify deployment succeeded before considering it complete

  6. Rollback Mechanism: Automatically reverts if deployment fails

Why this architecture works:

  • GitHub Actions handles orchestration (building, approval, triggering)

  • Your server handles deployment (it knows its own state best)

  • Clear separation of concerns

  • Easy to debug (logs in both GitHub and on server)

  • Can scale to multiple environments


Step 1: Create SSH Key for Deployment

First, we need a way for GitHub Actions to SSH into your staging server and trigger deployments.

Generate a New SSH Key

On your local machine:

# Create a dedicated SSH key for GitHub Actions
ssh-keygen -t ed25519 -f ~/.ssh/github_actions_staging -N ""

# This creates two files:
# ~/.ssh/github_actions_staging (private key - for GitHub)
# ~/.ssh/github_actions_staging.pub (public key - for server)

Why a separate key?

  • Dedicated key for automation (different from your personal key)

  • Easy to revoke if compromised

  • Clear audit trail of automated vs manual access

  • Follows principle of least privilege

Add Public Key to Your Server

Copy the public key to your server:

# Method 1: Via Root User (Recommended for Fresh Servers)
# Use this if you only have root SSH access (typical for new servers)
cat ~/.ssh/github_actions_staging.pub | ssh root@YOUR_STAGING_SERVER_IP \
  "mkdir -p /home/deploy/.ssh && \
   cat >> /home/deploy/.ssh/authorized_keys && \
   chown -R deploy:deploy /home/deploy/.ssh && \
   chmod 700 /home/deploy/.ssh && \
   chmod 600 /home/deploy/.ssh/authorized_keys"

# Method 2: Direct to Deploy User (If You Already Have Access)
# Use this if you can already SSH as deploy user (password or existing key)
ssh-copy-id -i ~/.ssh/github_actions_staging.pub deploy@YOUR_STAGING_SERVER_IP

Why Method 1 is often needed:

  • Fresh servers typically only allow root SSH access initially

  • Production servers often have password authentication disabled

  • This creates the directory, adds the key, and sets correct permissions in one command

Important: We're adding the key for the deploy user (from Part 2), not root. This maintains proper security practices.

Test the SSH Connection

# Test the new key works
ssh -i ~/.ssh/github_actions_staging deploy@YOUR_STAGING_SERVER_IP

# You should get in without a password prompt
# If it works, exit the server:
exit

Get the Private Key for GitHub

# Display the private key
cat ~/.ssh/github_actions_staging

# Copy the ENTIRE output including:
# -----BEGIN OPENSSH PRIVATE KEY-----
# [all the key content]
# -----END OPENSSH PRIVATE KEY-----

Save this somewhere temporarily - we'll add it to GitHub Secrets in the next step.

Security Note:

This private key is powerful - it grants access to your server. Keep it secure:

  • Don't commit it to Git

  • Don't paste it in Slack or email

  • Don't share it publicly

  • Store it only in GitHub Secrets (which we'll do next)


Step 2: Add GitHub Secrets

GitHub Secrets provide secure storage for sensitive information like SSH keys and server credentials.

Navigate to Repository Secrets

  1. Go to your GitHub repository

  2. Click Settings (top menu)

  3. Click Secrets and variables → Actions (left sidebar)

  4. Click New repository secret

Add These Three Secrets

Secret 1: STAGING_SSH_KEY

  • Name: STAGING_SSH_KEY

  • Value: Paste the entire private key from Step 1

      -----BEGIN OPENSSH PRIVATE KEY-----
      [your entire private key content]
      -----END OPENSSH PRIVATE KEY-----
    
  • Click Add secret

Secret 2: STAGING_HOST

  • Name: STAGING_HOST

  • Value: Your staging server IP address (e.g., 167.99.234.123)

  • Click Add secret

Secret 3: STAGING_USER

  • Name: STAGING_USER

  • Value: deploy

  • Click Add secret

Verify Secrets Are Added

You should now see three secrets listed:

STAGING_SSH_KEY
STAGING_HOST
STAGING_USER

Note: You can't view secret values after creation (security feature). If you made a mistake, delete and recreate the secret.

Why These Specific Names?

The STAGING_ prefix makes it clear these are for staging environment. When you add production later, you'll create PRODUCTION_SSH_KEY, PRODUCTION_HOST, etc. This naming convention prevents accidentally deploying to the wrong environment.


Step 3: Create Deployment Script on Server

Now let's create the script that actually handles deployment. GitHub Actions will trigger it over SSH, but it executes on the server itself.

Connect to Your Server

ssh deploy@YOUR_STAGING_SERVER_IP
cd /opt/strapi-backend

Create Deployment Scripts Directory

# Create directory for deployment scripts
mkdir -p deployment-scripts
chmod 755 deployment-scripts

Create the Deployment Script

nano deployment-scripts/deploy-staging.sh

Paste this complete script:

#!/bin/bash
# ============================================================================
# Enhanced Deployment Script for Strapi v5 Staging
# Handles: Backup, Pull, Update, Deploy, Health Check, Rollback
# ============================================================================

set -e  # Exit on any error

# ============================================================================
# Configuration - UPDATE THESE TO MATCH YOUR SETUP
# ============================================================================
COMPOSE_FILE="/opt/strapi-backend/docker-compose.stg.yml"
ENV_FILE="/opt/strapi-backend/.env.stg"
BACKUP_DIR="/opt/strapi-backend/backups"
DEPLOYMENT_LOG="/opt/strapi-backend/deployment.log"
DEPLOYMENT_HISTORY="/opt/strapi-backend/deployment-history.txt"

# Database configuration - UPDATE THESE
DATABASE_NAME="strapi_staging"     # ← Change to your database name
DATABASE_USER="postgres"
DATABASE_CONTAINER="strapiDB"      # ← Change to your service name from docker-compose.stg.yml
STRAPI_CONTAINER="strapi-backend"  # ← Change to your service name from docker-compose.stg.yml

# Docker image configuration
DOCKER_REGISTRY="ghcr.io"
GITHUB_USERNAME="your-github-username"   # ← Change to your GitHub username (must be lowercase)
REPO_NAME="your-repo-name"               # ← Change to your repository name (must be lowercase)

# Health check configuration
HEALTH_CHECK_URL="http://localhost:1337/admin"
HEALTH_CHECK_TIMEOUT=45

# ============================================================================
# Functions
# ============================================================================

# Logging function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$DEPLOYMENT_LOG"
}

# Error handling
error_exit() {
    log "ERROR: $1"
    exit 1
}

# Show usage
show_usage() {
    echo "Usage: $0 <version>"
    echo "Example: $0 v20241208-143052-a1b2c3d"
    echo ""
    echo "Options:"
    echo "  --current    Show currently deployed version"
    echo "  --help       Show this help message"
    exit 1
}

# Get current deployed version
get_current_version() {
    grep "image:" "$COMPOSE_FILE" | grep "$REPO_NAME" | sed 's/.*:\(.*\)/\1/' | head -1
}

# Create pre-deployment backup
create_backup() {
    local version=$1
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local backup_file="$BACKUP_DIR/predeployment_${version}_${timestamp}.sql"

    # log() writes to stdout via tee, and this function's stdout is captured
    # with $(...) by the caller - send log output to stderr here so the
    # captured value contains only the backup path
    log "Creating pre-deployment backup..." >&2

    if docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" exec -T "$DATABASE_CONTAINER" \
        pg_dump -U "$DATABASE_USER" -d "$DATABASE_NAME" > "$backup_file" 2>/dev/null; then

        # Compress backup
        gzip "$backup_file"
        log "Backup created: ${backup_file}.gz" >&2
        echo "${backup_file}.gz"
    else
        log "WARNING: Backup creation failed, but continuing deployment" >&2
        echo ""
    fi
}

# Update docker-compose file with new version
update_compose_file() {
    local new_version=$1
    local new_image="${DOCKER_REGISTRY}/${GITHUB_USERNAME}/${REPO_NAME}:${new_version}"

    log "Updating docker-compose.stg.yml with version: $new_version"

    # Create backup of compose file
    cp "$COMPOSE_FILE" "${COMPOSE_FILE}.backup"

    # Update image version
    sed -i "s|image: ${DOCKER_REGISTRY}/${GITHUB_USERNAME}/${REPO_NAME}:.*|image: ${new_image}|g" "$COMPOSE_FILE"

    log "docker-compose.stg.yml updated successfully"
}

# Run health check
health_check() {
    log "Running health checks..."
    local count=0
    local max_attempts=$((HEALTH_CHECK_TIMEOUT))

    while [ $count -lt $max_attempts ]; do
        if curl -sf "$HEALTH_CHECK_URL" > /dev/null 2>&1; then
            log "✅ Health check passed!"
            return 0
        fi

        count=$((count + 1))
        echo -n "."
        sleep 1
    done

    log "❌ Health check failed after ${HEALTH_CHECK_TIMEOUT} seconds"
    return 1
}

# Rollback to previous version
rollback() {
    log "=========================================="
    log "ROLLING BACK TO PREVIOUS VERSION"
    log "=========================================="

    # Restore backup of compose file
    if [ -f "${COMPOSE_FILE}.backup" ]; then
        mv "${COMPOSE_FILE}.backup" "$COMPOSE_FILE"
        log "Restored previous docker-compose.stg.yml"
    fi

    # Restart with previous version
    log "Restarting with previous version..."
    docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" up -d "$STRAPI_CONTAINER"

    # Wait and check
    sleep 30
    if health_check; then
        log "✅ Rollback successful"
    else
        log "❌ Rollback health check failed - manual intervention required"
    fi
}

# Record deployment in history
record_deployment() {
    local version=$1
    local status=$2
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')

    echo "${timestamp} | ${version} | ${status}" >> "$DEPLOYMENT_HISTORY"
}

# ============================================================================
# Main Deployment Logic
# ============================================================================

# Handle arguments
if [ "$1" = "--current" ]; then
    echo "Currently deployed version: $(get_current_version)"
    exit 0
elif [ "$1" = "--help" ] || [ -z "$1" ]; then
    show_usage
fi

NEW_VERSION=$1
CURRENT_VERSION=$(get_current_version)

log "=========================================="
log "DEPLOYMENT STARTED"
log "=========================================="
log "Current version: $CURRENT_VERSION"
log "New version: $NEW_VERSION"
log ""

# Step 1: Create backup
BACKUP_FILE=$(create_backup "$NEW_VERSION")

# Step 2: Pull new Docker image
log "Pulling Docker image: $NEW_VERSION"
if docker pull "${DOCKER_REGISTRY}/${GITHUB_USERNAME}/${REPO_NAME}:${NEW_VERSION}"; then
    log "✅ Docker image pulled successfully"
else
    error_exit "Failed to pull Docker image"
fi

# Step 3: Update docker-compose.stg.yml
update_compose_file "$NEW_VERSION"

# Step 4: Deploy new version (only restart Strapi, not database)
log "Deploying new version..."
log "Stopping Strapi container..."
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" stop "$STRAPI_CONTAINER"

log "Starting new version..."
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" up -d "$STRAPI_CONTAINER"

# Step 5: Wait for startup
log "Waiting for Strapi to start..."
sleep 30

# Step 6: Health check
if health_check; then
    log "=========================================="
    log "✅ DEPLOYMENT SUCCESSFUL"
    log "=========================================="
    log "Version $NEW_VERSION is now live"
    log "Backup available at: $BACKUP_FILE"

    record_deployment "$NEW_VERSION" "SUCCESS"

    # Clean up backup of compose file
    rm -f "${COMPOSE_FILE}.backup"

    exit 0
else
    log "=========================================="
    log "❌ DEPLOYMENT FAILED - INITIATING ROLLBACK"
    log "=========================================="

    record_deployment "$NEW_VERSION" "FAILED_ROLLBACK"

    rollback
    exit 1
fi

Save and exit (Ctrl+X, Y, Enter).

Update Script Configuration

Now customize the script for your setup:

# Edit the script again
nano deployment-scripts/deploy-staging.sh

Update these critical configuration sections:

# File paths (usually these are correct as-is)
COMPOSE_FILE="/opt/strapi-backend/docker-compose.stg.yml"
ENV_FILE="/opt/strapi-backend/.env.stg"
BACKUP_DIR="/opt/strapi-backend/backups"

# Database configuration - MUST MATCH YOUR SETUP
DATABASE_NAME="strapi_staging"     # Change to YOUR database name (from .env.stg)
DATABASE_USER="postgres"           # Usually "postgres", but verify in .env.stg
DATABASE_CONTAINER="strapiDB"      # Change to YOUR database service name
STRAPI_CONTAINER="strapi-backend"  # Change to YOUR Strapi service name

# Docker image configuration
DOCKER_REGISTRY="ghcr.io"                    # Keep as-is for GitHub Container Registry
GITHUB_USERNAME="your-github-username"       # Change to YOUR GitHub username (lowercase)
REPO_NAME="your-repo-name"                   # Change to YOUR repository name (lowercase)

# Health check configuration (optional adjustments)
HEALTH_CHECK_URL="http://localhost:1337/admin"  # Keep unless you changed Strapi port
HEALTH_CHECK_TIMEOUT=45                          # Seconds to wait for health check

How to find your configuration values:

# 1. Find your database name
grep "DATABASE_NAME" .env.stg
# Example output: DATABASE_NAME=strapi_staging

# 2. Find your database user
grep "DATABASE_USERNAME" .env.stg
# Example output: DATABASE_USERNAME=postgres

# 3. Find your service names from docker-compose.stg.yml
grep "^  [a-zA-Z]" docker-compose.stg.yml
# Example output:
#   strapi-backend:    ← This is your STRAPI_CONTAINER
#   strapiDB:          ← This is your DATABASE_CONTAINER

# 4. Verify your GitHub username and repo name
# Should match your GHCR image URL: ghcr.io/YOUR_USERNAME/YOUR_REPO

Important Notes:

  • DATABASE_NAME: Must match exactly what's in your .env.stg file

  • DATABASE_USER: Usually "postgres" but check your .env.stg to be sure

  • Service names: Use the service name from docker-compose.stg.yml, NOT the container_name

  • GitHub username/repo: Must be lowercase (GHCR requirement)
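
If you want an extra safety net before running the deployment script, a small pre-flight check can confirm the values you pasted actually exist in your files. This is entirely optional - the function name and arguments here are invented for illustration:

```shell
# Hypothetical pre-flight check: verify the script's config values
# actually appear in .env.stg and docker-compose.stg.yml.
check_config() {
    local env_file=$1 compose_file=$2 db_name=$3 strapi_service=$4
    # the env file must contain an exact DATABASE_NAME=<value> line
    grep -q "^DATABASE_NAME=${db_name}$" "$env_file" \
        || { echo "DATABASE_NAME '${db_name}' not found in ${env_file}"; return 1; }
    # compose service names sit at two-space indentation under "services:"
    grep -q "^  ${strapi_service}:" "$compose_file" \
        || { echo "service '${strapi_service}' not found in ${compose_file}"; return 1; }
    echo "config looks consistent"
}

# Example usage (substitute your real paths and names):
# check_config /opt/strapi-backend/.env.stg /opt/strapi-backend/docker-compose.stg.yml strapi_staging strapi-backend
```

A check like this catches the most common failure mode (copy-pasting the script without updating the config block) before it turns into a failed deployment.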

Make the script executable:

chmod +x deployment-scripts/deploy-staging.sh

Understanding the Deployment Script

Let's walk through what this production-ready script does:

1. Configuration Section (top of the script):

Everything you need to customize is clearly marked at the top:

  • File paths for compose files, backups, and logs

  • Database configuration (name, user, container)

  • Docker image registry settings

  • Health check parameters

Why this matters: No more hunting through the script to find hardcoded values. All configuration is in one place.

2. Utility Functions:

  • log() - Timestamps and logs every action to deployment.log

  • error_exit() - Logs error and exits cleanly

  • show_usage() - Displays help text when script is used incorrectly

  • get_current_version() - Reads currently deployed version from docker-compose

  • record_deployment() - Tracks deployment history with timestamp, version, and status

3. Deployment Functions:

  • create_backup() - Creates pre-deployment database backup with version-specific naming

  • update_compose_file() - Safely updates docker-compose.stg.yml with new image version

  • health_check() - Verifies Strapi responds correctly after deployment (45-second timeout)

  • rollback() - Automatically reverts to previous version if deployment fails
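
To see how get_current_version() and update_compose_file() cooperate, here's the same grep/sed logic run against a throwaway compose file (the image name and tags are invented for the demo):

```shell
# Demo of the version read/update logic on a scratch compose file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
services:
  strapi-backend:
    image: ghcr.io/acme/strapi-app:v1.0.0
EOF

# Read the current tag (same pipeline as get_current_version):
# the greedy .*: eats everything up to the LAST colon, leaving the tag
grep "image:" "$tmp" | grep "strapi-app" | sed 's/.*:\(.*\)/\1/' | head -1
# prints: v1.0.0

# Swap in a new tag (same sed substitution as update_compose_file)
sed -i "s|image: ghcr.io/acme/strapi-app:.*|image: ghcr.io/acme/strapi-app:v1.1.0|g" "$tmp"
grep "image:" "$tmp" | sed 's/.*:\(.*\)/\1/'
# prints: v1.1.0

rm "$tmp"
```

Because the substitution anchors on the full registry/user/repo prefix, other `image:` lines (like your Postgres service) are left untouched.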

4. Main Deployment Flow:

1. Validate input (requires version tag)
2. Create pre-deployment database backup
3. Pull new Docker image from GHCR
4. Update docker-compose.stg.yml with new version
5. Stop Strapi container (NOT the database - avoids an unnecessary database restart)
6. Start Strapi with new version
7. Wait 30 seconds for startup
8. Run health check (up to 45 one-second attempts)
9. On success: Log success, record to history, clean up
10. On failure: Automatic rollback to previous version

5. Key Safety Features:

  • set -e - Script exits immediately on any error

  • Pre-deployment backup - Always creates backup before touching anything

  • Compose file backup - Saves old docker-compose before modification

  • Selective restart - Only restarts Strapi, not database (faster, less risky)

  • Automatic rollback - If health check fails, reverts automatically

  • Deployment history - Tracks all deployments in deployment-history.txt

6. CLI Features:

# Show currently deployed version (useful for debugging)
./deployment-scripts/deploy-staging.sh --current

# Show help message
./deployment-scripts/deploy-staging.sh --help

# Deploy specific version
./deployment-scripts/deploy-staging.sh v20241208-143052-a1b2c3d
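
The version tags in these examples (like v20241208-143052-a1b2c3d) follow a date-time-commit pattern. If you ever need to construct one by hand, something along these lines produces the same shape - note that the short SHA below is a placeholder; in CI it would come from `git rev-parse --short HEAD`:

```shell
# Sketch: build a deploy tag in the vYYYYMMDD-HHMMSS-shortsha shape
# used throughout this article. "a1b2c3d" is a placeholder commit SHA.
short_sha="a1b2c3d"
tag="v$(date +%Y%m%d-%H%M%S)-${short_sha}"
echo "$tag"
```

Embedding both the build time and the commit in the tag means every image in GHCR is traceable back to an exact point in your history.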

Why this script is production-ready:

  • ✅ Configurable: Works with any database name, container name, or setup

  • ✅ Safe: Automatic backups and rollbacks

  • ✅ Observable: Comprehensive logging and deployment history

  • ✅ Fast: Only restarts Strapi, not the database

  • ✅ Reliable: Health checks verify deployment actually worked

  • ✅ User-friendly: CLI flags make it easy to use and debug

This is the same script running successfully in production environments, not a simplified tutorial version.

Test the Script Manually

Before using it in automation, let's verify it works:

# Check currently deployed version
./deployment-scripts/deploy-staging.sh --current
# Should show: Currently deployed version: v2.0.0-rc1

# Get help
./deployment-scripts/deploy-staging.sh --help

# Test actual deployment (use your current version first)
./deployment-scripts/deploy-staging.sh v2.0.0-rc1

# Watch the logs
tail -f /opt/strapi-backend/deployment.log

Expected output:

==========================================
DEPLOYMENT STARTED
==========================================
Current version: v2.0.0-rc1
New version: v2.0.0-rc1

Creating pre-deployment backup...
Backup created: predeployment_v2.0.0-rc1_20241208_143052.sql.gz
Pulling Docker image: v2.0.0-rc1
✅ Docker image pulled successfully
Updating docker-compose.stg.yml with version: v2.0.0-rc1
docker-compose.stg.yml updated successfully
Deploying new version...
Stopping Strapi container...
Starting new version...
Waiting for Strapi to start...
Running health checks...
✅ Health check passed!
==========================================
✅ DEPLOYMENT SUCCESSFUL
==========================================
Version v2.0.0-rc1 is now live
Backup available at: predeployment_v2.0.0-rc1_20241208_143052.sql.gz

Check deployment history:

cat /opt/strapi-backend/deployment-history.txt
# Should show:
# 2024-12-08 14:30:52 | v2.0.0-rc1 | SUCCESS
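
That pipe-delimited history format isn't just for humans - the rollback script in Step 3.5 parses it to find the last known good version. Here's the same parsing logic run against a made-up history file:

```shell
# Demo: how the "previous successful version" is recovered from
# deployment-history.txt (sample entries invented for illustration).
hist=$(mktemp)
printf '%s\n' \
  '2024-12-08 10:00:00 | v1.0.0 | SUCCESS' \
  '2024-12-08 12:00:00 | v1.1.0 | FAILED_ROLLBACK' \
  '2024-12-08 14:30:52 | v1.2.0 | SUCCESS' > "$hist"

# Second-to-last SUCCESS entry = the version to roll back to;
# awk splits on '|' and tr strips the padding spaces
grep "SUCCESS" "$hist" | tail -2 | head -1 | awk -F'|' '{print $2}' | tr -d ' '
# prints: v1.0.0

rm "$hist"
```

This is why the format matters: if you hand-edit the history file, keep the `timestamp | version | status` layout intact or rollbacks will pick the wrong version.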

If it fails:

Check the deployment log:

tail -50 /opt/strapi-backend/deployment.log

Once the script runs successfully, you're ready to automate it with GitHub Actions!


Step 3.5: Create Rollback Script

While the deployment script has automatic rollback for failed deployments, you also need a manual rollback script for situations like:

  • Bug discovered hours after deployment

  • Need to rollback just the app or just the database

  • Checking deployment status and history

Create the rollback script:

nano deployment-scripts/rollback-staging.sh

Paste this complete script:

#!/bin/bash
# ============================================================================
# Rollback Script for Strapi v5 Staging
# Capabilities: App rollback, Database rollback, Status check
# ============================================================================

set -e

# ============================================================================
# Configuration - MUST MATCH YOUR DEPLOYMENT SCRIPT
# ============================================================================
COMPOSE_FILE="/opt/strapi-backend/docker-compose.stg.yml"
ENV_FILE="/opt/strapi-backend/.env.stg"
BACKUP_DIR="/opt/strapi-backend/backups"
DEPLOYMENT_LOG="/opt/strapi-backend/deployment.log"
DEPLOYMENT_HISTORY="/opt/strapi-backend/deployment-history.txt"

DATABASE_NAME="strapi_staging"     # ← UPDATE: Change to match deployment script
DATABASE_USER="postgres"
DATABASE_CONTAINER="strapiDB"      # ← UPDATE: Change to match deployment script (use service name)
STRAPI_CONTAINER="strapi-backend"  # ← UPDATE: Change to match deployment script (use service name)

DOCKER_REGISTRY="ghcr.io"
GITHUB_USERNAME="your-github-username"   # ← UPDATE: Change to your GitHub username (lowercase)
REPO_NAME="your-repo-name"               # ← UPDATE: Change to your repo name (lowercase)

# ============================================================================
# Functions
# ============================================================================

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

error_exit() {
    log "ERROR: $1"
    exit 1
}

show_usage() {
    echo "Rollback Script for Strapi v5 Staging"
    echo ""
    echo "Usage:"
    echo "  $0 app [version]           Rollback application to previous or specific version"
    echo "  $0 database <backup_file>  Restore database from backup file"
    echo "  $0 full                    Rollback both app and database to last known good state"
    echo "  $0 status                  Show current status and available rollback versions"
    echo ""
    echo "Examples:"
    echo "  $0 status"
    echo "  $0 app"
    echo "  $0 app v20241208-120000-abc1234"
    echo "  $0 database predeployment_v20241208_143052.sql.gz"
    echo "  $0 full"
    exit 1
}

get_current_version() {
    grep "image:" "$COMPOSE_FILE" | grep "$REPO_NAME" | sed 's/.*:\(.*\)/\1/' | head -1
}

get_previous_version() {
    grep "SUCCESS" "$DEPLOYMENT_HISTORY" | tail -2 | head -1 | awk -F'|' '{print $2}' | tr -d ' '
}

show_status() {
    echo "=========================================="
    echo "CURRENT STATUS"
    echo "=========================================="
    echo ""

    echo "Current Version:"
    echo "  $(get_current_version)"
    echo ""

    echo "Container Status:"
    docker compose -f "$COMPOSE_FILE" ps
    echo ""

    echo "Recent Deployment History:"
    tail -5 "$DEPLOYMENT_HISTORY" 2>/dev/null || echo "  No deployment history found"
    echo ""

    echo "Available Rollback Versions:"
    grep "SUCCESS" "$DEPLOYMENT_HISTORY" | tail -5 | awk -F'|' '{print "  "$2}' || echo "  No successful deployments found"
    echo ""

    echo "Available Database Backups:"
    ls -lht "$BACKUP_DIR"/*.sql.gz 2>/dev/null | head -5 || echo "  No backups found"
    echo ""
}

rollback_app() {
    local target_version=$1

    if [ -z "$target_version" ]; then
        target_version=$(get_previous_version)
        if [ -z "$target_version" ]; then
            error_exit "No previous version found in deployment history"
        fi
        log "No version specified, rolling back to previous version: $target_version"
    else
        log "Rolling back to specified version: $target_version"
    fi

    local current_version=$(get_current_version)

    if [ "$current_version" = "$target_version" ]; then
        log "Already on version $target_version"
        exit 0
    fi

    echo ""
    echo "=========================================="
    echo "APPLICATION ROLLBACK"
    echo "=========================================="
    echo "Current: $current_version"
    echo "Target:  $target_version"
    echo ""
    read -p "Continue with rollback? (y/N): " confirm

    if [ "$confirm" != "y" ] && [ "$confirm" != "Y" ]; then
        log "Rollback cancelled"
        exit 0
    fi

    log "Starting application rollback..."

    # Pull target version
    log "Pulling Docker image for version: $target_version"
    docker pull "${DOCKER_REGISTRY}/${GITHUB_USERNAME}/${REPO_NAME}:${target_version}"

    # Update docker-compose
    log "Updating docker-compose.stg.yml..."
    sed -i "s|image: ${DOCKER_REGISTRY}/${GITHUB_USERNAME}/${REPO_NAME}:.*|image: ${DOCKER_REGISTRY}/${GITHUB_USERNAME}/${REPO_NAME}:${target_version}|g" "$COMPOSE_FILE"

    # Restart Strapi
    log "Restarting Strapi with version: $target_version"
    docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" up -d "$STRAPI_CONTAINER"

    log "Waiting for startup..."
    sleep 30

    # Health check
    if curl -sf http://localhost:1337/admin > /dev/null 2>&1; then
        log "✅ Rollback successful!"
        log "Now running version: $target_version"
    else
        log "⚠️  WARNING: Health check failed"
        log "Check application logs: docker compose -f $COMPOSE_FILE logs $STRAPI_CONTAINER"
    fi
}

rollback_database() {
    local backup_file=$1

    if [ -z "$backup_file" ]; then
        error_exit "Backup file not specified. Usage: $0 database <backup_file>"
    fi

    local full_path="$BACKUP_DIR/$backup_file"

    if [ ! -f "$full_path" ]; then
        error_exit "Backup file not found: $full_path"
    fi

    echo ""
    echo "=========================================="
    echo "DATABASE ROLLBACK"
    echo "=========================================="
    echo "⚠️  WARNING: This will replace ALL database data!"
    echo "Backup file: $backup_file"
    echo ""
    read -p "Continue with database restore? (y/N): " confirm

    if [ "$confirm" != "y" ] && [ "$confirm" != "Y" ]; then
        log "Database rollback cancelled"
        exit 0
    fi

    log "Starting database rollback..."

    # Create safety backup
    log "Creating safety backup..."
    local safety_backup="safety_$(date +%Y%m%d_%H%M%S).sql"
    docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" exec -T "$DATABASE_CONTAINER" \
        pg_dump -U "$DATABASE_USER" -d "$DATABASE_NAME" > "$BACKUP_DIR/$safety_backup" 2>/dev/null
    gzip "$BACKUP_DIR/$safety_backup"
    log "Safety backup created: ${safety_backup}.gz"

    # Stop Strapi
    log "Stopping Strapi..."
    docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" stop "$STRAPI_CONTAINER"

    # Drop and recreate database
    log "Preparing database..."
    docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" exec -T "$DATABASE_CONTAINER" \
        psql -U "$DATABASE_USER" -c "DROP DATABASE IF EXISTS \"$DATABASE_NAME\";"
    docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" exec -T "$DATABASE_CONTAINER" \
        psql -U "$DATABASE_USER" -c "CREATE DATABASE \"$DATABASE_NAME\";"

    # Restore backup
    log "Restoring database from backup..."
    if [[ "$backup_file" == *.gz ]]; then
        gunzip -c "$full_path" | docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" exec -T "$DATABASE_CONTAINER" \
            psql -U "$DATABASE_USER" -d "$DATABASE_NAME"
    else
        cat "$full_path" | docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" exec -T "$DATABASE_CONTAINER" \
            psql -U "$DATABASE_USER" -d "$DATABASE_NAME"
    fi

    # Start Strapi
    log "Starting Strapi..."
    docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" up -d "$STRAPI_CONTAINER"

    log "Waiting for startup..."
    sleep 30

    if curl -sf http://localhost:1337/admin > /dev/null 2>&1; then
        log "✅ Database rollback successful!"
        log "Safety backup available at: $BACKUP_DIR/${safety_backup}.gz"
    else
        log "⚠️  WARNING: Health check failed"
    fi
}

rollback_full() {
    echo ""
    echo "=========================================="
    echo "FULL ROLLBACK"
    echo "=========================================="
    echo "⚠️  This will rollback BOTH application and database"
    echo "to the last known good state."
    echo ""
    read -p "Continue with full rollback? (y/N): " confirm

    if [ "$confirm" != "y" ] && [ "$confirm" != "Y" ]; then
        log "Full rollback cancelled"
        exit 0
    fi

    # Get last successful deployment
    local last_good_version=$(get_previous_version)
    if [ -z "$last_good_version" ]; then
        error_exit "No previous successful deployment found"
    fi

    # Find corresponding backup
    local backup_file=$(ls -t "$BACKUP_DIR"/predeployment_${last_good_version}_*.sql.gz 2>/dev/null | head -1)
    if [ -z "$backup_file" ]; then
        log "WARNING: No backup found for version $last_good_version"
        log "Rolling back application only..."
        rollback_app "$last_good_version"
    else
        log "Found backup: $(basename "$backup_file")"
        log "Rolling back to version: $last_good_version"

        # Rollback database first
        rollback_database "$(basename "$backup_file")"

        # Then rollback app
        rollback_app "$last_good_version"
    fi

    log "✅ Full rollback completed"
}

# ============================================================================
# Main Logic
# ============================================================================

if [ $# -eq 0 ]; then
    show_usage
fi

case "$1" in
    status)
        show_status
        ;;
    app)
        rollback_app "$2"
        ;;
    database)
        rollback_database "$2"
        ;;
    full)
        rollback_full
        ;;
    *)
        show_usage
        ;;
esac

Save and exit, then make it executable:

chmod +x deployment-scripts/rollback-staging.sh

Update Rollback Configuration

Update these variables to match your deployment script:

nano deployment-scripts/rollback-staging.sh

Find and update:

DATABASE_NAME="strapi_staging"     # Must match .env.stg
GITHUB_USERNAME="your-github-username"   # Your GitHub username (lowercase)
REPO_NAME="your-repo-name"               # Your repository name (lowercase)

How to find your values:

# Database name
grep DATABASE_NAME .env.stg

# Check docker-compose for the container/service names the script targets
grep "container_name:" docker-compose.stg.yml

Test the Rollback Script

Check current status:

./deployment-scripts/rollback-staging.sh status

Expected output:

==========================================
CURRENT STATUS
==========================================

Current Version:
  v20241208-120000-abc1234

Container Status:
NAME              IMAGE                                    STATUS
strapi-backend    ghcr.io/you/your-repo:v20241208...      Up
strapiDB          postgres:16-alpine                       Up

Recent Deployment History:
2024-12-08 12:00:15 | v20241208-120000-abc1234 | SUCCESS
2024-12-08 14:30:22 | v20241208-143000-def5678 | SUCCESS

Available Rollback Versions:
  v20241208-120000-abc1234
  v20241208-143000-def5678

Available Database Backups:
-rw-r--r-- 1 deploy deploy 4.2M Dec  8 14:30 predeployment_v20241208_143052.sql.gz
-rw-r--r-- 1 deploy deploy 4.1M Dec  8 12:00 predeployment_v20241208_120015.sql.gz

Rollback Commands

Rollback app to previous version:

./deployment-scripts/rollback-staging.sh app

Rollback app to specific version:

./deployment-scripts/rollback-staging.sh app v20241208-120000-abc1234

Rollback database only:

./deployment-scripts/rollback-staging.sh database predeployment_v20241208_143052.sql.gz

Full rollback (app + database):

./deployment-scripts/rollback-staging.sh full

Why this script is important:

The deployment script has automatic rollback for failed deployments, but this separate rollback script is essential for:

  • Manual rollbacks when bugs are discovered hours/days later

  • Selective rollbacks - app-only or database-only

  • Status checking - see what's deployed and available rollback options

  • Emergency recovery - quick access to all rollback capabilities


Step 4: Create Auto-Deploy Workflow (Primary Option)

This workflow automatically deploys when code is merged to dev but requires manual approval before deployment actually happens.

Create the Workflow File

On your local machine, in your project:

# Make sure .github/workflows directory exists
mkdir -p .github/workflows

# Create the auto-deploy workflow
nano .github/workflows/staging-deploy.yml

Paste this complete workflow:

name: 🚀 Deploy to Staging (Auto)

# ═══════════════════════════════════════════════════════════════════
# Triggers: Automatically on merge to dev branch
# ═══════════════════════════════════════════════════════════════════
on:
  push:
    branches:
      - dev

# ═══════════════════════════════════════════════════════════════════
# Configuration
# ═══════════════════════════════════════════════════════════════════
env:
  NODE_VERSION: '20'
  REGISTRY: 'ghcr.io'
  APPROVERS: 'your-github-username'  # UPDATE: Change to your GitHub username

permissions:
  contents: read
  packages: write
  issues: write        # Required for approval action
  pull-requests: write # Required for approval action

jobs:
  # ═══════════════════════════════════════════════════════════════════
  # JOB 0: Setup - Convert variables to lowercase
  # ═══════════════════════════════════════════════════════════════════
  setup:
    name: ⚙️ Setup
    runs-on: ubuntu-latest
    outputs:
      repository_owner: ${{ steps.lowercase.outputs.repository_owner }}
      repository_name: ${{ steps.lowercase.outputs.repository_name }}
      image_registry: ${{ steps.lowercase.outputs.image_registry }}
      approvers: ${{ steps.lowercase.outputs.approvers }}

    steps:
      - name: 🔄 Convert to lowercase
        id: lowercase
        run: |
          # Docker registry requires lowercase
          echo "repository_owner=$(echo '${{ github.repository_owner }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
          echo "repository_name=$(echo '${{ github.event.repository.name }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
          echo "image_registry=$(echo '${{ env.REGISTRY }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
          echo "approvers=${{ env.APPROVERS }}" >> $GITHUB_OUTPUT

          echo "## ⚙️ Configuration" >> $GITHUB_STEP_SUMMARY
          echo "**Registry:** \`${{ env.REGISTRY }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Repository:** \`${{ github.repository }}\`" >> $GITHUB_STEP_SUMMARY

  # ═══════════════════════════════════════════════════════════════════
  # JOB 1: Security Scan
  # ═══════════════════════════════════════════════════════════════════
  security-scan:
    name: 🔒 Security Scan
    runs-on: ubuntu-latest
    needs: [setup]

    steps:
      - name: 📥 Checkout code
        uses: actions/checkout@v4

      - name: 🔧 Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: 📦 Install dependencies
        run: npm ci

      - name: 🔍 Run security audit
        run: |
          echo "## 🔒 Security Scan Results" >> $GITHUB_STEP_SUMMARY
          if npm audit --audit-level=moderate; then
            echo "✅ No security vulnerabilities found!" >> $GITHUB_STEP_SUMMARY
          else
            echo "⚠️  Security vulnerabilities detected" >> $GITHUB_STEP_SUMMARY
          fi
        continue-on-error: true

  # ═══════════════════════════════════════════════════════════════════
  # JOB 2: Build and Push Docker Image
  # ═══════════════════════════════════════════════════════════════════
  build:
    name: 🏗️ Build & Push
    runs-on: ubuntu-latest
    needs: [setup, security-scan]
    outputs:
      VERSION: ${{ steps.generate-version.outputs.VERSION }}
      IMAGE_TAG: ${{ steps.generate-version.outputs.IMAGE_TAG }}

    steps:
      - name: 📥 Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for git hash

      - name: 🔒 Generate version tag
        id: generate-version
        run: |
          # Auto-generate: vYYYYMMDD-HHMMSS-commithash
          VERSION="v$(date +'%Y%m%d-%H%M%S')-$(git rev-parse --short HEAD)"
          IMAGE_TAG="${{ needs.setup.outputs.image_registry }}/${{ needs.setup.outputs.repository_owner }}/${{ needs.setup.outputs.repository_name }}:$VERSION"

          echo "VERSION=$VERSION" >> $GITHUB_OUTPUT
          echo "IMAGE_TAG=$IMAGE_TAG" >> $GITHUB_OUTPUT

          echo "## 📦 Build Information" >> $GITHUB_STEP_SUMMARY
          echo "**Version:** \`$VERSION\`" >> $GITHUB_STEP_SUMMARY
          echo "**Image:** \`$IMAGE_TAG\`" >> $GITHUB_STEP_SUMMARY
          echo "**Commit:** \`${{ github.sha }}\`" >> $GITHUB_STEP_SUMMARY

      - name: 🔐 Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ needs.setup.outputs.image_registry }}
          username: ${{ needs.setup.outputs.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: 🛠️ Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: 🐳 Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./Dockerfile.prod
          push: true
          tags: ${{ steps.generate-version.outputs.IMAGE_TAG }}
          platforms: linux/amd64
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: ✅ Build complete
        run: |
          echo "## ✅ Build Successful" >> $GITHUB_STEP_SUMMARY
          echo "Image pushed to GHCR!" >> $GITHUB_STEP_SUMMARY

  # ═══════════════════════════════════════════════════════════════════
  # JOB 3: Manual Approval Gate (Free Tier Compatible)
  # ═══════════════════════════════════════════════════════════════════
  approval:
    name: ⏸️ Wait for Approval
    runs-on: ubuntu-latest
    needs: [setup, security-scan, build]
    timeout-minutes: 360  # 6-hour timeout

    steps:
      - name: 📝 Request deployment approval
        uses: trstringer/manual-approval@v1
        timeout-minutes: 360
        with:
          secret: ${{ github.TOKEN }}
          approvers: ${{ needs.setup.outputs.approvers }}
          minimum-approvals: 1
          issue-title: "🚀 Deploy ${{ needs.build.outputs.VERSION }} to Staging?"
          issue-body: |
            ## Deployment Approval Required

            **Version:** `${{ needs.build.outputs.VERSION }}`
            **Image:** `${{ needs.build.outputs.IMAGE_TAG }}`
            **Branch:** `${{ github.ref_name }}`
            **Commit:** `${{ github.sha }}`
            **Triggered by:** @${{ github.actor }}

            ---

            ### 📋 Pre-Deployment Checklist
            - [ ] Security scan passed
            - [ ] Docker image built successfully
            - [ ] Ready to deploy to staging

            ---

            **To approve:** Comment `/approve` or `approved` or `lgtm`
            **To deny:** Comment `/deny` or `denied`

            *This approval will auto-cancel in 6 hours if not responded to.*

  # ═══════════════════════════════════════════════════════════════════
  # JOB 4: Deploy to Staging Server
  # ═══════════════════════════════════════════════════════════════════
  deploy:
    name: 🚀 Deploy to Staging
    runs-on: ubuntu-latest
    needs: [setup, build, approval]  # Waits for approval
    # Note: No 'environment: staging' - using issue-based approval instead

    steps:
      - name: 📝 Deployment started
        run: |
          echo "## 🚀 Deploying to Staging" >> $GITHUB_STEP_SUMMARY
          echo "**Version:** \`${{ needs.build.outputs.VERSION }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Image:** \`${{ needs.build.outputs.IMAGE_TAG }}\`" >> $GITHUB_STEP_SUMMARY

      - name: 🔐 Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.STAGING_SSH_KEY }}" > ~/.ssh/staging_key
          chmod 600 ~/.ssh/staging_key
          ssh-keyscan -H ${{ secrets.STAGING_HOST }} >> ~/.ssh/known_hosts

      - name: 🚀 Deploy to server
        env:
          VERSION: ${{ needs.build.outputs.VERSION }}
        run: |
          ssh -i ~/.ssh/staging_key -o StrictHostKeyChecking=no \
            ${{ secrets.STAGING_USER }}@${{ secrets.STAGING_HOST }} << ENDSSH
            cd /opt/strapi-backend

            echo "🚀 Starting deployment of version: $VERSION"

            # Run deployment script
            ./deployment-scripts/deploy-staging.sh $VERSION

            # Capture exit code
            exit_code=\$?
            if [ \$exit_code -ne 0 ]; then
              echo "❌ Deployment failed with exit code: \$exit_code"
              exit \$exit_code
            fi

            echo "✅ Deployment completed successfully"
          ENDSSH

      - name: 🏥 Health check
        run: |
          echo "Waiting 30 seconds for application..."
          sleep 30

          max_attempts=5
          attempt=1

          while [ $attempt -le $max_attempts ]; do
            echo "Health check attempt $attempt/$max_attempts..."

            if ssh -i ~/.ssh/staging_key ${{ secrets.STAGING_USER }}@${{ secrets.STAGING_HOST }} \
              'curl -f -s http://localhost:1337/admin > /dev/null'; then
              echo "✅ Health check passed!"
              echo "## ✅ Deployment Successful" >> $GITHUB_STEP_SUMMARY
              echo "Application is healthy!" >> $GITHUB_STEP_SUMMARY
              exit 0
            fi

            sleep 10
            attempt=$((attempt + 1))
          done

          echo "❌ Health check failed after $max_attempts attempts"
          exit 1

      - name: 🧹 Cleanup
        if: always()
        run: rm -f ~/.ssh/staging_key

      - name: 🎉 Deployment summary
        if: success()
        run: |
          echo "## 🎉 Deployment Complete" >> $GITHUB_STEP_SUMMARY
          echo "**Version:** \`${{ needs.build.outputs.VERSION }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Status:** ✅ Success" >> $GITHUB_STEP_SUMMARY
          echo "**URL:** https://api.yourdomain.com" >> $GITHUB_STEP_SUMMARY

Save and exit.

Understanding the Auto-Deploy Workflow

Let's break down the key parts:

Trigger Configuration:

on:
  push:
    branches:
      - dev

This workflow runs automatically when code is pushed to (or merged into) the dev branch. Every merge to dev triggers a deployment attempt.

Setup Job (Lowercase Conversion):

setup:
  steps:
    - name: 🔄 Convert to lowercase
      run: |
        echo "repository_owner=$(echo '${{ github.repository_owner }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT

Why we need this:

Docker registries require lowercase names, but GitHub usernames and repo names can have uppercase letters. This job converts everything to lowercase and makes it available to other jobs via outputs.

Without this conversion:

  • GitHub username: YourGitHubUsername

  • Docker tries: ghcr.io/YourGitHubUsername/repo → FAILS (uppercase not allowed)

  • With conversion: ghcr.io/yourgithubusername/repo → WORKS
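You can reproduce the conversion locally with the exact `tr` invocation the setup job runs:

```shell
# Same conversion the setup job applies to owner and repo names
echo "YourGitHubUsername" | tr '[:upper:]' '[:lower:]'
# prints: yourgithubusername
```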

Version Generation:

- name: 🔒 Generate version tag
  run: |
    VERSION="v$(date +'%Y%m%d-%H%M%S')-$(git rev-parse --short HEAD)"

Format: v20241215-143052-a7f3d2c

This creates a unique version for every deployment:

  • 20241215 - Date (December 15, 2024)

  • 143052 - Time (14:30:52)

  • a7f3d2c - Git commit hash (short)

Why this format?

  • Chronological sorting (newest versions sort last)

  • Includes timestamp (know exactly when it was built)

  • Includes commit hash (trace back to exact code)

  • Automatically unique (no version conflicts)
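Because the date and time fields are fixed-width and zero-padded, plain lexicographic sorting is also chronological sorting. A quick local check (example tags only):

```shell
# Fixed-width timestamps make lexicographic order equal chronological order
printf '%s\n' \
  v20241215-143052-a7f3d2c \
  v20241208-120000-abc1234 \
  v20241215-091500-e9f0a1b | sort
# prints:
# v20241208-120000-abc1234
# v20241215-091500-e9f0a1b
# v20241215-143052-a7f3d2c
```

This is also why `ls -t` and plain `sort` agree on which version is "latest" in the rollback script.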

Approval Gate (Issue-Based - Free Tier Compatible):

approval:
  name: ⏸️ Wait for Approval
  steps:
    - uses: trstringer/manual-approval@v1
      with:
        approvers: ${{ needs.setup.outputs.approvers }}
        minimum-approvals: 1

How the approval workflow works:

  1. Workflow pauses after the build completes

  2. GitHub Issue is automatically created with deployment details

  3. Approvers get notified (anyone watching the repo sees the issue)

  4. Approvers comment /approve, approved, or lgtm on the issue

  5. Workflow continues to deployment after approval

  6. Auto-cancels after 6 hours if no response

This approval method works on GitHub Free tier because it uses Issues (available on all tiers) instead of Environment protection rules (which require GitHub Pro/Team for required reviewers).

deploy:
  needs: [setup, build, approval]  # Waits for approval before running
  # Note: No 'environment: staging' - using issue-based approval instead

The deploy job won't start until the approval job completes successfully.

We'll show you how to configure Environment-based approvals (for Pro/Team users) in Step 6.


Step 5: Create Manual-Dispatch Workflow (Bonus Option)

This workflow can be triggered manually from any branch, giving you full control over what gets deployed and when.

Create the Workflow File

# On your local machine
nano .github/workflows/staging-deploy-manual.yml

Paste this complete workflow:

name: 🚀 Deploy to Staging (Manual)

# ═══════════════════════════════════════════════════════════════════
# Triggers: Manual dispatch from any branch
# ═══════════════════════════════════════════════════════════════════
on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Version tag (leave blank for auto: vYYYYMMDD-HHMMSS-hash)'
        required: false
        type: string

# ═══════════════════════════════════════════════════════════════════
# Configuration
# ═══════════════════════════════════════════════════════════════════
env:
  NODE_VERSION: '20'
  REGISTRY: 'ghcr.io'

permissions:
  contents: read
  packages: write

jobs:
  # ═══════════════════════════════════════════════════════════════════
  # JOB 0: Setup - Convert variables to lowercase
  # ═══════════════════════════════════════════════════════════════════
  setup:
    name: ⚙️ Setup
    runs-on: ubuntu-latest
    outputs:
      repository_owner: ${{ steps.lowercase.outputs.repository_owner }}
      repository_name: ${{ steps.lowercase.outputs.repository_name }}
      image_registry: ${{ steps.lowercase.outputs.image_registry }}

    steps:
      - name: 🔄 Convert to lowercase
        id: lowercase
        run: |
          echo "repository_owner=$(echo '${{ github.repository_owner }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
          echo "repository_name=$(echo '${{ github.event.repository.name }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
          echo "image_registry=$(echo '${{ env.REGISTRY }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT

          echo "## ⚙️ Configuration" >> $GITHUB_STEP_SUMMARY
          echo "**Registry:** \`${{ env.REGISTRY }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Repository:** \`${{ github.repository }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Branch:** \`${{ github.ref_name }}\`" >> $GITHUB_STEP_SUMMARY

  # ═══════════════════════════════════════════════════════════════════
  # JOB 1: Security Scan
  # ═══════════════════════════════════════════════════════════════════
  security-scan:
    name: 🔒 Security Scan
    runs-on: ubuntu-latest
    needs: [setup]

    steps:
      - name: 📥 Checkout code
        uses: actions/checkout@v4

      - name: 🔧 Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: 📦 Install dependencies
        run: npm ci

      - name: 🔍 Run security audit
        run: |
          echo "## 🔒 Security Scan Results" >> $GITHUB_STEP_SUMMARY
          if npm audit --audit-level=moderate; then
            echo "✅ No security vulnerabilities found!" >> $GITHUB_STEP_SUMMARY
          else
            echo "⚠️  Security vulnerabilities detected" >> $GITHUB_STEP_SUMMARY
          fi
        continue-on-error: true

  # ═══════════════════════════════════════════════════════════════════
  # JOB 2: Build and Push Docker Image
  # ═══════════════════════════════════════════════════════════════════
  build:
    name: 🏗️ Build & Push
    runs-on: ubuntu-latest
    needs: [setup, security-scan]
    outputs:
      VERSION: ${{ steps.generate-version.outputs.VERSION }}
      IMAGE_TAG: ${{ steps.generate-version.outputs.IMAGE_TAG }}

    steps:
      - name: 📥 Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: 🔒 Generate version tag
        id: generate-version
        run: |
          if [ -z "${{ inputs.version }}" ]; then
            # Auto-generate version
            VERSION="v$(date +'%Y%m%d-%H%M%S')-$(git rev-parse --short HEAD)"
          else
            # Use provided version
            VERSION="${{ inputs.version }}"
          fi

          IMAGE_TAG="${{ needs.setup.outputs.image_registry }}/${{ needs.setup.outputs.repository_owner }}/${{ needs.setup.outputs.repository_name }}:$VERSION"

          echo "VERSION=$VERSION" >> $GITHUB_OUTPUT
          echo "IMAGE_TAG=$IMAGE_TAG" >> $GITHUB_OUTPUT

          echo "## 📦 Build Information" >> $GITHUB_STEP_SUMMARY
          echo "**Version:** \`$VERSION\`" >> $GITHUB_STEP_SUMMARY
          echo "**Image:** \`$IMAGE_TAG\`" >> $GITHUB_STEP_SUMMARY
          echo "**Branch:** \`${{ github.ref_name }}\`" >> $GITHUB_STEP_SUMMARY

      - name: 🔐 Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ needs.setup.outputs.image_registry }}
          username: ${{ needs.setup.outputs.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: 🛠️ Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: 🐳 Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./Dockerfile.prod
          push: true
          tags: ${{ steps.generate-version.outputs.IMAGE_TAG }}
          platforms: linux/amd64
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: ✅ Build complete
        run: |
          echo "## ✅ Build Successful" >> $GITHUB_STEP_SUMMARY
          echo "Image pushed to GHCR!" >> $GITHUB_STEP_SUMMARY

  # ═══════════════════════════════════════════════════════════════════
  # JOB 3: Deploy to Staging (No Approval Required)
  # ═══════════════════════════════════════════════════════════════════
  deploy:
    name: 🚀 Deploy to Staging
    runs-on: ubuntu-latest
    needs: [setup, build]
    # Note: No 'environment' setting = no approval required

    steps:
      - name: 📝 Deployment started
        run: |
          echo "## 🚀 Deploying to Staging" >> $GITHUB_STEP_SUMMARY
          echo "**Version:** \`${{ needs.build.outputs.VERSION }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Branch:** \`${{ github.ref_name }}\`" >> $GITHUB_STEP_SUMMARY

      - name: 🔐 Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.STAGING_SSH_KEY }}" > ~/.ssh/staging_key
          chmod 600 ~/.ssh/staging_key
          ssh-keyscan -H ${{ secrets.STAGING_HOST }} >> ~/.ssh/known_hosts

      - name: 🚀 Deploy to server
        env:
          VERSION: ${{ needs.build.outputs.VERSION }}
        run: |
          ssh -i ~/.ssh/staging_key -o StrictHostKeyChecking=no \
            ${{ secrets.STAGING_USER }}@${{ secrets.STAGING_HOST }} << ENDSSH
            cd /opt/strapi-backend

            echo "🚀 Starting deployment of version: $VERSION"

            # Run deployment script
            ./deployment-scripts/deploy-staging.sh $VERSION

            exit_code=\$?
            if [ \$exit_code -ne 0 ]; then
              echo "❌ Deployment failed"
              exit \$exit_code
            fi

            echo "✅ Deployment completed"
          ENDSSH

      - name: 🏥 Health check
        run: |
          echo "Waiting 30 seconds..."
          sleep 30

          max_attempts=5
          attempt=1

          while [ $attempt -le $max_attempts ]; do
            if ssh -i ~/.ssh/staging_key ${{ secrets.STAGING_USER }}@${{ secrets.STAGING_HOST }} \
              'curl -f -s http://localhost:1337/admin > /dev/null'; then
              echo "✅ Health check passed!"
              exit 0
            fi
            sleep 10
            attempt=$((attempt + 1))
          done

          echo "❌ Health check failed"
          exit 1

      - name: 🧹 Cleanup
        if: always()
        run: rm -f ~/.ssh/staging_key

      - name: 🎉 Deployment summary
        if: success()
        run: |
          echo "## 🎉 Deployment Complete" >> $GITHUB_STEP_SUMMARY
          echo "**Version:** \`${{ needs.build.outputs.VERSION }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Branch:** \`${{ github.ref_name }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Status:** ✅ Success" >> $GITHUB_STEP_SUMMARY

Save and exit.

Key Differences from Auto-Deploy:

1. Trigger Mechanism:

on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Version tag (leave blank for auto)'
        required: false

  • workflow_dispatch means "run when someone clicks the button"

  • inputs provides optional version override

  • Can be triggered from ANY branch (not just dev)

2. No Approval Required:

deploy:
  name: 🚀 Deploy to Staging
  # Note: No 'environment: staging' line

With no environment: staging setting (and no approval job), the deploy job runs immediately after the build completes. No approval gate.

Why skip approval for manual workflows?

You're already being intentional by clicking "Run workflow" and selecting a branch. Adding an approval gate on top of that is redundant. The manual action itself is the approval.

3. Optional Version Input:

if [ -z "${{ inputs.version }}" ]; then
  VERSION="v$(date +'%Y%m%d-%H%M%S')-$(git rev-parse --short HEAD)"
else
  VERSION="${{ inputs.version }}"
fi

You can either:

  • Leave version blank β†’ Auto-generates v20241215-143052-a7f3d2c

  • Provide custom version β†’ Uses v2.0.0-beta1 or whatever you enter

This is useful when you want meaningful version names for specific releases.
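If you prefer the terminal over the Actions UI, the same dispatch can be triggered with the GitHub CLI, assuming `gh` is installed and authenticated; the branch name `feature/my-branch` here is a placeholder:

```shell
# Trigger the manual workflow from a feature branch with a custom version
gh workflow run staging-deploy-manual.yml \
  --ref feature/my-branch \
  -f version=v2.0.0-beta1

# Or omit the input to get an auto-generated timestamp version from dev
gh workflow run staging-deploy-manual.yml --ref dev
```

`--ref` selects the branch the workflow runs against, and `-f` supplies the optional version input.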


Step 6: Configure Deployment Approval

The auto-deploy workflow includes a manual approval gate to prevent accidental deployments. There are two ways to set this up, depending on your GitHub plan.

Understanding GitHub Free Tier Limitations

GitHub Free tier:

  • ✅ Environments exist and work

  • ✅ Can create "staging" environment

  • ❌ Cannot add "Required reviewers" to environments

  • ❌ Environment protection rules require GitHub Pro or Team

GitHub Pro/Team:

  • ✅ Everything Free tier has

  • ✅ Can add Required reviewers

  • ✅ Full environment protection rules

  • ✅ More sophisticated approval workflows

Since this is a $6/month series, most readers will be on GitHub Free tier. We'll show you both approaches.


Approach 1: Issue-Based Approval (Works on GitHub Free Tier)

This is what the workflow already uses. The approval happens through GitHub Issues, which are available on all GitHub plans.

How It Works:

  1. Workflow pauses after build completes

  2. GitHub Issue is created automatically with title like "🚀 Deploy v20241215-143052 to Staging?"

  3. Issue includes deployment details: version, branch, commit, who triggered it

  4. Approver comments on the issue: /approve, approved, or lgtm

  5. Workflow continues to deployment after approval received

  6. Auto-cancels after 6 hours if no response

  7. Issue closes automatically after workflow completes

Required Permissions (Already Configured):

The workflow already has these permissions set:

permissions:
  contents: read
  packages: write
  issues: write        # Required for creating approval issues
  pull-requests: write # Required for approval action

Configure Approvers:

Update the approvers list in the workflow:

env:
  NODE_VERSION: '20'
  REGISTRY: 'ghcr.io'
  APPROVERS: 'your-github-username'  # UPDATE THIS!

To add multiple approvers:

APPROVERS: 'alice,bob,charlie'  # Comma-separated, no spaces

That's it! The issue-based approval is already configured and will work immediately.


Approach 2: Environment-Based Approval (Requires GitHub Pro/Team)

If you have GitHub Pro or Team (or Enterprise), you can use Environment protection rules for a more integrated approval experience.

Step 1: Create the Staging Environment

  1. Go to your GitHub repository

  2. Click Settings (top menu)

  3. Click Environments (left sidebar)

  4. Click New environment

  5. Name: staging

  6. Click Configure environment

Step 2: Configure Environment Protection Rules

On the staging environment configuration page:

1. Required reviewers:

  • Check ✅ "Required reviewers"

  • Add yourself (or team members who can approve deployments)

  • You can add up to 6 reviewers

  • Deployment requires approval from at least 1 reviewer

2. Wait timer:

  • Leave unchecked (no delay needed for staging)

  • This feature is useful for production (e.g., "wait 5 minutes before deploying")

3. Deployment branches:

  • Select "Selected branches"

  • Click "Add deployment branch rule"

  • Pattern: dev

  • This ensures only the dev branch can deploy to staging

Click Save protection rules.

Step 3: Modify the Workflow

Replace the approval job with the environment setting:

# Remove the entire approval job (lines with trstringer/manual-approval)
# Instead, add environment to the deploy job:

deploy:
  name: 🚀 Deploy to Staging
  runs-on: ubuntu-latest
  needs: [setup, build]  # Remove 'approval' from needs
  environment: staging   # Add this line

Also remove these permissions (not needed for environment-based):

# Remove these lines:
  issues: write
  pull-requests: write

How Environment-Based Approval Works:

  1. Workflow triggers (merge to dev)

  2. Security scan passes

  3. Docker image builds and pushes to GHCR

  4. Workflow pauses at deploy job (because of environment: staging)

  5. GitHub shows "Waiting for review" status

  6. Reviewer gets notification

  7. Reviewer clicks "Review pending deployments" in Actions tab

  8. Reviewer approves or rejects

  9. If approved → Deployment continues

  10. If rejected → Deployment cancels

Advantages over Issue-based:

  • Integrated into GitHub Actions UI

  • Shows deployment history in Environment page

  • More sophisticated branch rules

  • Better for teams with strict deployment policies

For this $6/month series: Stick with Issue-Based Approval (Approach 1). It's already configured and works perfectly.


Step 7: Commit and Push Both Workflows

Now let's activate both workflows:

# On your local machine
# Make sure you're on dev branch
git checkout dev

# Add both workflow files
git add .github/workflows/staging-deploy.yml
git add .github/workflows/staging-deploy-manual.yml

# Commit
git commit -m "Add CD pipelines for staging deployment"

# Push to dev branch
git push origin dev

What happens next:

  1. Auto-deploy workflow triggers immediately (because you pushed to dev)

  2. You'll see "🚀 Deploy to Staging (Auto)" running in GitHub Actions

  3. Manual workflow is now available but won't run until you trigger it

Let's watch the auto-deploy workflow first since it just started!


Step 8: Test the Auto-Deploy Workflow

The workflow is running right now. Let's watch it and approve the deployment.

Monitor the Workflow

  1. Go to your repository on GitHub

  2. Click Actions tab

  3. Click on the running "πŸš€ Deploy to Staging (Auto)" workflow

You'll see five jobs:

βš™οΈ Setup β†’ πŸ”’ Security Scan β†’ πŸ—οΈ Build & Push β†’ ⏸️ Wait for Approval β†’ πŸš€ Deploy to Staging

The first three jobs will complete in about 5-10 minutes.

Approve the Deployment (Issue-Based Method)

When the workflow reaches the "⏸️ Wait for Approval" job, it will create a GitHub Issue.

Here's what happens:

  1. Approval job creates an issue titled "πŸš€ Deploy v20241215-143052-a7f3d2c to Staging?"

  2. Issue includes deployment details:

    • Version tag

    • Docker image path

    • Branch and commit

    • Who triggered it

    • Pre-deployment checklist

To approve the deployment:

  1. Click "Issues" tab in your repository

  2. Find the deployment approval issue (should be at the top)

  3. Comment on the issue with one of these:

    • /approve

    • approved

    • lgtm

  4. Press Comment

The workflow continues immediately:

Within a few seconds of your approval comment:

  1. Approval job completes βœ…

  2. Deploy job starts automatically

  3. SSH to server

  4. Runs deploy-staging.sh with new version

  5. Waits for startup

  6. Performs health checks

  7. Shows success or failure

  8. Issue automatically closes

If you want to deny the deployment:

Comment /deny or denied on the issue - the workflow will cancel.

If you don't respond:

After 6 hours, the workflow auto-cancels and the issue closes.
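
If you're curious how this pause-comment-approve flow is typically wired up, here's a hedged sketch using the trstringer/manual-approval action (an assumption on my part - your workflow from earlier in this article may differ in the exact inputs, and `needs.build.outputs.version` is a hypothetical output name):

```yaml
approval:
  name: ⏸️ Wait for Approval
  runs-on: ubuntu-latest
  needs: [build]
  timeout-minutes: 360   # auto-cancel after 6 hours, as described above
  steps:
    - uses: trstringer/manual-approval@v1
      with:
        secret: ${{ github.token }}
        approvers: ${{ env.APPROVERS }}   # must match your GitHub username(s)
        issue-title: "πŸš€ Deploy ${{ needs.build.outputs.version }} to Staging?"
```

The action opens the issue, watches its comments for approval/denial keywords, and fails the job (cancelling the deploy) on denial or timeout.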

What Success Looks Like

In the workflow summary:

## πŸŽ‰ Deployment Complete
**Version:** `v20241215-143052-a7f3d2c`
**Status:** βœ… Success
**URL:** https://api.yourdomain.com

In your browser:

Visit https://api.yourdomain.com/admin - you should see your Strapi admin panel with the latest code!

What Failure Looks Like

If deployment fails:

## ❌ Deployment Failed
Check deployment logs on server

Troubleshooting steps:

# SSH to your server
ssh deploy@YOUR_STAGING_SERVER_IP

# Check deployment logs
tail -50 /opt/strapi-backend/deployment.log

# Check if containers are running
cd /opt/strapi-backend
docker compose -f docker-compose.stg.yml ps

# Check container logs
docker compose -f docker-compose.stg.yml logs --tail=50 strapi-backend

Common issues:

  • Wrong container names in deploy-staging.sh

  • Server not accessible via SSH

  • Image not found in GHCR (check registry path)

  • Health check timeout (server might be slow)

  • Wrong approvers list (check APPROVERS env var matches your GitHub username)
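
If the health check keeps timing out, it helps to run the same kind of check by hand on the server. Here's a minimal polling helper - a sketch, where the URL, attempt count, and sleep interval are assumptions (Strapi exposes a /_health endpoint that answers once the app is up):

```shell
#!/bin/sh
# wait_healthy: poll a URL until it responds or attempts run out.
wait_healthy() {
  url="$1"
  attempts="${2:-10}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep 3
    i=$((i + 1))
  done
  echo "still unhealthy after $attempts attempts" >&2
  return 1
}

# e.g. on the server: wait_healthy http://localhost:1337/_health 20
```

If this fails by hand too, the problem is the app, not the workflow.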


Step 9: Test the Manual-Dispatch Workflow

Now let's test deploying from a feature branch using the manual workflow.

Create a Test Feature Branch

# On your local machine
git checkout -b feature/test-manual-deploy

# Make a small change
echo "# Manual Deploy Test" >> README.md

# Commit and push
git add README.md
git commit -m "Test manual deployment workflow"
git push -u origin feature/test-manual-deploy

Trigger the Manual Workflow

  1. Go to GitHub β†’ Actions tab

  2. Click "πŸš€ Deploy to Staging (Manual)" (left sidebar)

  3. Click "Run workflow" button (right side)

You'll see a form:

  • Use workflow from: Select feature/test-manual-deploy

  • Version tag: Leave blank (or enter custom like v2.0.0-test)

  • Click "Run workflow"

Watch the Deployment

The workflow runs immediately (no approval needed):

  1. Setup job converts names to lowercase

  2. Security scan checks for vulnerabilities

  3. Build job creates and pushes Docker image

  4. Deploy job runs immediately (no waiting)

  5. Health check verifies deployment

Timeline:

  • Total: 7-10 minutes

  • No approval delay

  • Deploys directly from feature branch

Verify the Deployment

After success:

# On your server
tail -20 /opt/strapi-backend/deployment.log

# Should show deployment of your feature branch version
# Version will be something like: v20241215-150422-b8e4f1a

Check your site: Visit https://api.yourdomain.com/admin - your feature branch code is now running in staging!
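
Those version tags follow a date-time-short-SHA pattern. If you ever want to reproduce one locally (say, to pass as the custom version input), here's a small helper - a sketch assuming the workflow composes tags from the UTC timestamp plus `git rev-parse --short HEAD`:

```shell
#!/bin/sh
# Build a tag like v20241215-150422-b8e4f1a (assumed scheme: UTC time + short SHA).
make_version_tag() {
  sha="${1:-$(git rev-parse --short HEAD)}"
  printf 'v%s-%s\n' "$(date -u +%Y%m%d-%H%M%S)" "$sha"
}

# make_version_tag            # uses the current commit
# make_version_tag b8e4f1a    # or an explicit short SHA
```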


When to Use Which Workflow

Now that you have both workflows working, here's how to choose:

Use Auto-Deploy (staging-deploy.yml) When:

βœ… Working on a team:

  • Multiple developers need approval before staging updates

  • Want to prevent accidental deployments

  • Need audit trail of who approved what

βœ… Following standard DevOps practices:

  • Merge to dev β†’ Review β†’ Approve β†’ Deploy

  • Clear separation between "code merged" and "code deployed"

  • Safer for production-like environments

βœ… Learning deployment workflows:

  • Approval gates teach proper deployment discipline

  • Forces you to review changes before they go live

  • Good practice for when you move to production

Typical workflow:

  1. Develop in feature branch

  2. Create PR to dev

  3. CI validates (Part 5a)

  4. Merge after review

  5. Auto-deploy triggers

  6. Approve deployment

  7. Staging updates

Use Manual-Dispatch (staging-deploy-manual.yml) When:

βœ… Solo developer or tiny team:

  • Trust each other completely

  • Want speed over process

  • Don't need approval gates

βœ… Testing feature branches:

  • Want to test feature/new-ui in staging before merging

  • Need to demo unfinished work to stakeholders

  • Testing multiple features simultaneously

βœ… Emergency hotfixes:

  • Bug in production needs immediate testing

  • Can't wait for PR approval process

  • Need to deploy from hotfix/critical-bug branch ASAP

βœ… Flexible deployment control:

  • Sometimes deploy dev, sometimes feature branches

  • Want to choose exactly what gets deployed

  • Need custom version tags for releases

Typical workflow:

  1. Push feature code to any branch

  2. Go to Actions β†’ Run workflow

  3. Select branch

  4. Click run

  5. Staging updates immediately

Can You Use Both?

Absolutely! Many teams keep both:

  • Auto-deploy for regular dev β†’ staging updates

  • Manual for testing features and emergencies

They don't conflict - just different triggers deploying to the same environment.
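
The difference really is just the trigger block. A side-by-side sketch of how the two workflows start (matching what this article set up earlier):

```yaml
# staging-deploy.yml: fires on every push/merge to dev
on:
  push:
    branches: [dev]

# staging-deploy-manual.yml: fires only when you click "Run workflow"
on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Version tag'
        required: false
        type: string
```

Everything downstream (build, push, deploy) can be nearly identical.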


Extending for Multiple Environments

Right now both workflows deploy to your staging environment. Here's how to extend them for production, UAT, or other environments:

Multi-Environment Strategy:

Option 1: Separate Workflows Per Environment

Create multiple workflow files:

  • staging-deploy.yml β†’ Deploys to staging

  • uat-deploy.yml β†’ Deploys to UAT environment

  • production-deploy.yml β†’ Deploys to production

Each workflow:

  • Uses different GitHub Environment (staging, uat, production)

  • Uses different secrets (STAGING_*, UAT_*, PRODUCTION_*)

  • Different approval requirements (production might need 2 approvers)

  • Different triggers (production might only deploy from main branch)

Option 2: Environment Selector in Manual Workflow

Extend the manual workflow to let you choose the environment:

on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment to deploy to'
        required: true
        type: choice
        options:
          - staging
          - uat
          - production
      version:
        description: 'Version tag'
        required: false
        type: string

Then in the workflow:

deploy:
  name: πŸš€ Deploy to ${{ inputs.environment }}
  environment: ${{ inputs.environment }}
  steps:
    - name: πŸš€ Deploy
      run: |
        # Context lookups are case-insensitive, so an input of 'staging'
        # resolves the STAGING_USER and STAGING_HOST secrets
        ssh -i ~/.ssh/deploy_key \
          "${{ secrets[format('{0}_USER', inputs.environment)] }}@${{ secrets[format('{0}_HOST', inputs.environment)] }}"

This gives you:

  • One workflow file

  • Dropdown to select staging/uat/production

  • Different secrets per environment

  • Different approval rules per environment

When to Add Multiple Environments:

Start simple (what we have now):

  • One staging environment

  • Two deployment workflows

  • Learn the patterns first

Add environments when:

  • You have real users (need production)

  • Client wants UAT for acceptance testing

  • Compliance requires separate environments

Don't add environments until you need them. Every environment adds complexity, another server to maintain, another set of secrets, another approval process.


Rollback Procedures

Deployments sometimes fail. Here's how to handle it:

Automatic Rollback (Built Into deploy-staging.sh)

The deployment script automatically rolls back if:

  • Image pull fails

  • docker-compose up fails

  • Health check fails after deployment

What happens:

  1. Deployment script detects failure

  2. Restores docker-compose backup

  3. Restores database from pre-deployment backup

  4. Restarts containers with previous version

  5. Logs the rollback

You don't need to do anything - it happens automatically.

Manual Rollback (Using rollback-staging.sh)

The rollback-staging.sh script (created in Step 3.5) provides several rollback options for when you need to manually revert changes:

Option 1: Check Current Status First

# SSH to server
ssh deploy@YOUR_STAGING_SERVER_IP
cd /opt/strapi-backend

# Check what's deployed and available rollback versions
./deployment-scripts/rollback-staging.sh status

Expected output:

==========================================
CURRENT STATUS
==========================================

Current Version:
  v20241208-143000-def5678

Container Status:
NAME              IMAGE                                    STATUS
strapi-backend    ghcr.io/you/your-repo:v20241208...      Up
strapiDB          postgres:16-alpine                       Up

Recent Deployment History:
2024-12-08 12:00:15 | v20241208-120000-abc1234 | SUCCESS
2024-12-08 14:30:22 | v20241208-143000-def5678 | SUCCESS

Available Rollback Versions:
  v20241208-120000-abc1234
  v20241208-143000-def5678

Available Database Backups:
-rw-r--r-- 1 deploy deploy 4.2M Dec  8 14:30 predeployment_v20241208_143052.sql.gz
-rw-r--r-- 1 deploy deploy 4.1M Dec  8 12:00 predeployment_v20241208_120015.sql.gz

Option 2: Rollback Application Only (Recommended)

# Rollback to previous version automatically
./deployment-scripts/rollback-staging.sh app

# Or rollback to specific version
./deployment-scripts/rollback-staging.sh app v20241208-120000-abc1234

This rolls back just the application code, keeping your current database data intact.
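
Under the hood, "previous version automatically" implies parsing deployment-history.txt. Here's a hypothetical helper showing how that lookup could work - the function name and internals are mine, not the actual script's, but the "date | version | STATUS" line format matches the status output shown above:

```shell
#!/bin/sh
# Print the version of the second-most-recent SUCCESS entry in a
# "date | version | STATUS" history file.
previous_version() {
  history_file="${1:-deployment-history.txt}"
  grep 'SUCCESS$' "$history_file" | tail -n 2 | head -n 1 \
    | awk -F'|' '{ gsub(/ /, "", $2); print $2 }'
}
```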

Option 3: Rollback Database Only

# Restore database from specific backup
./deployment-scripts/rollback-staging.sh database predeployment_v20241208_143052.sql.gz

Use this when the app is fine but database migration went wrong.

Option 4: Full Rollback (App + Database)

# Rollback everything to last known good state
./deployment-scripts/rollback-staging.sh full

This reverts both application and database to the last successful deployment.

Option 5: Re-run Previous Workflow (Via GitHub)

  1. Go to GitHub β†’ Actions tab

  2. Find the last successful deployment

  3. Click "Re-run all jobs"

  4. This redeploys the previous version

Option 6: Deploy Specific Version (Via GitHub)

  1. Go to GitHub β†’ Actions tab

  2. Select "πŸš€ Deploy to Staging (Manual)"

  3. Click "Run workflow"

  4. Select the Tags tab (if you've created Git tags for releases)

    • Choose the specific tag you want to deploy (e.g., v2.0.0-rc1)

    • Or stay on Branches tab to build from a branch

  5. Enter the version in the input field

  6. Click "Run workflow"

πŸ’‘ Best Practice: Create Git tags for your releases:

# Create a tag for your release
git tag -a v2.0.0-rc1 -m "Release v2.0.0-rc1"
git push origin v2.0.0-rc1

# Now you can select this tag in the GitHub workflow UI

Rollback Best Practices:

  • Always check status first - ./deployment-scripts/rollback-staging.sh status shows current state

  • Test rollback during setup - Not during an emergency

  • Use app-only rollback when possible - Preserves database changes

  • Create safety backups - Script does this automatically

  • Keep deployment history - deployment-history.txt tracks all deployments

  • Don't delete old Docker images in GHCR - You need them for rollbacks

  • Document your rollback steps - For team members who need to help


What We've Accomplished

Let's recap what your complete CI/CD pipeline now includes:

From Part 5a (CI):

  • βœ… Automated code quality checks (ESLint)

  • βœ… Security vulnerability scanning

  • βœ… Docker build verification

  • βœ… Green checkmark on every commit

From Part 5b (CD):

  • βœ… Automated Docker image building and pushing to GHCR

  • βœ… Two deployment workflow options:

    • Auto-deploy with approval gates (safe for teams)

    • Manual-dispatch from any branch (flexible for testing)

  • βœ… SSH-based deployment to DigitalOcean

  • βœ… Automated health checks after deployment

  • βœ… Automatic rollback on failure

  • βœ… Pre-deployment database backups

  • βœ… Complete deployment logging

The Complete Flow:

For Regular Development:

Push to feature β†’ CI validates (2-5 min) β†’ Green checkmark
↓
Merge to dev β†’ Security scan β†’ Build image β†’ Push to GHCR
↓
Approve deployment β†’ SSH to server β†’ Run deploy script
↓
Health check β†’ Success βœ… or Auto-rollback ❌

For Testing Features:

Push to feature branch β†’ CI validates β†’ Green checkmark
↓
Click "Run workflow" β†’ Select branch β†’ Deploy immediately
↓
Health check β†’ Success βœ… or Auto-rollback ❌

And you're still at $6/month for your DigitalOcean infrastructure. The CI/CD pipeline uses GitHub's free tier.


Series Conclusion: What You've Built

Over this 5-part series, you've built a complete deployment environment from scratch:

Part 1: Containerization

  • Multi-stage Docker builds for Strapi v5

  • Optimized images (500-700MB vs 1.5GB+)

  • GitHub Container Registry integration

  • Production-ready containerization

Part 2: Infrastructure

  • DigitalOcean droplet deployment

  • Docker Compose orchestration

  • PostgreSQL database setup

  • Proper user permissions and security

Part 3: Web Server

  • Nginx reverse proxy configuration

  • Free SSL certificates with Let's Encrypt

  • Custom domain setup

  • Security headers and logging

Part 4: Data Protection

  • Automated daily backups to S3

  • Smart lifecycle management (120-day retention)

  • Tested restore procedures

  • Cost: ~$0.001/month

Part 5: Automation

  • Complete CI/CD pipeline with GitHub Actions

  • Automated validation (security, quality, builds)

  • Two deployment workflow options

  • Health checks and rollback capabilities

Total Monthly Cost:

  • DigitalOcean: $6.00

  • S3 Backups: $0.001

  • GitHub Actions: $0 (free tier)

  • Total: $6.001/month

What You Learned:

  • Docker containerization and multi-stage builds

  • Cloud infrastructure management (DigitalOcean)

  • Reverse proxies and SSL certificates

  • Backup strategies and disaster recovery

  • CI/CD pipeline design and implementation

  • GitHub Actions and workflow automation

  • SSH-based deployment

  • Health checks and monitoring

  • Rollback procedures

Skills That Transfer:

  • These patterns work on AWS, GCP, Azure

  • GitHub Actions skills apply to any repository

  • Docker knowledge works anywhere

  • CI/CD concepts are universal

  • Infrastructure-as-code thinking


When to Upgrade

You've built a solid staging environment. Here's when to level up:

Infrastructure Upgrades:

From $6 DigitalOcean β†’ Managed Services:

When you see:

  • Database consistently over 50MB

  • Regular "out of memory" errors

  • More than 100 concurrent users

  • Deployments taking >10 minutes

  • You're making real revenue

Upgrade to:

  • DigitalOcean Managed Database ($15/month)

  • Or AWS RDS (~$15-30/month)

  • Separates database from application

  • Automated backups and monitoring

  • Better performance and reliability

From Single Server β†’ Load Balanced:

When you see:

  • Traffic spikes crashing your server

  • Need 99.9% uptime guarantees

  • Multiple geographic locations

  • Compliance requirements

Upgrade to:

  • Multiple application servers

  • Load balancer (DigitalOcean or AWS ALB)

  • Auto-scaling groups

  • Multi-region deployment

CI/CD Upgrades:

Add Automated Tests:

When you have:

  • Unit tests written

  • Integration tests ready

  • End-to-end test suite

Extend workflows with:

test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 20
    - run: npm ci
    - run: npm test
    - run: npm run test:e2e

Add Performance Monitoring:

When you need:

  • Response time tracking

  • Error rate monitoring

  • User experience metrics

Integrate:

  • Sentry for error tracking

  • DataDog or New Relic for APM

  • Custom metrics to CloudWatch

Multi-Environment Pipeline:

When you have:

  • Staging working perfectly

  • Ready for production

  • Need UAT environment

Create:

  • production-deploy.yml for prod

  • uat-deploy.yml for UAT

  • Different approval requirements

  • Environment-specific configurations


Quick Reference

Workflow Triggers:

Auto-Deploy:

# Triggers automatically when you:
git push origin dev  # Or merge PR to dev

Manual-Dispatch:

# Trigger via GitHub UI:
Actions β†’ "πŸš€ Deploy to Staging (Manual)" β†’ Run workflow

Common Commands:

On Server:

# View deployment logs
tail -f /opt/strapi-backend/deployment.log

# Check running containers
docker compose -f docker-compose.stg.yml ps

# View application logs
docker compose -f docker-compose.stg.yml logs --tail=50 strapi-backend

# Manual deployment
./deployment-scripts/deploy-staging.sh v20241215-143052-a7f3d2c

# Check GHCR images
docker images | grep ghcr.io

Workflow Management:

# Re-run workflow (GitHub UI)
Actions β†’ Select workflow run β†’ Re-run all jobs

# Cancel workflow (GitHub UI)
Actions β†’ Select workflow run β†’ Cancel workflow

# View workflow logs (GitHub UI)
Actions β†’ Select workflow run β†’ Click job β†’ Expand steps


Final File Structure After Complete Series:

your-strapi-project/
β”œβ”€β”€ .github/
β”‚   └── workflows/
β”‚       β”œβ”€β”€ ci.yml                    # Part 5a - CI pipeline
β”‚       β”œβ”€β”€ staging-deploy.yml        # Part 5b - Auto-deploy
β”‚       └── staging-deploy-manual.yml # Part 5b - Manual deploy
β”œβ”€β”€ src/                              # Your Strapi code
β”œβ”€β”€ config/
β”œβ”€β”€ public/
β”œβ”€β”€ Dockerfile.prod                   # Part 1
β”œβ”€β”€ docker-compose.stg.yml            # Part 2
β”œβ”€β”€ .env.stg                          # Part 2
β”œβ”€β”€ package.json
β”œβ”€β”€ package-lock.json
β”œβ”€β”€ .dockerignore
β”œβ”€β”€ .gitignore
└── README.md

On Server (/opt/strapi-backend/):

/opt/strapi-backend/
β”œβ”€β”€ deployment-scripts/               # Part 5b
β”‚   β”œβ”€β”€ deploy-staging.sh             # Part 5b - Deployment script
β”‚   └── rollback-staging.sh           # Part 5b - Rollback script
β”œβ”€β”€ deployment.log                    # Part 5b - Deployment history
β”œβ”€β”€ deployment-history.txt            # Part 5b - Deployment tracking
β”œβ”€β”€ docker-compose.stg.yml            # Part 2
β”œβ”€β”€ .env.stg                          # Part 2
β”œβ”€β”€ backup-script.sh                  # Part 4
β”œβ”€β”€ restore-script.sh                 # Part 4
β”œβ”€β”€ check-backups.sh                  # Part 4
└── backups/
    β”œβ”€β”€ backup.log                    # Part 4
    β”œβ”€β”€ strapi_backup_*.sql.gz        # Part 4
    └── pre_deploy_*.sql.gz           # Part 5b

Congratulations! πŸŽ‰

You've built a complete, professional CI/CD pipeline for your Strapi v5 backend.
So yeah, that's the core setup. We've built a complete deployment pipeline for $6/month (plus a few cents for S3 backups).

If there's enough interest, I might write a follow-up about the real-world performance of this setup, actual costs after running it for months, uptime stats, what breaks when you push this budget setup to its limits, and when you know it's time to upgrade.

For now though, you've got everything you need to deploy, iterate, and scale. The foundation is solid.

What started as a $6/month staging experiment is now a fully automated deployment system with:

  • Continuous integration validating every commit

  • Automated Docker image builds

  • Two deployment workflow options

  • Health checks and automatic rollback

  • Complete logging and monitoring

More importantly, you understand:

  • How Docker containerization works

  • How CI/CD pipelines are built

  • How to deploy with GitHub Actions

  • How to maintain and troubleshoot deployments

  • When to upgrade and when not to

This knowledge transfers to any platform - AWS, GCP, Azure, or whatever comes next.

Thanks for following along through all five parts. Happy deploying! πŸš€


Questions about the CI/CD setup or running into deployment issues? Drop a comment and I'll help troubleshoot.
