
Containerizing Strapi v5 for Production: The Right Way

Part 1 of "Building a Complete Deployment Environment for Strapi v5: A Practical Series"


Series Overview: This is Part 1 of a 5-part series where we build a complete $6/month staging environment for Strapi v5. We'll cover containerization, deployment, web server setup, automated backups, and CI/CD pipelines. If you haven't read the introduction yet, start here to see what we're building and why.

New here? Each article in this series works as a standalone guide. If you're only interested in containerizing Strapi, you can follow this article on its own without reading the rest of the series.

What's in this series:

  • Part 0: Introduction - Why This Setup? (Read this first if you're new)

  • Part 1: Containerizing Strapi v5 (You are here)

  • Part 2: Deploying to DigitalOcean

  • Part 3: Production Web Server Setup

  • Part 4: Automated Database Backups

  • Part 5: CI/CD Pipeline with GitHub Actions


Alright, let's get started. Before we can deploy anything to DigitalOcean, we need to containerize our Strapi v5 app. And when I say containerize, I mean doing it in a way that'll actually work when you deploy - not just something that runs on your laptop.

In this article, we'll build a Docker image that's actually optimized for deployment. We'll use multi-stage builds to keep things lean, handle all the dependencies Strapi needs, and push it to GitHub Container Registry so it's ready to go.

No joke, once you nail this part, the deployment steps become way easier.


Why Containerize Strapi Anyway?

You might be wondering why we're bothering with Docker at all. Can't we just throw the code on a server and run npm start?

Sure, you could do that. But here's what breaks when you go that route:

  • Environment differences: "Works on my machine" becomes your life motto

  • Dependency hell: Node versions, system libraries, PostgreSQL drivers... something always breaks

  • No rollbacks: If a deployment fails, you're manually reverting files and hoping you didn't miss anything

  • Scaling problems: Adding more servers means repeating the entire setup process

Docker solves all of this. You build the image once, test it, and deploy the exact same container everywhere. If something breaks, you roll back to the previous image. Simple.

Plus, when you eventually move to AWS or Kubernetes, you'll already have containers ready to go.
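To make the rollback point concrete, here's a rough sketch of what recovery looks like once your images are tagged (the container and image names are placeholders; later in this series we'll do the same thing through Docker Compose):

```shell
# Something broke after deploying v1.1.0? Stop it and start the last good tag.
docker stop strapi && docker rm strapi
docker run -d --name strapi -p 1337:1337 \
  ghcr.io/YOUR_GITHUB_USERNAME/your-repo-name:v1.0.0
```

Because the old image is still sitting in the registry, "reverting" is just running a different tag - no file surgery on the server.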


What We're Building

We're creating a multi-stage Docker image that:

  • Uses Node.js 20 or 22 (both LTS) on Alpine Linux (smaller, faster)

  • Builds Strapi in one stage, runs it in another (keeps the final image lean)

  • Includes all the native dependencies Strapi needs (sharp, better-sqlite3, etc.)

  • Runs as a non-root user for security

  • Weighs in at a reasonable size (typically 600-900MB vs 1.5GB+ without optimization)

The final image will be stored in GitHub Container Registry (GHCR), which is free for public repositories and dirt cheap for private ones.


Prerequisites

Before we start, make sure you've got:

  • A working Strapi v5 project locally

  • Docker Desktop installed (with buildx support for Mac/Windows users)

  • A GitHub account (for Container Registry)

  • Basic terminal skills

If you don't have a Strapi project yet, spin one up:

npx create-strapi-app@latest my-project --quickstart
cd my-project

Choosing Your Node.js Version

As of November 2024, you've got two solid LTS options:

Node.js 20 (Maintenance LTS)

  • Stable and battle-tested

  • What we're using in this series: node:20.17.0-alpine3.20

  • Supported until April 2026

Node.js 22 (Active LTS)

  • Newer features and performance improvements

  • Available as: node:22-alpine or node:22.11.0-alpine3.21

  • Supported until April 2027

For this series, we're sticking with Node.js 20 since it's what most Strapi projects are using. But feel free to use Node.js 22 if you want the latest stuff; just swap out the version in the Dockerfile below.


The Dockerfile: Multi-Stage Build Explained

Create a file called Dockerfile in your project root. You can name it based on your environment - like Dockerfile.staging for staging, Dockerfile.prod for production, or just Dockerfile if you're using the same config everywhere. Pick whatever naming convention works for your setup.

Here's the complete Dockerfile:

# Creating multi-stage build for production
FROM node:20.17.0-alpine3.20 AS build
RUN apk update && apk add --no-cache build-base gcc autoconf automake zlib-dev libpng-dev vips-dev git > /dev/null 2>&1
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}

WORKDIR /opt/
# Copy package files first for better layer caching
COPY package.json package-lock.json ./
RUN npm install -g node-gyp
RUN npm config set fetch-retry-maxtimeout 600000 -g && npm ci --omit=dev
ENV PATH=/opt/node_modules/.bin:$PATH

# Copy application code after dependencies are installed
WORKDIR /opt/app
COPY . .
RUN npm run build

# Creating final production image
FROM node:20.17.0-alpine3.20
RUN apk add --no-cache vips-dev
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}

WORKDIR /opt/
COPY --from=build /opt/node_modules ./node_modules
WORKDIR /opt/app
COPY --from=build /opt/app ./
ENV PATH=/opt/node_modules/.bin:$PATH

RUN chown -R node:node /opt/app
USER node
EXPOSE 1337
CMD ["npm", "run", "start"]

Breaking Down the Dockerfile

Let's walk through what's actually happening here.

Stage 1: The Build Stage

FROM node:20.17.0-alpine3.20 AS build

We start with Node.js 20.17.0 on Alpine Linux. Alpine is a stripped-down Linux distribution that's tiny (about 5MB base). This keeps our images small and reduces the attack surface.

The AS build part names this stage so we can reference it later.

RUN apk update && apk add --no-cache build-base gcc autoconf automake zlib-dev libpng-dev vips-dev git > /dev/null 2>&1

Here's where we install all the build tools Strapi needs. Sharp (for image processing) and other native modules need these to compile. The > /dev/null 2>&1 part silences the output so you don't get a wall of text during builds.

Yeah, I know this looks like a lot of dependencies. Strapi needs them for image processing and other native modules. Trust me, you'll hit cryptic errors without these.

COPY package.json package-lock.json ./
RUN npm install -g node-gyp
RUN npm config set fetch-retry-maxtimeout 600000 -g && npm ci --omit=dev

This is where Docker's layer caching shines. By copying package files first, Docker can reuse this layer if your dependencies haven't changed. The npm ci command does a clean install using your lock file - more reliable than npm install for production.

The timeout config helps with flaky network connections. Nothing worse than a build failing at 90% because npm couldn't download a package.

WORKDIR /opt/app
COPY . .
RUN npm run build

Now we copy the actual application code and build Strapi. This creates the admin panel and prepares everything for production.

Stage 2: The Production Stage

FROM node:20.17.0-alpine3.20
RUN apk add --no-cache vips-dev

Fresh start with a new Alpine image. This time we only install vips-dev - the runtime dependency for Sharp. All those build tools? Left behind in the first stage. That's how we keep the final image lean.

COPY --from=build /opt/node_modules ./node_modules
COPY --from=build /opt/app ./

We copy the installed dependencies and the built app directory from the build stage. The compilers, headers, and build tools stay behind in stage one - only what we need to run comes along.

RUN chown -R node:node /opt/app
USER node

Security best practice: never run containers as root. The official Node.js images ship with a node user built in, so we use that.

EXPOSE 1337
CMD ["npm", "run", "start"]

Expose port 1337 (Strapi's default) and set the startup command.


Create a .dockerignore File

Before building, create a .dockerignore file in your project root. This prevents Docker from copying unnecessary files into the build context:

node_modules
.git
.cache
.tmp
build
dist
*.log
.env*
.DS_Store

This speeds up builds significantly and keeps sensitive files out of your image.


Building and Verifying Your Image

Since docker buildx works on all platforms (Mac, Windows, Linux), we'll use one consistent approach. But before pushing to GHCR, let's verify the image works with PostgreSQL - the same database we'll use on DigitalOcean.

Step 1: Create a Local Testing Setup

Create a docker-compose.dev.yml file in your project root. This sets up both Strapi and PostgreSQL for local testing:

version: '3'

services:
  strapi:
    container_name: strapi-dev
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    env_file: .env
    environment:
      DATABASE_CLIENT: ${DATABASE_CLIENT}
      DATABASE_HOST: strapiDB
      DATABASE_PORT: ${DATABASE_PORT}
      DATABASE_NAME: ${DATABASE_NAME}
      DATABASE_USERNAME: ${DATABASE_USERNAME}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      JWT_SECRET: ${JWT_SECRET}
      ADMIN_JWT_SECRET: ${ADMIN_JWT_SECRET}
      APP_KEYS: ${APP_KEYS}
      NODE_ENV: development
    ports:
      - "1337:1337"
    networks:
      - strapi-network
    depends_on:
      - strapiDB

  strapiDB:
    container_name: strapiDB-dev
    platform: linux/amd64
    image: postgres:16-alpine
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_USER: ${DATABASE_USERNAME}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      POSTGRES_DB: ${DATABASE_NAME}
    volumes:
      - strapi-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - strapi-network

volumes:
  strapi-data:

networks:
  strapi-network:
    name: Strapi-Network
    driver: bridge

Step 2: Create Your .env File

Create a .env file in your project root (this should already exist if you've been developing locally, but here's what you need for PostgreSQL):

# Database
DATABASE_CLIENT=postgres
DATABASE_HOST=strapiDB
DATABASE_PORT=5432
DATABASE_NAME=strapi
DATABASE_USERNAME=strapi
DATABASE_PASSWORD=strapi

# Strapi
HOST=0.0.0.0
PORT=1337
APP_KEYS=your-app-key-here
API_TOKEN_SALT=your-api-token-salt
ADMIN_JWT_SECRET=your-admin-jwt-secret
JWT_SECRET=your-jwt-secret

Don't commit your .env file to Git! Make sure it's in your .gitignore.
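create-strapi-app generates these secrets for you, but if you ever need fresh values, one common approach is openssl (any sufficiently random strings work; note that APP_KEYS is a comma-separated list):

```shell
# Generate random base64 secrets for the .env file
echo "APP_KEYS=$(openssl rand -base64 16),$(openssl rand -base64 16)"
echo "API_TOKEN_SALT=$(openssl rand -base64 16)"
echo "ADMIN_JWT_SECRET=$(openssl rand -base64 16)"
echo "JWT_SECRET=$(openssl rand -base64 16)"
```

Paste the output into .env, and remember that rotating these later will invalidate existing sessions and tokens.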

Step 3: Build and Test Locally

Now let's verify everything works with PostgreSQL:

# Build and start both services
docker-compose -f docker-compose.dev.yml up --build

# Or run in detached mode (background)
docker-compose -f docker-compose.dev.yml up -d --build

Wait about 30-60 seconds for Strapi to initialize the database, then open http://localhost:1337/admin in your browser. You should see the Strapi admin setup page.

What's happening here:

  • Docker builds your Strapi image from the Dockerfile

  • Spins up PostgreSQL in a separate container

  • Connects Strapi to PostgreSQL

  • Exactly like it'll work on DigitalOcean
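One caveat about startup: depends_on only waits for the Postgres container to start, not for the database inside it to accept connections - that's part of why the first boot takes a while. If you want Compose to hold Strapi back until Postgres is actually ready, a healthcheck is the usual pattern (a sketch to merge into the file above, not a drop-in replacement):

```yaml
# Fragment: add a healthcheck to strapiDB, then gate strapi on it
  strapiDB:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DATABASE_USERNAME} -d ${DATABASE_NAME}"]
      interval: 5s
      timeout: 5s
      retries: 10

  strapi:
    depends_on:
      strapiDB:
        condition: service_healthy
```

pg_isready ships inside the postgres image, so no extra tooling is needed.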

To stop everything:

docker-compose -f docker-compose.dev.yml down

# To remove volumes too (fresh start)
docker-compose -f docker-compose.dev.yml down -v

Step 4: Verify It's Working

Check the logs to make sure everything started correctly:

# View logs from both containers
docker-compose -f docker-compose.dev.yml logs

# Or follow logs in real-time
docker-compose -f docker-compose.dev.yml logs -f

# Check just Strapi logs
docker-compose -f docker-compose.dev.yml logs strapi

If you see something like Server started on port 1337 and no errors, you're golden.
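You can also probe the running container from outside. Strapi exposes a lightweight health endpoint at /_health, which should answer with an empty 204 when the server is up (if your version doesn't have it, hitting the admin URL works as a fallback):

```shell
# Prints only the HTTP status code; expect 204 when Strapi is healthy
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:1337/_health
```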


Pushing to GitHub Container Registry

Now let's get your image into GHCR so we can pull it on our DigitalOcean droplet later.

Step 1: Create a GitHub Personal Access Token

  1. Go to GitHub → Settings → Developer settings → Personal access tokens → Tokens (classic)

  2. Click "Generate new token (classic)"

  3. Give it a name like "Strapi Docker Registry"

  4. Select scope: write:packages

  5. Generate and copy the token (you won't see it again)

Step 2: Login to GHCR

For Mac/Linux:

# Set your GitHub token as an environment variable
export GITHUB_TOKEN=your_github_token_here

# Login to GHCR
echo $GITHUB_TOKEN | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin

For Windows (PowerShell):

# Set your GitHub token
$env:GITHUB_TOKEN="your_github_token_here"

# Login to GHCR
echo $env:GITHUB_TOKEN | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin

You should see: Login Succeeded

Step 3: Build and Push to GHCR

Now that you've verified it works with PostgreSQL locally, let's push to GitHub Container Registry.

Build specifically for linux/amd64 (DigitalOcean's architecture) and push:

# Build for DigitalOcean's architecture and push
docker buildx build \
  --platform linux/amd64 \
  -f Dockerfile \
  -t ghcr.io/YOUR_GITHUB_USERNAME/your-repo-name:v1.0.0 \
  --push \
  .

Replace:

  • Dockerfile with your Dockerfile name if different

  • YOUR_GITHUB_USERNAME with your actual GitHub username

  • your-repo-name with your project name

  • v1.0.0 with your version number

This might take a few minutes depending on your upload speed.

Tagging strategy: Use semantic versioning for your images:

  • v1.0.0 - Production releases

  • v1.0.0-beta1 - Beta versions

  • v1.0.0-rc1 - Release candidates

  • latest - Always points to the newest stable version

# Also tag and push as latest
docker buildx build \
  --platform linux/amd64 \
  -f Dockerfile \
  -t ghcr.io/YOUR_GITHUB_USERNAME/your-repo-name:latest \
  --push \
  .

Why separate build commands? The local docker-compose builds for your machine's architecture (which might be ARM64 on Mac M1/M2/M3). The buildx command specifically builds for linux/amd64 which DigitalOcean needs.
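One small refinement: buildx accepts multiple -t flags, so if you prefer, you can push the version tag and latest in a single build instead of running two (same placeholders as above):

```shell
# Push both tags from one build
docker buildx build \
  --platform linux/amd64 \
  -f Dockerfile \
  -t ghcr.io/YOUR_GITHUB_USERNAME/your-repo-name:v1.0.0 \
  -t ghcr.io/YOUR_GITHUB_USERNAME/your-repo-name:latest \
  --push \
  .
```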

Step 4: Verify the Push

Go to your GitHub repository → Packages. You should see your image listed there.

Making the package public:

By default, packages are private. To make it public:

  1. Click on the package

  2. Package settings → Change visibility → Public


Testing Your Build

Before deploying, verify your image size and that it pulled correctly:

# Pull the image you just pushed
docker pull ghcr.io/YOUR_GITHUB_USERNAME/your-repo-name:v1.0.0

# Check the image size
docker images | grep your-repo-name

You should see something in the 600-900MB range for a basic Strapi app. The exact size depends on how many plugins and dependencies you have; more plugins mean a bigger image. If you're seeing over 1.5GB, something's probably off with the multi-stage build (check that you're copying from the build stage correctly).
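If the image does come out oversized, docker history lists the size of each layer, which usually points straight at the culprit (most often a COPY that dragged in files your .dockerignore should have excluded):

```shell
# Show each layer with its size; scan for unexpectedly large COPY/RUN steps
docker history ghcr.io/YOUR_GITHUB_USERNAME/your-repo-name:v1.0.0
```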


What's Next?

And there you have it: a production-ready Strapi container sitting in GitHub Container Registry, tested and ready to deploy. The containerization work is behind us now.

In Part 2, we'll take this image and deploy it to a DigitalOcean droplet using Docker Compose. We'll set up PostgreSQL, configure networking, and get your Strapi app accessible via IP address.

The container work we did today makes deployment way easier because we're deploying the exact same image we tested locally. No surprises, no "works on my machine" problems.


Quick Reference

Build and test locally:

docker-compose -f docker-compose.dev.yml up --build

Build and push for DigitalOcean:

docker buildx build \
  --platform linux/amd64 \
  -f Dockerfile \
  -t ghcr.io/YOUR_GITHUB_USERNAME/your-repo-name:v1.0.0 \
  --push \
  .

Login to GHCR:

echo $GITHUB_TOKEN | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin

Stop local environment:

docker-compose -f docker-compose.dev.yml down

Hit any issues with the containerization? Drop a comment and I'll help you troubleshoot. In the next article, we're taking this container and deploying it to DigitalOcean. See you there!
