A cute cartoon image of a python, a whale, and an elephant as friends, representing Python, Docker, and PostgreSQL respectively.

Adding PostgreSQL to Django with Docker Compose

Tagged as:

Django Docker PostgreSQL

In my previous post, I outlined how we could use Docker to create an isolated local development environment for our Django apps. We used Docker and Poetry to install our Python dependencies into a container so that we never run into incompatibility issues with other projects we might be working on and other developers can quickly and easily get started on our project.

This post assumes you already have a Docker container for your Django app. If you don’t, I’d highly recommend checking out the previous post.

Docker Compose

When we run Django and PostgreSQL on our laptop, we can run PostgreSQL in the background on the same machine Django is running on. If we think of our Docker container as the equivalent of our laptop, then we might want to run PostgreSQL on our Docker container alongside our Django application.

Unfortunately, Docker doesn’t have a good way of doing this. We could try installing PostgreSQL and then running it in the same command that we use to start Django, but there is a better way! Enter Docker Compose.

Docker Compose is a way of running multiple containers alongside each other. These Docker containers can easily network with each other, making Docker Compose a great way to simulate a production environment locally.

To get started, we create a YAML file called docker-compose.yaml.

version: '3'
services:
  ...

The services section defines each of the Docker containers we want to run. Let’s add our Django app first.

version: '3'
services:
  web:
    build: ./
    volumes:
      - ./:/app
    ports:
      - 8000:8000

If you look closely, you can see some clear similarities between this and our docker run command from the last post.
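
To make the correspondence concrete, here’s a rough sketch that maps the Compose keys above onto their docker run flag equivalents (illustrative only; the myapp image tag is assumed from the previous post, and Compose handles far more than this mapping suggests):

```python
def compose_to_docker_run(service, image="myapp"):
    """Translate a Compose service's volumes/ports keys into
    the equivalent docker run flags."""
    parts = ["docker", "run"]
    for volume in service.get("volumes", []):
        parts += ["-v", volume]      # volumes: -> -v host:container
    for port in service.get("ports", []):
        parts += ["-p", str(port)]   # ports: -> -p host:container
    parts.append(image)
    return " ".join(parts)

web = {"volumes": ["./:/app"], "ports": ["8000:8000"]}
print(compose_to_docker_run(web))
# docker run -v ./:/app -p 8000:8000 myapp
```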

Adding PostgreSQL to our Docker Compose file

We can also easily add a database using the postgres:16.2 Docker image and configure it with the environment section:

version: '3'
services:
  web:
    build: ./
    volumes:
      - ./:/app
    ports:
      - 8000:8000
  database:
    image: 'postgres:16.2'
    environment:
      - POSTGRES_DB=appdb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password

An important thing to know, especially when running PostgreSQL, is that Docker containers are ephemeral and their storage is cleared between runs. This is not how we want our database to behave. We can use Docker-managed volumes to ensure that the database data is persistent between runs. Our final docker-compose.yaml file should look like this:

version: '3'
services:
  web:
    build: ./
    volumes:
      - ./:/app
    ports:
      - 8000:8000
  database:
    image: 'postgres:16.2'
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=appdb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password

volumes:
  pgdata:

Before we run this, let’s connect our Django app to our database.

Connecting to Postgres from Django

The first thing we need to do is tell Django to connect to the database using PostgreSQL. We can do this in the myapp/settings.py file.

First, we need to import os into this file, because we want to access environment variables. If we search for “DATABASES” in this file, we’ll need to modify the variable so that it looks like this:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get("DATABASE_NAME", "appdb"),
        'USER': os.environ.get("DATABASE_USER", "postgres"),
        'PASSWORD': os.environ.get("DATABASE_PASSWORD", "password"),
        'HOST': os.environ.get("DATABASE_HOST", "database"),
        'PORT': os.environ.get("DATABASE_PORT", "5432")
    }
}

There are a couple of things to notice here. First, the Django app connects to the database using the name we gave the service in our docker-compose.yaml file; in this case, that’s database. Second, we didn’t map port 5432 to anything in docker-compose.yaml, yet we can reach it here. That’s because port forwarding is only needed when you want to access a container’s port from the host machine; containers defined in the same Compose file can talk to each other on any port.

This example uses default values so that we don’t have to set environment variables to connect to PostgreSQL locally. But we could also use os.environ["DATABASE_NAME"] so that it throws an error if the database fields aren’t set, which would be safer in production.
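
To make that concrete, here’s a sketch of the stricter pattern (the required_env helper is hypothetical, not part of Django):

```python
import os

def required_env(key):
    """Fetch a required setting, failing fast at startup if it's missing."""
    try:
        return os.environ[key]
    except KeyError:
        raise RuntimeError(f"{key} must be set") from None

os.environ["DATABASE_NAME"] = "appdb"  # simulate the container environment
print(required_env("DATABASE_NAME"))   # appdb
```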

Adding psycopg2

We’re using psycopg2 as our database driver here, but we haven’t installed it. If you were to run the Docker Compose file right now, you might see an error:

django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 or psycopg module

To fix this, we can simply add it using:

docker build -t myapp .
docker run -v $(pwd):/app -it myapp poetry add psycopg2

This will update pyproject.toml and poetry.lock. Note that I like to keep Poetry installed on my local machine so that I can add dependencies without rebuilding the Docker container, but the example above shows how to do it if you want to keep your development environment truly isolated.

Lastly, psycopg2 has some OS-level dependencies that you can install on your container through your Dockerfile, namely libpq-dev and postgresql. Also, because we’re using slim-bullseye, we’ll need to install gcc to get those to work. At the end of the day, your Dockerfile should look like this:

FROM python:3.12-slim-bullseye

WORKDIR /app

RUN apt-get update -y && apt-get install -y gcc libpq-dev postgresql
RUN pip3 install poetry

COPY pyproject.toml .
COPY poetry.lock .

RUN poetry install

CMD [ "poetry", "run", "python3", "manage.py", "runserver", "0.0.0.0:8000" ]

You should now be able to run this with docker compose up --build. The cool thing about this is that other developers only need to run this command to start all of your project’s components.

Using Docker Compose Health Checks

It’s possible for the webserver to start before the database is ready to accept connections. That may have happened to you when you ran docker compose up at the end of the last section.

Thankfully, Docker Compose allows us to add health checks and specify dependencies between containers. Let’s do that here to prevent Django from starting before our database can accept connections.

Adding Database Health Checks to Docker Compose

In our docker-compose.yaml, we can add a health check using the following syntax:

version: '3'
services:
  web:
    build: ./
    volumes:
      - ./:/app
    ports:
      - 8000:8000
  database:
    image: 'postgres:16.2'
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=appdb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10

volumes:
  pgdata:

This healthcheck runs pg_isready, which checks the database’s status and exits with a non-zero code if it isn’t ready. Compose runs the check every 10 seconds, up to 10 times; as soon as one check succeeds, the container is considered healthy. If all 10 attempts fail, it’s marked unhealthy.
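
The retry behaviour amounts to this loop (a simplified Python model of what Compose does, not its actual implementation; probe stands in for pg_isready):

```python
import time

def wait_until_healthy(probe, retries=10, interval=10.0):
    """Run the probe up to `retries` times, sleeping `interval`
    seconds between attempts; stop as soon as it reports ready."""
    for _ in range(retries):
        if probe():              # pg_isready exiting 0 means "ready"
            return True          # container is marked healthy
        time.sleep(interval)
    return False                 # retries exhausted: marked unhealthy

# Simulate a database that becomes ready on its third check.
checks = iter([False, False, True])
print(wait_until_healthy(lambda: next(checks), interval=0))  # True
```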

Add Dependencies Between Containers

We can tell Docker Compose to wait until one container is up and healthy using the following lines:

version: '3'
services:
  web:
    build: ./
    volumes:
      - ./:/app
    ports:
      - 8000:8000
    depends_on:
      database:
        condition: service_healthy
  database:
    image: 'postgres:16.2'
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=appdb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10

volumes:
  pgdata:

An important note: there’s a difference between a plain depends_on: database and one with condition: service_healthy. If we omit condition: service_healthy, Docker Compose starts our Django app as soon as the database container has started. There’s still a window in which PostgreSQL is starting but not yet ready to accept connections, so we could hit the same issue. Adding condition: service_healthy makes Docker Compose wait until PostgreSQL can actually accept connections.

Issuing Commands to Django

In the previous post, we used docker run to issue commands. If we did that now, though, the command would fail, because a container started with docker run isn’t attached to our Compose network and can’t reach the database.

We can achieve the same result with docker exec, which runs commands inside an already-running container; that means the stack needs to be up first. We can replace our script from the previous post with:

#!/bin/bash
docker exec -it myapp-web-1 poetry run ./manage.py "$@"

Docker Compose automatically names containers using the pattern {directory}-{service}-1, where the project name defaults to the current directory and the final number is the replica index.
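
A sketch of how that default name is derived (simplified; Compose also strips characters that aren’t allowed in project names):

```python
def default_container_name(project, service, index=1):
    """Mimic Compose's default container naming:
    {project}-{service}-{index}, where the project name
    defaults to the current directory, lowercased."""
    return f"{project.lower()}-{service}-{index}"

print(default_container_name("myapp", "web"))  # myapp-web-1
```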

Other Useful Commands

You can restart your Docker containers with:

docker compose restart web

You can add new poetry dependencies with:

docker exec -it myapp-web-1 poetry add {dependency}

You can trigger a rebuild by running:

docker compose up --build --force-recreate --no-deps -d web

As a reference, if you need the code from this post, check it out here.
