[Header image: the Python logo being carried by a whale, a nod to the Docker logo.]

How (and Why) to Use Docker for Django Local Development


It’s incredibly easy to get up and running with a new project using Django. After making sure that Python is installed on your computer, which it probably is, you can issue two simple commands from your terminal to start a new Django project.

pip install django
django-admin startproject mysite

Now, let’s say you need to connect an external database like PostgreSQL to your Django application. You can easily install the database with brew on macOS or a package manager like apt-get on Linux, if you don’t have it already, and connect to it at localhost:5432 on your laptop.
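
On macOS, for example, that might look something like this (exact formula names vary with your Homebrew setup, so treat this as a sketch):

brew install postgresql
brew services start postgresql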

You may also need a Python package to be able to connect to it. For PostgreSQL above, you usually need psycopg2, which you can install with pip install psycopg2.
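
For reference, here’s a minimal sketch of what the corresponding DATABASES setting in settings.py might look like once psycopg2 is installed (the database name and credentials below are placeholders):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mysite",       # placeholder database name
        "USER": "postgres",     # placeholder credentials
        "PASSWORD": "postgres",
        "HOST": "localhost",
        "PORT": "5432",
    }
}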

Problems with this Approach

For many Django projects, there is absolutely nothing wrong with the series of steps that I took in the previous section. However, using system-wide package management, such as brew and pip, has several crucial issues, especially as you add complexity to your Django project in terms of dependencies and the number of developers. A good local environment setup should be three things: isolated, portable, and reproducible; and the workflow above is none of those three things.

Isolation

Isolation is the idea that your local development environment should be completely self-contained and unaffected by dependencies from other projects that you’re working on. When we ran pip install psycopg2 and brew install postgresql, we added those dependencies directly onto our laptop. This means that if we work on other projects that require psycopg2 or postgresql then we may need to use the same version across projects for our code to work. This is generally fine when you’re talking about a difference in minor or patch versions, but can lead to headaches when there’s a difference in major versions.

The typical flow of starting and running a Django app lacks isolation because our local development environment may cause conflicts with other projects on our laptop.

Portability

Portability is simply the ability to transfer your projects between different laptops that are set up differently. This might come in the form of running different operating systems or different versions of the same operating system. It could also come in the form of different developers having isolation issues on their machines.

Because the standard Django development setup is not isolated and dependencies are installed directly onto your laptop, you introduce extra variables that depend on how your laptop is set up. This may cause issues if you ever change computers or invite other developers to work on your project with you.

Reproducibility

Reproducibility is very simply the ease with which you can redo the steps that you’ve already done. In general, the goal of a reproducible architecture is to minimize the effect of the previous state on the next state.

For example, if you are giving directions to your house to someone, telling them to walk one hundred steps forward and two hundred steps to the left is dependent on where they start, how long their legs are, and what direction they are facing when they start. You have no guarantee that they end up at your house.

Similar to the other issues above, installing dependencies directly onto your laptop means that those dependencies stick around until you manually remove them. This is stateful, so it can be incredibly difficult to determine your starting point when you invite other developers, add new dependencies, or change machines.

Enter Docker

For the uninitiated, Docker is a software tool developed in 2013 that achieves OS-level virtualization through a concept known as containers. More simply, similar to a virtual machine, Docker allows you to create an isolated virtual operating system that runs on your computer and is separate from your computer’s operating system. These virtual operating systems are created declaratively through a file called a Dockerfile.

These Dockerfiles look something like this:

FROM ubuntu:24.04

RUN apt-get update -y
RUN apt-get install -y cowsay

CMD [ "/usr/games/cowsay", "I am in a docker container" ]

This particular Dockerfile would create a virtual Ubuntu environment, update the package manager, install the command line tool cowsay, and print out an ASCII cow saying “I am in a docker container.”
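
If you want to try it yourself, you can save the file as Dockerfile and build and run it with something like the following (we’ll break down what these two commands do later in the post; cowsay-demo is just a name I picked):

docker build -t cowsay-demo .
docker run cowsay-demo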

How Docker Achieves Isolation, Portability, and Reproducibility

If you run the Docker container above and then go back to your terminal, cowsay won’t work unless you’ve previously installed it. Likewise, if you install cowsay on your laptop and remove the line that installs it from the Dockerfile above, you should get an error that mentions something about cowsay not being available. This is because Docker creates an environment that is isolated from your host operating system’s environment. No matter what you or the other developers on your team have installed locally, the Docker container will only contain the tools that you have declared in the Dockerfile.

The Docker container will always be built from scratch with commands in the order that they appear in the Dockerfile. This is incredibly important for reproducibility since it gives us the same starting point and the same direction every time. If you need to solve a problem with your local development setup or add a new dependency, you’ll be able to do so from a fresh install. The builds act more or less deterministically.

This also helps Docker be incredibly portable. For one, the only thing that the developers on your team need to install to run your local environment is Docker itself. And because the builds are essentially deterministic, they’re guaranteed to be able to get their local environments into the same state as yours.

Using Docker Versus Virtualenv

Python aficionados may be wondering how Docker differs from creating a virtual environment since a virtual environment also helps us isolate our installed Python dependencies. Virtual environments are great, but Docker offers two primary advantages over just a simple Python virtual environment.

The first advantage is that Python environments created with Docker are rebuilt from scratch when the dependencies change. When working with a Python virtual environment, someone on your team may update a package and you may not get the updated version right away. This isn’t a big deal since you can destroy and recreate your virtual environment, but you get this behavior almost for free with Docker.

The second advantage is that virtual environments are limited to your Python dependencies. Docker can be used to create dependencies outside of your Python app, such as your database; I link to a tutorial for exactly that at the end of this post.
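
As a quick preview of the database side, and assuming nothing else is listening on port 5432, something like the following would give you a disposable PostgreSQL instance (the container name and password here are placeholders):

docker run --name mysite-db -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:16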

Using Docker with Django

Hopefully, you’re convinced that using Docker to set up your local environment is a worthwhile endeavor and you’re ready to learn how to do it, because that’s exactly what we’re going to do in this section.

By the end of this section, you’ll have a Django app running locally in Docker. This environment will be completely isolated, portable, and reproducible.

While this post does not intend to be a tutorial for how to use Docker, I’ll do my best to explain as much as you need to understand to use it to set up your local environment. At the end of this post, I’ll link to some good Docker tutorials as well as another tutorial for setting up PostgreSQL.

Before You Start

You do need one thing on your laptop to get started, which is Docker itself. You can find the installation instructions for Docker here. You’re also going to want to create a directory for your project if you don’t already have one.

Lastly, you need to know what version of Python you are running or want to run for this project. There’s no right answer to this, as long as the Python version hasn’t reached end-of-life. You can check the status of Python versions here. At the time of writing, the current stable feature release is 3.12, so we’re going to use that, but you can substitute your version in all of the places where you see 3.12 in my code.

Creating the Dockerfile

Remember from the section above that the blueprint for a Docker container is a Dockerfile. The first line of the Dockerfile always specifies what operating system we’re going to create. In the example above, we created a virtual Ubuntu (version 24.04) environment with the syntax FROM ubuntu:24.04. You can view all the possible starting points on Docker Hub.

We’re going to use the tag python:3.12-slim-bullseye for our environment. You may be thinking that python is not an operating system like ubuntu, and you’re right. Technically speaking, the names don’t have to correspond directly to an operating system. In this case, we’re using the python tag because it’s created for Python projects. The two parts of the tag specify the version of Python installed (3.12) and the operating system (slim-bullseye).

Generally, your goal when building a Docker container is to keep it as small as possible. For the Python tags, all of the versions correspond to a different flavor of Linux, with alpine generally being the smallest and slim being a modifier that indicates that we’re working with the bare-bones version of whatever OS we choose. In this case, we’re using Bullseye (a Debian version).

In my experience, alpine tends to be more complicated to run and requires a ton of extra installation to get everything that you need. In a true production environment, that effort can be worth it for the smaller image. But for now, we’re going to use slim-bullseye.

FROM python:3.12-slim-bullseye

CMD [ "python", "--version" ]

The CMD line tells Docker what to run once the operating system is done building. We’re going to use python --version as a placeholder for now. You can run the Docker container with the following commands:

docker build -f Dockerfile -t mysite .
docker run -it mysite

It should print out something like Python 3.12.2 if it works correctly. You can use a specific patch version (the last number in the version) if you’d like. Omitting the patch version means that we’ll automatically get the latest patch version when we rebuild our containers from scratch. This is generally safe to do. Including it would mean that we’ll always pull the same version.
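
For example, if you did want to pin a patch version, the first line would look something like this (assuming that exact tag exists on Docker Hub):

FROM python:3.12.2-slim-bullseye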

Of the two commands above, the first builds the Docker image and the second runs it as a container.

The -f flag in the build command specifies which file to use. If your file is called Dockerfile specifically, you can omit this. Some people create Dockerfiles for different environments, such as dev, test, and prod, and these Dockerfiles can have different names.

The -t flag in the build command gives our Docker image a name. In this case, mysite. You can also give it a version if you want, such as mysite:v1 or mysite:{git commit SHA}, but for development, it’s fine to leave this off.

Lastly, the -it flags are used to connect our terminal to the Docker container so that we can see the output and issue commands to our Docker container.

Adding Poetry for Better Dependency Management

Python’s default package manager is pip. pip is a great utility for managing dependencies, but it is pretty basic. Poetry is a tool similar to pip that does a much better job of resolving dependency versions. It’s also very simple to use. Let’s add it to our Dockerfile.

FROM python:3.12-slim-bullseye

RUN apt-get update -y
RUN pip3 install poetry

CMD [ "python", "--version" ]

It’s generally a good idea to make sure that apt-get’s package index is up to date, so we always run apt-get update before installing anything.

We can use poetry to create our project with poetry init. poetry init will create a file called pyproject.toml, which is used for declaring what our dependencies are and what versions we need. However, if we run this in our container, we’ll create the files inside of the container and won’t be able to access them from our host operating system, since the two are isolated from each other. Docker has a way of dealing with this: volumes.

A volume is a way of sharing files between the host operating system and the docker container. They are created when you run the Docker container with the -v flag. So we can create one by running docker with the following command:

docker build -f Dockerfile -t mysite .
docker run -it -v $(pwd):/app mysite ls

Every time we make a change to our Dockerfile, we need to rebuild the container using the build command.

Lastly, you might notice that I added ls to the end of the run command. This overrides whatever is in the CMD in the Dockerfile and issues this command. In this case, this will run ls inside of our Docker container and display what’s in the working directory.

Before we run this, we’re going to do two things. First, let’s create a file so that we know that it worked; we can do this with touch itworked.txt. Second, let’s modify our Dockerfile to run from the /app directory instead of / by adding a WORKDIR instruction:

FROM python:3.12-slim-bullseye

WORKDIR /app

RUN apt-get update -y
RUN pip3 install poetry

CMD [ "python", "--version" ]

After a rebuild, running the command above should print out Dockerfile and itworked.txt. Now, we can run poetry from inside our container by adding poetry init to the end of our run command, like so:

docker run -it -v $(pwd):/app mysite poetry init

This will bring up an interactive prompt in which you can declare django as a dependency. If you don’t add it here, you can always run the following command to install Django (or any other dependency):

docker run -it -v $(pwd):/app mysite poetry add {package}
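
For reference, after adding Django, your pyproject.toml might look something like the sketch below; your exact versions and metadata will differ:

[tool.poetry]
name = "mysite"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.12"
django = "^5.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"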

An important note here is that every time you add dependencies with poetry, you will need to rebuild your container. Lastly, to actually resolve and install your dependencies, you can run:

docker run -it -v $(pwd):/app mysite poetry install --no-root

This will create a poetry.lock file, which tells poetry which versions of our dependencies to install and which versions of our dependencies’ dependencies to install. We can also tell Docker to install our dependencies every time with the following modifications to our Dockerfile:

FROM python:3.12-slim-bullseye

WORKDIR /app

RUN apt-get update -y
RUN pip3 install poetry

COPY pyproject.toml .
COPY poetry.lock .

RUN poetry install --no-root

CMD [ "python", "--version" ]

Poetry creates a virtual environment that contains all of our dependencies. We can run commands in it with poetry run {command}, so issuing commands inside of our Docker container should look like this:

docker run -it -v $(pwd):/app mysite poetry run {command}
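
For example, here’s a quick sanity check that Django is installed inside the container’s virtual environment (assuming you added django above):

docker run -it -v $(pwd):/app mysite poetry run python3 -c "import django; print(django.get_version())"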

Adding our Django Project

We can run Django commands inside of the virtual environment that poetry creates:

docker build -f Dockerfile -t mysite .
docker run -it -v $(pwd):/app mysite poetry run django-admin startproject myapp

As a personal preference, I like to move the contents of the generated myapp directory up into the project root.

mv myapp/ myapp2/
mv myapp2/* .
rm -rf myapp2

The last Docker concept that we need to understand to get Django to run is port forwarding. When we run Django with manage.py runserver, it will start running in the container, but we can’t go to localhost:8000 and see the app unless we tell Docker to route port 8000 on our laptop to the container. This is fairly easy with -p 8000:8000 in the run command. If we run:

docker run -it -p 8000:8000 -v $(pwd):/app mysite poetry run python3 manage.py runserver 0.0.0.0:8000

We should be able to go to localhost:8000 and see our Django app. Lastly, let’s copy that command into the last line of our Dockerfile so we don’t have to type all that out anymore. Our final Dockerfile should look like this:

FROM python:3.12-slim-bullseye

WORKDIR /app

RUN apt-get update -y
RUN pip3 install poetry

COPY pyproject.toml .
COPY poetry.lock .

RUN poetry install --no-root

CMD [ "poetry", "run", "python3", "manage.py", "runserver", "0.0.0.0:8000" ]

A rebuild and a rerun should yield the same results.
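
Concretely, that rebuild and rerun look like this, reusing the same flags from before:

docker build -f Dockerfile -t mysite .
docker run -it -p 8000:8000 -v $(pwd):/app mysite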

It should be noted that, while Docker is a great tool for production deployments, this Dockerfile should not be used for production purposes.

Issuing Commands

If you’re familiar with Django, you know just how important it is to be able to run management commands, such as migrate.

You can run one-off commands like these in the container with the incredibly verbose:

docker run -it -v $(pwd):/app mysite poetry run ./manage.py

I typically like to create a simple shell script, saved as ./scripts/manage, to cut down on the verbosity.

#!/bin/bash
docker run -it -v $(pwd):/app mysite poetry run ./manage.py "$@"

After running chmod +x ./scripts/manage to make the script executable, I can then run migrate with:

./scripts/manage migrate
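
You could wrap the server command the same way with another hypothetical script, say ./scripts/server:

#!/bin/bash
docker run -it -p 8000:8000 -v $(pwd):/app mysite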

There you have it! A local Django development environment running inside of Docker. I also have a post here that outlines how we can add PostgreSQL to our local development environment with Docker.

Likewise, if you ever need the code from this post, check it out here.

Docker Tutorials

This post hopefully helped you get up and running with Docker. There’s a lot more to Docker than just what I’ve covered here, and there’s a lot that I’ve left out. Here are some tutorials that I’ve found helpful for learning Docker myself.

If you’re able to spend money on a course, I’d highly recommend Stephen Grider’s Docker and Kubernetes Udemy Course. Stephen is a great engineer and a great teacher.

There is also a free course on YouTube by Programming with Mosh that is quite good!

For people who like to read, as opposed to watch or listen, a great place to start is Docker’s official getting started guide.
