Docker lets us easily set up an isolated Django service and share our project without worrying about dependencies.
In the last post, "Are environments enough? Docker will be your new friend", I explained what Docker is and how easy it is to run Python code in a Docker container. However, while that example was easy to understand, it wasn't a very common use case.
Therefore, in this post we will explain how to run a Django project in an isolated container using our own image. In addition, we will use a second service to run a PostgreSQL database alongside Django, which is one of the advantages of Docker.
What is Django?
Django is a framework used to implement anything from APIs to an entire website, with both its backend and frontend layers. A very common use is the deployment of models that we have trained, so that clients can use them while avoiding possible problems caused by code modifications. There are many possibilities with this framework, and also many advantages:
- Built-in protections for the requests the server receives, against attacks like SQL injection, XSS, CSRF...
- Access to database data without having to implement the functions from scratch.
- As Django's adoption grows, new libraries are continuously being added that help us build safer, faster microservices.
More info: Docker
Docker Implementation
Requirements.txt
First, we are going to create the requirements.txt file with all the packages we need for the project; in this case, Django and the PostgreSQL connector.
Django>=3.0,<4.0
psycopg2-binary>=2.8
Dockerfile
We have to create a Dockerfile that will be used to build our image, executing its instructions sequentially. This file must contain the code below.
FROM python:3.8
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
FROM: Specifies the image we need from Docker Hub (the online Docker registry), in this case the latest Python 3.8 image.
ENV: Sets an environment variable so that all Django logs and messages are shown in the terminal immediately, avoiding output buffering.
WORKDIR: Creates the /code folder if it does not exist and changes the working directory to it inside our container (the RUN mkdir line above it does the creation explicitly).
COPY: Copies the requirements.txt file with the packages we need.
RUN: Installs the packages listed in requirements.txt.
COPY: Copies all our project files into the container.
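To see what the ENV line buys us, here is a toy sketch (not part of the project) that runs a child Python process with PYTHONUNBUFFERED=1 set, the same variable the Dockerfile sets; with it, print output reaches the pipe immediately, which is why Django's logs show up live under docker logs:

```python
import os
import subprocess
import sys

# Toy demonstration of ENV PYTHONUNBUFFERED=1: the child process writes to
# stdout unbuffered, so log lines appear immediately instead of waiting in
# stdio buffers (relevant when stdout is a pipe, as under `docker logs`).
child_code = "print('hello from the container')"
result = subprocess.run(
    [sys.executable, "-c", child_code],
    env={**os.environ, "PYTHONUNBUFFERED": "1"},
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```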
docker-compose.yml
Unlike the first project, where we ran our image as a single service, in this case we need one service running the Django web application and another one for the database. To run things this way, we must create a new file called docker-compose.yml.
version: "3.8"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_PORT=5432
      - POSTGRES_DB=mydb
      - POSTGRES_USER=test_user
      - POSTGRES_PASSWORD=test_pass
    volumes:
      - ./pgsql:/var/lib/postgresql/data
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_PORT=5432
      - POSTGRES_DB=mydb
      - POSTGRES_USER=test_user
      - POSTGRES_PASSWORD=test_pass
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
We are going to run two services, as we have already said, named 'web' and 'db'. You can give each service whatever name you like. These files must follow a certain structure and set of keys; for more information, see Docker Compose.
- image: Similar to the FROM instruction; it indicates which image you want to use.
- environment: Specifies the environment variables we want to pass to our container so that they can be accessed from anywhere inside it. Alternatively, you can keep the variables in a file and link it with the env_file key.
- build: Indicates the directory containing the Dockerfile you want to build.
- command: The command to execute once the container starts; in this case, running the Django development server.
- volumes: Specifies the local directory and the container directory we want to keep synchronized, so changes apply in real time and nothing is lost when the container shuts down.
- ports: Maps the port the application uses inside the container to the port exposed on our system. It's the same as using the -p parameter.
- depends_on: Indicates the dependency between the two services, so 'db' is started before 'web'.
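As a sketch of the env_file alternative mentioned above, the same compose file could look like this, assuming a hypothetical .env file in the project root holding the same POSTGRES_* variables:

```yaml
# Hypothetical .env file next to docker-compose.yml:
#   POSTGRES_HOST=db
#   POSTGRES_PORT=5432
#   POSTGRES_DB=mydb
#   POSTGRES_USER=test_user
#   POSTGRES_PASSWORD=test_pass
services:
  db:
    image: postgres
    env_file:
      - .env
  web:
    build: .
    env_file:
      - .env
```

This keeps the credentials in one place and out of the compose file itself.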
Execute docker-compose
We have to create an image from our Dockerfile configuration. To do this, we execute the following command, which builds every service that has a "build" key in docker-compose.yml:
docker-compose -f docker-compose.yml build
Also, if we want to list the images we have on our system:
docker images -a
Images created
There are two images: the one downloaded by the FROM instruction and the one built from our own configuration.
Before running our Docker Compose setup, we need to create the Django project itself. To do this, we execute the startproject command inside the web service (web is the name we chose in the compose file). This will also start the db service it depends on and download the PostgreSQL image.
docker-compose run web django-admin startproject my_django_web
Images in our system
With the next command, Docker will list our containers:
docker ps -a
There are two containers. The first one was only responsible for executing the configuration set in the "web" service and the command we passed to generate the project. The second one is in charge of running the database.
All containers
Thanks to the shared volume, the project generated inside the container automatically appears in our local directory. Note that startproject creates a my_django_web folder containing manage.py; since the compose command runs python manage.py from /code, its contents must be moved up to the project root so the layout matches the tree below.
In order to use the database we have configured in Docker Compose, the database settings of the project must be changed.
|----pgsql
|----docker-compose.yml
|----Dockerfile
|----manage.py
|----requirements.txt
|----my_django_web
     |----__init__.py
     |----asgi.py
     |----settings.py (THIS FILE)
     |----urls.py
     |----wsgi.py
The DATABASES variable must be changed from the default SQLite to PostgreSQL:
###############SQLite###############
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    }
}

###############PostgreSQL###############
# We must import the os package to read the environment variables
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['POSTGRES_DB'],
        'USER': os.environ['POSTGRES_USER'],
        'PASSWORD': os.environ['POSTGRES_PASSWORD'],
        'HOST': os.environ['POSTGRES_HOST'],
        'PORT': os.environ['POSTGRES_PORT'],
    }
}
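A small variation (a sketch, not part of the original settings) is to read the variables with os.environ.get and local fallbacks, so the same settings.py also runs outside the container; the fallback values below are illustrative assumptions:

```python
import os

# Read the connection settings from the environment, falling back to local
# defaults when a variable is unset; the fallback values are assumptions.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_DB', 'mydb'),
        'USER': os.environ.get('POSTGRES_USER', 'test_user'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', 'test_pass'),
        'HOST': os.environ.get('POSTGRES_HOST', 'localhost'),
        'PORT': os.environ.get('POSTGRES_PORT', '5432'),
    }
}
```

Inside the container the compose file provides every variable, so the defaults are never used there.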
Finally, we have to run our Docker Compose:
docker-compose up
Check http://127.0.0.1:8000/ and... "voilà"! Our Django project is running in an isolated container. Moreover, it's easy to share and avoids possible dependency problems.
Django template
It may seem like a lot of configuration compared to using Django without Docker. However, once you learn how it works, it's very easy to dockerize any application.
Also, stopping and removing the Docker Compose resources is very easy; you just have to execute the following commands (stop halts the containers, while down also removes the containers and the network):
docker-compose stop
docker-compose down