Showing posts with label Docker.

Friday, July 7, 2017

Dockerize your Django Web Application


The Docker platform is becoming more and more popular thanks to its ability to create, deploy, and run applications easily using containers.

In this guide we will talk about how to dockerize a Django web application using nginx, gunicorn, and PostgreSQL.
If you are not deeply familiar with Docker, check out this Docker Basic Guide.

Prerequisites

  • Python 2.7 or 3.x
  • Install docker-compose (we can do that by running pip install docker-compose).

Create an empty directory

In this directory (we will name it "myproject") we will put all the files required to set up our Docker configuration properly.

Create a subdirectory named web

This folder will contain our Django project; let's name it mydjangoproject.

Place the requirements.txt file inside the web folder

If you don't have it already, create it. It should be placed at the same level as mydjangoproject and contain all your project dependencies, including at least these three:

Django==1.11.2
gunicorn==19.7.1
psycopg2==2.6
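If you are creating the requirements file from scratch, one quick way to do it from the shell is a heredoc (the pins below are the ones from this guide):

```shell
# write the pinned requirements described in this guide
cat > requirements.txt <<'EOF'
Django==1.11.2
gunicorn==19.7.1
psycopg2==2.6
EOF
grep -c '==' requirements.txt   # counts pinned lines; prints 3
```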
Also inside "web", make a file named Dockerfile (the name is case-sensitive; docker build looks for exactly this name by default) and add the following lines:
FROM alpine

# Install required packages
RUN apk update
RUN apk upgrade
RUN apk add --update python python-dev py-pip postgresql-client postgresql-dev build-base gettext

# Initialize
RUN mkdir -p /data/web
COPY . /data/web/
WORKDIR /data/web/

# Setup
RUN pip install --upgrade pip
RUN pip install -r requirements.txt

# Prepare
RUN mkdir -p mydjangoproject/static/admin

These lines set up our container: they install the necessary packages such as pip and the PostgreSQL client libraries, copy our project into the container, and install the required dependencies.

Create a file called run_web.sh and place it inside the "web" folder

This file contains a script that will be executed when the container starts. We will add the following lines to it:

#!/bin/sh

python manage.py migrate                  # Apply database migrations
python manage.py collectstatic --noinput  # Collect static files

# Start Gunicorn
exec gunicorn mydjangoproject.wsgi:application \
  --bind 0.0.0.0:8008 \
  "$@"

Here we apply the migrations, collect the static files, and start our gunicorn server bound to 0.0.0.0 (all interfaces, not just localhost) on port 8008. Don't forget to add the appropriate host to ALLOWED_HOSTS in your Django settings.
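One detail that is easy to miss: run_web.sh must carry the executable bit, because docker-compose will run it directly as the container command, and COPY preserves the host file's permissions. A quick sketch (using a placeholder script here rather than the real one from this guide):

```shell
# placeholder standing in for run_web.sh, just to illustrate the permission fix
printf '#!/bin/sh\necho ok\n' > run_web.sh
chmod +x run_web.sh   # set the executable bit before building the image
./run_web.sh          # prints: ok
```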

Go back to myproject directory and create a file named docker-compose.yml

This file is the one that will contain the configuration of all the services and how they interact with each other. Let’s add the following code:

version: '2'
services:
  # Postgres database
  postgres:
    restart: always
    image: postgres:latest
    volumes:
      - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
    env_file: ./env
    expose:
      - "5432"
  # Django web service
  web:
    build: ./web
    ports:
      - "3000:8008"
    env_file: ./env
    links:
      - postgres
    depends_on:
      - postgres
    volumes:
      - ./web/mydjangoproject/static:/static
    working_dir: /data/web/mydjangoproject/
    command: /data/web/run_web.sh
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "8001:8001"
    volumes_from:
      - web
    links:
      - web

We can see three services here: "nginx", "web", and "postgres". "nginx" is linked to "web" and "web" to "postgres", so that each can reach the other by name. The "postgres" service uses the latest postgres image from Docker Hub, while "web" and "nginx" are built from their own Dockerfiles. Finally, the "web" service runs the run_web.sh script when its container starts.

All the available options can be found in the official docker-compose documentation.

Nginx configuration:

To configure nginx, create a directory inside myproject called nginx, and inside it a configuration file called default.conf. Then write the following lines:

server {
   listen 8001;
   charset utf-8;
   location /static/ {
       root /;
   }
   location / {
       proxy_pass http://web:8008;
       proxy_set_header Host $host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   }
}

In this configuration we specify the path of our static files (it must match the one set in our settings.py file) and set up the reverse proxy that forwards requests from nginx to our gunicorn server. If you can't see your static folder with your static files, run 'python manage.py collectstatic' and it will collect the statics for you at the root of the project.
Also, since nginx is built from a Dockerfile rather than taken directly from a Docker Hub image, create a file named Dockerfile inside the nginx directory and write the following code:
FROM nginx:alpine
RUN apk update
RUN apk upgrade
RUN apk add --update curl
ADD default.conf /etc/nginx/conf.d/default.conf

Finally, set up the database configuration

This configuration goes in the settings file of our Django project. The database section should look like this:
    
    
# settings.py (add "import os" at the top of the file)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ.get('POSTGRES_DB', 'postgres'),
        'USER': os.environ.get('POSTGRES_USER', 'postgres'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', ''),
        'HOST': 'postgres',
        'PORT': 5432,
    }
}

We have already linked the "web" and "postgres" services in the docker-compose.yml file, which means the "web" service can establish connections to "postgres". Note that the database settings must use the host name "postgres" to reach that container.
The last step is to create a file named 'env' inside the 'myproject' folder with the database configuration. This is the file referenced by the "env_file" option in the docker-compose file.
We should write the following lines in it:

    
    
POSTGRES_DB=socialNetwork
POSTGRES_USER=root
DB_SERVICE=postgres
DB_PORT=5432
POSTGRES_PASSWORD=password

The environment variables can change depending on your database configuration. All the options can be found in the environment variables section of the official postgres image on Docker Hub.
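Since the env file uses the plain KEY=value format, you can also source it in a shell to double-check the values locally (a sketch for inspection only; docker-compose parses the file itself):

```shell
# recreate the env file from this guide and load it into the current shell
cat > env <<'EOF'
POSTGRES_DB=socialNetwork
POSTGRES_USER=root
DB_SERVICE=postgres
DB_PORT=5432
POSTGRES_PASSWORD=password
EOF
set -a      # export every variable assigned while sourcing
. ./env
set +a
echo "$POSTGRES_DB on $DB_SERVICE:$DB_PORT"   # prints: socialNetwork on postgres:5432
```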

It's done!

Everything should be set up now. We only need to go into the 'myproject' folder (the folder where docker-compose.yml is located) and run 'docker-compose build' to build the images and 'docker-compose up' to start the containers.

Then, if you go to localhost:8001, you should see the Django project working.
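Before building, it is worth checking that your tree matches the layout this guide has been assembling. This sketch recreates the skeleton with empty placeholder files so you can compare against your own:

```shell
# skeleton of the layout described in this guide (empty placeholder files)
mkdir -p myproject/web/mydjangoproject/static myproject/nginx
touch myproject/docker-compose.yml myproject/env
touch myproject/web/Dockerfile myproject/web/requirements.txt myproject/web/run_web.sh
touch myproject/nginx/Dockerfile myproject/nginx/default.conf
find myproject -type f | sort   # lists the seven files created above
```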


Monday, July 3, 2017

Starting with Docker

What is Docker?
Docker is a service that lets you package everything an application's server needs for deployment, ensuring that it always runs in the same environment, regardless of the OS where it is installed.

How does it work?

The data is stored in "containers", which run directly on the kernel through the Docker engine, making them more efficient than a normal virtual machine. In a container you can keep anything that can be installed on a server, such as code, system tools, services, databases, libraries, etc.
A container has two main parts: the image and the writable layer. The writable layer holds the content shared with the host OS, which is linked directly to the container file system. The image is where Docker stores the applications, services, and libraries that are to be installed in the container.
These images are built on a base OS image (commonly Debian or Alpine) and can be modified to suit the application's needs. You can also download different images from the official repositories, including services like Nginx, Python, MySQL, MongoDB, etc., and you can even create images and upload them to the community, or to a private Docker Hub repository.

Basic commands

Docker

First of all, you need to install Docker on your OS; you can follow the official guide here: Docker installation guide.
Let's start with something easy. Try running the following command:
docker run -d -P --name nginx_container nginx
This command creates a new container from the official nginx image. The option -d runs the container as a daemon. The -P parameter publishes the ports of the container, which means Docker binds the container's exposed ports to random high ports on the host OS; in this case, the host ports 32771 and 32770 are bound to the container ports 80 and 443 respectively. Using --name you can give the container a custom name.
Now you have an nginx server running in a container. You can check this by opening your browser at 192.168.99.100:32771, where 192.168.99.100 is the default IP address of the Docker machine and 32771 is the host port bound to the container port 80. You can also run the following command to see all the containers:
docker ps -a
Here you can see the container id, image, last command executed, creation time, status, ports, and the container name.
Now, suppose you need to modify the nginx configuration to, for example, create a proxy. In order to do this you have to enter the nginx container, which can be done by running the following command:
docker exec -it CONTAINER_ID bash
This command runs a bash terminal inside the container, allowing you to modify anything on it. The CONTAINER_ID is the id you see when you run the docker ps command; you only need to type enough of its first characters to identify the container unambiguously.
But there is a catch: any changes you make inside the container are lost the next time you create it, although this is still useful for testing configurations.
In order to make persistent changes to the image that you want to deploy, you will need a Dockerfile and Docker Compose, explained in the next sections.
After making changes to the nginx configuration, you need to restart the service for the changes to take effect. To restart a container you have to run this command:
docker restart CONTAINER_NAME
Also, if you wish to just stop the container, you may change restart to stop. Then, to start it again, running the command with start will do the work.
Finally, if you want to delete a (stopped) container, you can run the following command:
docker rm CONTAINER_NAME

Docker compose

Docker Compose is a tool that lets you generate containers and their images automatically, and also bind ports or link files directly to them.
Before you continue, you should install docker-compose. The link to the installation guide can be found below; also, if you are running Docker on a Mac or Windows OS, you may install the Docker set of tools instead:
Docker Toolbox comes with docker-compose and other useful applications, for example Kitematic, which is a tool for managing containers visually.
Docker Compose requires a configuration file called docker-compose.yml, where you define the creation rules of the containers. To explain this, let's start with an example file:
nginx:
  container_name: nginx
  image: nginx
  ports:
    - "80:80"
  volumes:
    # Nginx configuration
    - ./config/nginx/nginx.conf:/etc/nginx/nginx.conf
  links:
    - django
django:
  container_name: django
  build: .
  ports:
    - "32771:8000"
  volumes:
    # Django app
    - ./app:/app
First of all, note that there are two sections, nginx and django, each one representing a different container. Any number of containers may be defined here, and you can give each one a custom name, with the only restriction being that the name can't also be a property.
Inside every section you can see the property container_name, which is the name of the composed container. Be careful with this, because it is not the name you refer to elsewhere in this file.
After that, there is either the image or the build property. These lines tell the composer where to get the image for the container, and you can define only one of them for each container. To get an image from a Docker repository, you define image: docker_repository_name. If you want to create your own image, you define build: /path/to/image, which will run docker build and generate the custom image. This is explained in the next section.
The next property is ports, where you define the ports bound to the container; the syntax is - "host_port:container_port". Following the previous example, to access the nginx container's port 80 you can now use the IP 192.168.99.100, assuming that the Docker machine is using the default IP address.
After that, the volumes section is defined. Here the folders and files linked to the container file system are listed. You define them by writing - ./path/to/host/folder:/path/to/container/folder. The host path should be relative to the location of docker-compose.yml, and the container path is the absolute path where it will be mounted inside the container.
Next, in the nginx section, an additional field called links is defined, where you grant access from one container to another. Using the syntax - container_name:alias, you create an entry in /etc/hosts within the container with the alias name. If you don't define an alias, the name used will be the container name. Be careful with the container name, because it is not the one defined with container_name, but instead it is the section name.
To access the other container you can use http://alias:port, using the port the service listens on inside the other container, not the host-mapped port. For instance, from the nginx container, the Django server can be reached at http://django:8000.

Dockerfile

As mentioned in the previous section, you can create your own image when defining the docker-compose.yml. In order to do this, you should define a Dockerfile.
A Dockerfile defines the rules for creating a new image from an existing one by listing the changes that will be made. To make things easier, let's start with an example:
FROM python:2.7

# Django and gunicorn installation
COPY ./app/requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

EXPOSE 8000

# Gunicorn start
CMD /usr/local/bin/gunicorn -b 0.0.0.0:8000 --pythonpath /app app.wsgi:application

By defining FROM, you indicate the base image that will be used. In this case Django will be installed, so the base image comes from the official Python repository.
When the file is executed, every line makes changes on top of the image and creates a new layer, always starting from the FROM clause.
Another useful command is COPY, which allows you to move static data into the image. It is often used to copy data needed for the installation of packages, for example the requirements.txt needed to install the Django dependencies. Please note that files copied this way are not linked to the host file system, so if you change these files you must rebuild the image for the changes to be reflected.
Now, the most important part of creating an image is installing the software dependencies. To achieve this you use the RUN clause, which runs the command written after it and generates a new layer with the changes made by that command. In this example, the dependencies of the project are installed with RUN pip install -r /app/requirements.txt.
Finally, you need to add a CMD clause, which indicates the command that will be executed every time a container that uses the image starts. In this case, the gunicorn server must start with the container. Please note that there can be only one CMD command; if you define more than one, Docker will execute only the last one.
In this case, other containers need access to the container's port 8000 to reach the gunicorn server. The EXPOSE clause declares the listed ports, separated by blank spaces; note that it only documents them, since the ports are actually published with -p or -P (or the ports section in docker-compose.yml).
You can check a sample project here: Docker example. Here you have all the configuration files ready to deploy a Django server with nginx and gunicorn.