Starting with Docker

What is Docker?
Docker is a service that allows you to package everything an application needs to run, so that it can be deployed and always run in the same environment, regardless of the OS where it is installed.

How does it work?

The data is stored in “containers”, which run directly on the host kernel through the Docker engine, making them more efficient than a normal virtual machine. In a container, you can save everything that can be installed on a server, such as code, system tools, services, databases, libraries, etc.
A container has two main parts: the image and the writable layer. The writable layer holds the container's own changes and the content shared with the host OS, and it is directly linked to the container file system. The image is where Docker stores the applications, services, and libraries that are installed in the container.
These images are usually created from a base OS such as Debian and can be modified to suit the application's needs. You can also download different images from the official repositories, including services like Nginx, Python, MySQL, MongoDB, etc., and you can even create images and upload them to the community, or to a private Docker Hub repository.
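For example, you can fetch one of these official images ahead of time and check what is available locally (the image names below are just examples):

```shell
# Download the official nginx image from Docker Hub
docker pull nginx

# Search Docker Hub for images matching a term
docker search mysql

# List the images stored locally
docker images
```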

Basic commands


First of all, you need to install Docker on your OS. You can follow the official guides here: Docker installation guide.
Let’s start with something easy. Try running the following command:
docker run -d -P --name nginx_container nginx
This command creates a new container from the official nginx image. The -d option runs the container as a daemon. The -P parameter publishes the ports of the container, which means Docker binds the container ports to randomly assigned ports on the host OS; in this case, the host ports 32771 and 32770 are bound to the container ports 80 and 443 respectively. Using --name you can give the container a custom name.
Now you have an nginx server running in a container. You can check this by opening your browser at the default IP address of the Docker engine, on the port bound to the container's port 80. You can also run the following command to see all the containers:
docker ps -a
Here you can see the container ID, image, last command executed, creation time, status, ports, and the container name.
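If you only want the port bindings of a specific container, `docker port` prints them, and you can check that the server answers with curl (the port numbers shown are examples, and on Mac or Windows you would use the Docker machine's IP instead of localhost):

```shell
# Show the host ports bound to the container's exposed ports
docker port nginx_container

# Example output (your port numbers may differ):
# 80/tcp -> 0.0.0.0:32771
# 443/tcp -> 0.0.0.0:32770

# Fetch the nginx default page headers through the bound port
curl -I http://localhost:32771
```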
Now, think that you need to modify the nginx configuration to, for example, create a proxy. In order to do this you have to enter the nginx container, which can be done by running the following command:
docker exec -it CONTAINER_ID bash
This command runs a bash terminal on the container, allowing you to modify anything in it. The CONTAINER_ID is the ID you see when you run the docker ps command, but don't worry: you only need to type enough of its first characters to identify the container uniquely.
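For instance, from inside the container you could edit the nginx configuration to set up a proxy. The snippet below is only a minimal sketch; the backend address is a placeholder you would replace with your own service:

```nginx
# /etc/nginx/conf.d/default.conf (sketch)
server {
    listen 80;

    location / {
        # Forward every request to a backend service;
        # replace the address with your own application's.
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
```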
But there is a catch: any changes you make to the container will be gone the next time you create a container from the image, although this is useful for testing configurations. In order to make persistent changes to the image that you want to deploy, you will need Docker Compose, explained in the next section.
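You can actually see what the writable layer holds with docker diff, which lists everything that changed in the container relative to its image:

```shell
# List files added (A), changed (C) or deleted (D)
# in the container compared to its image
docker diff nginx_container
```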
After making changes to the nginx configuration, you need to restart the container for the changes to take effect. To restart it, run this command:
docker restart CONTAINER_NAME
Also, if you just wish to stop the container, you can change restart to stop. Then, to start it again, running the command with start will do the job.
Finally, if you want to delete a container, you can run the following command:
docker rm CONTAINER_NAME
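Note that a running container cannot be removed directly; stop it first, or force the removal with -f. A typical cleanup sequence looks like this:

```shell
# Stop the running container
docker stop nginx_container

# Remove the stopped container
docker rm nginx_container

# Or do both in one step by forcing the removal
docker rm -f nginx_container
```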

Docker compose

Docker Compose is a tool that lets you generate containers and their images automatically, and also bind ports or link files directly to them.
Before you continue, you should install docker-compose. The link to the installation guide can be found below; if you are running Docker on a Mac or Windows OS, you may install the Docker set of tools instead:
Docker Toolbox comes with docker-compose and other useful applications, for example Kitematic, a tool for managing containers visually.
Docker compose requires a configuration file called docker-compose.yml, where you define the creation rules of the container. To explain this let’s start with an example file:
nginx:
  container_name: nginx
  image: nginx
  ports:
    - "80:80"
  volumes:
    # Nginx configuration
    - ./config/nginx/nginx.conf:/etc/nginx/nginx.conf
  links:
    - django
django:
  container_name: django
  build: .
  ports:
    - "32771:8000"
  volumes:
    # Django app
    - ./app:/app
First of all, please note that there are two sections, nginx and django, each one representing a different container. Any number of containers may be defined here, and you can give each one a custom name, with the only restriction being that the name can't also be a property.
Inside every section, you can see the property container_name which is the name of the composed container. Be careful with this, because this is not the name you refer to in this file.
After that, there is either the image or build property. These lines tell the composer where to get the image for the container. You can define only one of them for each container. In order to get an image from a docker repository, you have to define image: docker_repository_name. If you want to create your own image, you have to define build: /path/to/image, which will run docker build and generate the custom image. This will be explained in the next section.
The next property is ports, where you define the ports bound to the container. The syntax is: - "host_port:container_port". Following the previous example, to access the nginx container's port 80 you can now use the Docker machine's IP address on port 80, assuming that the Docker machine is using the default IP address.
After that, the volumes section is defined. Here the folders and files linked to the container file system are listed. You can define them by writing: -./path/to/host/folder:/path/to/container/folder. Here, /path/to/host should be a relative path to the linked folder or file, starting from the location of the docker-compose.yml, and /path/to/container/folder is the absolute path where it will be located inside the container.
Next, in the nginx section, an additional field called links is defined, where you can define the access from one container to another. Using the syntax - container_name:alias, you create an entry in /etc/hosts within the container with the alias name. If you don't define an alias, the container name itself is used. Be careful with the container name here, because it is not the one defined with container_name, but instead it is the section name.
To access the other container you can use http://alias:port, using the port the other container listens on internally (not the one published on the host). For instance, from the nginx container, the Django server can be accessed with http://django:8000.
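With the docker-compose.yml in place, the whole stack can be created and started from the directory that contains the file:

```shell
# Build images if needed, create the containers and start them as daemons
docker-compose up -d

# List the containers managed by this compose file
docker-compose ps

# Stop and remove the containers when you are done
docker-compose down
```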


Dockerfile

As mentioned in the previous section, you can create your own image when defining the docker-compose.yml. In order to do this, you should define a Dockerfile.
A Dockerfile defines the rules for the creation of the image from an existing one by writing the changes that will be made. To make things easier let's start with an example:
FROM python:2.7

# Django and gunicorn installation
COPY ./app/requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Port used by the gunicorn server
EXPOSE 8000

# Gunicorn start
CMD /usr/local/bin/gunicorn -b 0.0.0.0:8000 --pythonpath /app app.wsgi:application
By defining FROM, you indicate the base image that will be used. In this case, Django will be installed, so the base image comes from the official Python repository.
When the Dockerfile is executed, every line makes changes on top of a copy of the base image and creates a new layer with them, always starting from the FROM clause.
Another useful command is COPY, which allows you to move static data into the image. It is often used to copy data needed for the installation of packages, for example the requirements.txt needed to install the Django dependencies. Please note that files copied this way are not linked to the host file system, so if you make changes to these files you must rebuild the image for them to be reflected.
Now, the most important part of creating an image is installing the software dependencies. To achieve this you use the RUN clause, which runs the command written after it and generates a new layer with the changes made by that command. In this example, the dependencies of the project are installed using RUN pip install -r /app/requirements.txt.
Finally, you need to add a CMD clause, which indicates the command that will be executed every time a container that uses the image starts. In this case, the gunicorn server must start with the container. Please note that there can be only one CMD command defined; if you define more than one, Docker will execute only the last one.
In this case, the image needs to give access to the container's port 8000, where the gunicorn server runs. To allow this access you need to define an EXPOSE clause, which exposes the ports written after it, separated by blank spaces, to linked containers (to reach them from the host they still have to be published with -p, -P, or the ports section of docker-compose.yml).
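You can also test a Dockerfile on its own, without docker-compose, by building and running the image manually (the tag and container names here are just examples):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t django_app .

# Run it, binding the exposed port 8000 to host port 32771
docker run -d -p 32771:8000 --name django_container django_app
```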
You can check a sample project here: Docker example. It has all the configuration files ready to deploy a Django server with nginx and gunicorn.

