Serverless Django


In the last few years, since around 2016, a concept known as "serverless infrastructure" has been gaining popularity.

Having a serverless infrastructure allows you to have a project running live, but without any permanent infrastructure. This means that when no requests are arriving for the server, there will be no servers actively running. When a request is received, the server will turn on, answer the request, and then shut down.

This serverless approach has one big benefit: server resources are only in use while they are actually required, leaving the problem of paying for idle servers in the past.
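The execution model described above can be sketched with a minimal Lambda-style handler. The function name and event shape here are illustrative, not a specific AWS API:

```python
# Minimal sketch of the Lambda execution model: only this function runs, and
# only while an event (request) is being processed; there is no always-on server.
def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

AWS invokes a handler like this once per request and keeps nothing running in between; Zappa's job, as we will see below, is to route incoming web requests into Django through such a handler.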


Nowadays there are several cloud providers that support this kind of infrastructure. Some of the most popular are:

  • AWS Lambda
  • Google Cloud Functions
  • Azure Functions

This tutorial will focus on AWS as it is one of the most widely used cloud providers in the world.

AWS provides serverless capabilities through its Lambda service. Quoting the AWS Lambda description:

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

Quite cool, right? Especially these two aspects:

  • Takes care of running and scaling your code (no need to worry about the horizontal scale, Lambda has that covered).
  • Only pay for what you use (save money by only paying for actual usage, no usage, no charge).
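The pay-per-use point can be made concrete with a back-of-the-envelope calculation. The prices below are illustrative assumptions, not current AWS figures; check the AWS pricing page before relying on them:

```python
# Back-of-the-envelope Lambda cost sketch. Prices are assumed, for illustration only.
GB_SECOND_PRICE = 0.0000166667    # assumed price per GB-second of compute
REQUEST_PRICE = 0.20 / 1_000_000  # assumed price per request

def monthly_cost(requests, avg_duration_s, memory_gb):
    # Cost tracks actual usage: compute time consumed plus a per-request fee.
    compute = requests * avg_duration_s * memory_gb * GB_SECOND_PRICE
    return compute + requests * REQUEST_PRICE

# 1M requests/month, 200 ms each, 512 MB: roughly $1.87 with these assumed
# prices -- and zero requests would cost zero, since idle time is never billed.
print(round(monthly_cost(1_000_000, 0.2, 0.5), 2))
```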


Regarding deploying Django apps on a serverless infrastructure, there are two big players that can help us:

  • Zappa
  • Serverless Framework

Both frameworks are up to the task, but Zappa is focused on serverless Python, while Serverless can deploy code written in several programming languages, such as Node.js, Python, Java, and PHP.

For this tutorial, we will use Zappa for its simplicity and because it is Python-focused.

Getting started

First, we need to make sure we have an AWS account with root access. If you don’t have an account with that level of permissions you will need to create an IAM user with programmatic access that has permissions to the following services:

  • Cloud Formation
  • IAM
  • Lambda
  • S3
  • API Gateway
  • CloudWatch
  • Other services you would want to use

There is currently an open task in Zappa for a proper definition of the permissions needed. In the meantime, there are two approaches you can take: be flexible with the permissions, or do some trial and error until you reach the minimal permissions your app needs. It's recommended to remove the IAM permission after Zappa creates the user the first time.

After the user is ready, make sure you have a proper awscli configuration; you can find quite a good guide on the Serverless page.

The Project

We will be deploying a simple project that was developed for this tutorial. The project itself is not really important, so please focus on learning how to add Zappa support to an existing Django project. The project can be found here.

First of all, let’s install the project:

# git clone <project-repository-url>
# pip install -r requirements/dev.txt

Note: you will have to create an S3 bucket to hold your SQLite database, then update the database settings accordingly.

Then let’s create a configuration file using the following command:

# zappa init

After running that command you'll be asked a few questions. I recommend you leave everything as default if it’s your first time.

A file called “zappa_settings.json” should have been created. It will look something like this:

"dev": { "aws_region": "us-east-1", "django_settings": "", "profile_name": "default", "project_name": "innuy-lambda", "runtime": "python3.6", "s3_bucket": "bucketname" } }

Zappa has many additional configuration settings. If you need something more specific, please take a look at the documentation for more advanced use cases.

Let’s now run the following command to upload our code to Lambda:

# zappa deploy

If the deployment process succeeds, you should see a URL for your deployed app. If you add "/admin" to that URL, you should be taken to the Django admin… but the static files are missing!

We can solve this by setting a few variables in the dev settings file (or via environment variables in Zappa). In this case, just set the bucket where you want the static files to live (if you don't have a bucket, you can create one with the wizard in AWS S3):

AWS_STORAGE_BUCKET_NAME = 'innuylambda-static'
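For the bucket to actually serve the static files, the settings usually also need a storage backend. A minimal sketch, assuming the django-storages and boto3 packages are installed and listed in the requirements:

```python
# Sketch of the static-files settings; assumes django-storages + boto3.
AWS_STORAGE_BUCKET_NAME = 'innuylambda-static'
AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'

# Serve statics directly from the S3 bucket.
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/static/'
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
```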

Then update the code in lambda using:

# zappa update

Now let's collect the static files:

# zappa manage dev "collectstatic --noinput"

You should now see the admin perfectly. You can try creating a user by running the project locally and running the createsuperuser command from your local environment.

Your app should now be running completely serverless!

To undeploy run the following command:

# zappa undeploy

And everything is off (the buckets are still there though).

Deploying a Django project in Lambda using Zappa is quite easy. 

I recommend you to try it out!

Installation of Docker Swarm using Ansible


In this tutorial we will cover how to install Docker Swarm on two nodes, one manager and one worker, using Ansible. Along the way, we will briefly cover how Ansible itself works and all the steps necessary for an automated installation of Docker Swarm.

What is Ansible?

Ansible is an open-source automation engine that automates software provisioning, configuration management, and application deployment.

As with most configuration management software, Ansible has two types of servers: controlling machines and nodes. First, there is a single controlling machine which is where orchestration begins. Nodes are managed by a controlling machine over SSH. The controlling machine describes the location of nodes through its inventory.

To orchestrate nodes, Ansible deploys modules to nodes over SSH. Modules are temporarily stored in the nodes and communicate with the controlling machine through a JSON protocol over the standard output. When Ansible is not managing nodes, it does not consume resources because no daemons or programs are executing for Ansible in the background.
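The module mechanism can be sketched in a few lines of Python. This is a toy stand-in to illustrate the idea, not a real Ansible module:

```python
import json

# Toy stand-in for an Ansible module: a small program copied to the node,
# executed once, and whose JSON result on stdout is read by the controller.
def run_module():
    # A real module would perform some action here (install a package, etc.)
    # and report whether it changed anything.
    result = {"changed": False, "msg": "pong"}
    return json.dumps(result)

if __name__ == "__main__":
    print(run_module())
```

Once the result is read, the module file is removed from the node, which is why Ansible leaves nothing running in the background.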

What is Docker Swarm?

Docker Swarm is a native Docker application commonly used to manage clusters, which allows orchestrating containers between different nodes easily.


The first thing needed when setting up Ansible is creating the inventory file, where we will declare all the hosts on which we are going to install the services.

For this tutorial we are going to do the installation on AWS, using two EC2 instances, each running CentOS 7.

First, we are going to create the directory to store the project.

# mkdir ansibleSwarm

Now create the ansible and inventory directories.

# mkdir -p ansibleSwarm/ansible/inventory

Inside the inventory directory, we are going to create the “hosts” file where we will write the connection data of our servers.

# Hosts
manager ansible_host= ansible_ssh_user=centos ansible_ssh_private_key_file=/home/rodrigo/Development/innuy/AnsibleSwarm/ansible/private/DockerSwarm.pem
worker1 ansible_host= ansible_ssh_user=centos ansible_ssh_private_key_file=/home/rodrigo/Development/innuy/AnsibleSwarm/ansible/private/DockerSwarm.pem

[swarm-managers]
manager

[swarm-workers]
worker1

[swarm:children]
swarm-managers
swarm-workers

Here is what each of these configuration lines means:
  • manager - logical name of the server
  • ansible_host - IP or DNS name of the server
  • ansible_ssh_user - Username which ansible is going to use to connect to the server using SSH.
  • ansible_ssh_private_key_file - Path of the ssh key of the AWS server.

After those, in the next lines we are creating the logical groups [swarm-managers] and [swarm-workers], which contain the servers, and then the group [swarm] which contains the groups [swarm-managers] and [swarm-workers].
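To make the group mechanics concrete, here is a small plain-Python illustration (using the stdlib configparser, not Ansible's own inventory parser) of how [swarm:children] makes swarm the union of its two child groups:

```python
import configparser

# The group portion of the inventory above, parsed as INI just to show
# which hosts the 'swarm' parent group ends up targeting.
inventory = """
[swarm-managers]
manager

[swarm-workers]
worker1

[swarm:children]
swarm-managers
swarm-workers
"""

cp = configparser.ConfigParser(allow_no_value=True)
cp.read_string(inventory)

# The children section lists group names; expanding them yields the hosts.
children = list(cp["swarm:children"])
hosts = [host for group in children for host in cp[group]]
print(hosts)  # ['manager', 'worker1']
```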

Role for the swarm group (all the hosts)

1- Common role:

Create the roles folder to store all the playbooks.

# mkdir -p ansibleSwarm/ansible/roles

Now let's create the common/tasks directory, where we are going to have the main.yml file with all the necessary installation steps.

# mkdir -p ansibleSwarm/ansible/roles/common/tasks

This role installs the packages needed on all the nodes (managers and workers).

Create the main.yml file with your favorite editor inside ansibleSwarm/ansible/roles/common/tasks, with the following content:

---
- name: Clean yum
  become: true
  shell: yum clean all
  tags: [common]

- name: Upgrade all packages
  become: true
  yum: name=* state=latest
  tags: [common]

- name: Disable selinux
  become: true
  selinux: state=disabled
  tags: [common]

- name: Install unzip
  become: true
  yum: name=unzip
  tags: [common]

- name: Enable the EPEL repository definitions
  become: true
  yum: pkg=epel-release
  tags: [common]

- name: Install python setup tools
  become: true
  yum: name=python-setuptools
  tags: [common]

- name: Install Pypi
  become: true
  easy_install: name=pip
  tags: [common]

- name: Install git
  become: true
  yum:
    name: git
    state: present
  tags: [common]

2- Create the docker role

Let's create the docker role:

# mkdir -p ansibleSwarm/ansible/roles/docker/tasks

This Ansible role installs and configures the latest Community Edition version of Docker.

Let’s create the main.yml with our favorite editor.

---
- name: Install docker-py
  become: true
  pip:
    name: docker-py
    extra_args: --ignore-installed
  tags: [docker]

- name: Update docker repo
  become: true
  lineinfile:
    dest: /etc/yum.repos.d/docker.repo
    create: yes
    line: "{{ item }}"
  with_items:
    - "[dockerrepo]"
    - "name=Docker Repository"
    - "baseurl="
    - "enabled=1"
    - "gpgcheck=1"
    - "gpgkey="
  tags: [docker]

- name: Install Docker
  become: true
  yum: pkg=docker-engine state=present
  tags: [docker]

- name: Enable sysv dockerd service
  become: true
  service:
    name: docker.service
    enabled: yes
  tags: [docker]

- name: Start service
  become: true
  service:
    name: docker.service
    state: started
  tags: [docker]

Specific node manager packages

Let’s create the role docker manager.

# mkdir -p ansibleSwarm/ansible/roles/dockermanager/tasks

This role initializes the Docker Swarm cluster on the manager node, which in turn provides us with the token the worker nodes need to join.

Let’s create the main.yml inside this role.

---
- name: Disabling swarm
  become: yes
  command: docker swarm leave -f
  ignore_errors: yes
  tags: [dockermanager]

- name: Initialize swarm cluster
  become: yes
  command: docker swarm init --advertise-addr={{ swarm_iface | default('eth0') }}:2377
  register: bootstrap_first_node
  tags: [dockermanager]

Specific node worker packages

Let’s create the dockerworker role.
# mkdir -p ansibleSwarm/ansible/roles/dockerworker/tasks

This role is going to use the token of the manager to connect and create the Docker Swarm cluster.

---
- name: Leaving older swarm
  become: yes
  command: docker swarm leave -f
  ignore_errors: yes
  tags: [swarm]

- name: Join nodes to manager
  become: yes
  command: docker swarm join --token={{ tokennode }} {{ hostmanager }}
  register: swarm_join_result
  failed_when: swarm_join_result.rc != 0 and 'This node is already part of a swarm' not in swarm_join_result.stderr
  tags: [swarm]

Now let's create the variables directory for this role.

# mkdir -p ansibleSwarm/ansible/roles/dockerworker/vars

Inside the vars directory, we will be creating the main.yml with the following values to obtain the token of the manager. By doing this we can then use the tokennode and hostmanager variables in the tasks folder.

# vars file for dockerworker
tokennode: "{{ hostvars.manager.bootstrap_first_node.stdout.split('\n')[5].split(' ')[5] }}"
hostmanager: "{{ hostvars.manager.bootstrap_first_node.stdout.split('\n')[6] }}"
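To see what those split() indexes are grabbing, here is a standalone illustration. The sample output below is an assumption based on the multi-line layout that docker swarm init printed in the Docker 1.12 era; if your Docker version prints a different layout, the indexes will need adjusting:

```python
# Hypothetical 'docker swarm init' output, in the line layout the vars expect.
sample_stdout = (
    "Swarm initialized: current node (abc123) is now a manager.\n"
    "\n"
    "To add a worker to this swarm, run the following command:\n"
    "\n"
    "    docker swarm join \\\n"
    "    --token SWMTKN-1-exampletoken \\\n"
    "    172.31.0.10:2377"
)

lines = sample_stdout.split('\n')
# Same slicing as the Ansible vars: line 5, sixth space-separated field.
tokennode = lines[5].split(' ')[5]
hostmanager = lines[6]
```

This slicing is fragile by construction; a more robust alternative on the manager node would be `docker swarm join-token -q worker`, which prints the token alone.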

Launching the playbook

Once we have all the roles, we only need to create a playbook, swarm.yml, inside the ansible directory that runs all of them.

---
- name: Install common packages
  hosts: swarm
  roles:
    - common
    - docker

- name: Configure Manager
  hosts: swarm-managers
  roles:
    - dockermanager

- name: Configure Workers
  hosts: swarm-workers
  roles:
    - dockerworker

The final step is to execute all the roles we created before.

# ansible-playbook -i inventory/hosts swarm.yml

After executing the command the output should look similar to this.

root@ip-172-31-52-216:/home/rodrigo/Development/innuy/AnsibleSwarm/ansible# ansible-playbook -i inventory/hosts swarm.yml

PLAY [Install common packages] *************************************************

TASK [setup] *******************************************************************
ok: [worker1]
ok: [manager]

TASK [common : Clean yum] ******************************************************
changed: [worker1]
 [WARNING]: Consider using yum module rather than running yum
changed: [manager]

TASK [common : upgrade all packages] *******************************************
ok: [worker1]
ok: [manager]

TASK [common : Disable selinux] ************************************************
changed: [manager]
changed: [worker1]

TASK [common : install unzip] **************************************************
ok: [worker1]
ok: [manager]

TASK [common : Enable the EPEL repository definitions.] ************************
ok: [worker1]
ok: [manager]

TASK [common : Install python setup tools] *************************************
ok: [worker1]
ok: [manager]

TASK [common : Install Pypi] ***************************************************
ok: [worker1]
ok: [manager]

TASK [common : install git] ****************************************************
ok: [worker1]
ok: [manager]

TASK [docker : Install docker-py] **********************************************
changed: [worker1]
changed: [manager]

TASK [docker : Update docker repo] *********************************************
ok: [worker1] => (item=[dockerrepo])
ok: [manager] => (item=[dockerrepo])
ok: [worker1] => (item=name=Docker Repository)
ok: [manager] => (item=name=Docker Repository)
ok: [worker1] => (item=baseurl=)
ok: [manager] => (item=baseurl=)
ok: [manager] => (item=enabled=1)
ok: [worker1] => (item=enabled=1)
ok: [manager] => (item=gpgcheck=1)
ok: [worker1] => (item=gpgcheck=1)
ok: [manager] => (item=gpgkey=)
ok: [worker1] => (item=gpgkey=)

TASK [docker : Install Docker] *************************************************
ok: [manager]
ok: [worker1]

TASK [docker : enable sysv dockerd service] ************************************
ok: [worker1]
ok: [manager]

TASK [docker : Start service] **************************************************
ok: [manager]
ok: [worker1]

PLAY [Configure Manager] *******************************************************

TASK [setup] *******************************************************************
ok: [manager]

TASK [dockermanager : Disabling swarm] *****************************************
changed: [manager]

TASK [dockermanager : initialize swarm cluster] ********************************
changed: [manager]

PLAY [Configure Workers] *******************************************************

TASK [setup] *******************************************************************
ok: [worker1]

TASK [dockerworker : leaving older swarm] **************************************
changed: [worker1]

TASK [dockerworker : join nodes to manager] ************************************
changed: [worker1]

PLAY RECAP *********************************************************************
manager                    : ok=17   changed=5    unreachable=0    failed=0
worker1                    : ok=17   changed=5    unreachable=0    failed=0

Hope this tutorial has helped you in combining Ansible and Docker Swarm for your needs!