Showing posts with label Python. Show all posts

Thursday, July 20, 2017

Simple SMS sending for your Django App using Twilio




Twilio allows software developers to programmatically make and receive phone calls and send and receive text messages using its web service APIs.
In this guide we will set up our Django project to send text messages using Twilio.

We will assume that the Django project is already created.

Let's get to work!

1. Create a Twilio account at the Twilio Official page


2. Get a phone number to send messages. To do that, log in to your Twilio account and get one from the Console section (you can get the first one for free).


3. Install the twilio-python library in our Django project's environment by typing the following command in the terminal:

$ pip install twilio
Almost there...

The last thing we have to do is include the code in the view that will be in charge of sending the message:

from twilio.rest import Client

# Placeholder credentials; the real values are in your Twilio Console
account = "ACXXXXXXXXXXXXXXXXX"
token = "YYYYYYYYYYYYYYYYYY"
client = Client(account, token)

message = client.messages.create(to="+12316851234", from_="+15555555555",
                                 body="Hello there!")
Things to take into account:


  • Your account SID and token are shown in your account’s Console section. You should save those values in environment variables so that sensitive information is not exposed in your code.

  • You may have to give permissions to send SMS outside the US. Check if the country where you are trying to send a text message is enabled on the SMS Geographic permissions section.

  • The from_ field must be filled in with the number you got for your Twilio account.
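
For example, here is a minimal sketch of reading the credentials from the environment instead of hard-coding them (the variable names are assumptions, pick your own):

```python
import os

# Hypothetical variable names; export the real values in your shell or
# deployment configuration instead of hard-coding them in the source.
os.environ.setdefault("TWILIO_ACCOUNT_SID", "ACXXXXXXXXXXXXXXXXX")
os.environ.setdefault("TWILIO_AUTH_TOKEN", "YYYYYYYYYYYYYYYYYY")

account = os.environ["TWILIO_ACCOUNT_SID"]
token = os.environ["TWILIO_AUTH_TOKEN"]
# Client(account, token) would then work exactly as in the snippet above.
```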

And that’s it!!
Just call that view from a URL and there you go. Start sending messages!




    Friday, July 7, 2017

    Dockerize your Django Web Application


    The Docker platform is becoming more and more popular thanks to its ability to create, deploy, and run applications easily by using containers.

    In this guide we will talk about how to dockerize our Django web application using nginx, Gunicorn and PostgreSQL.
    If you are not deeply familiarized with docker, check out this Docker Basic Guide.

    Prerequisites

    • Python 2.7 or 3.x
    • Install docker-compose (we can do that by running pip install docker-compose).

    Create an empty directory

    In this directory (we will name it "myproject") we will put all the required files to set up our docker configuration properly.

    Create a subdirectory named web

    This folder will contain our Django project; let's name it mydjangoproject.

    Place the requirements.txt file inside the web folder

    If you don't have it already, create it. It should be placed at the same level as mydjangoproject and contain all your project dependencies, including at least these three:

    Django==1.11.2
    gunicorn==19.7.1
    psycopg2==2.6
    Also inside "web", make a file named Dockerfile (Docker expects exactly that name) and add the following lines:
    FROM alpine
    
    # Install required packages
    RUN apk update
    RUN apk upgrade
    RUN apk add --update python python-dev py-pip postgresql-client postgresql-dev build-base gettext
    
    # Initialize
    RUN mkdir -p /data/web
    COPY . /data/web/
    WORKDIR /data/web/
    
    # Setup
    RUN pip install --upgrade pip
    RUN pip install -r requirements.txt
    
    # Prepare
    RUN mkdir -p mydjangoproject/static/admin
    By adding these lines we set up our container: we install the necessary packages such as pip and the PostgreSQL client libraries, copy our project into the container and install the required dependencies.

    Create a file called run_web.sh and place it inside the "web" folder

    This file contains a script that will be executed when the container starts. We will add the following lines to it:

    #!/bin/sh
    
    python manage.py migrate                  # Apply database migrations
    python manage.py collectstatic --noinput  # Collect static files
    
    # Start Gunicorn
    exec gunicorn mydjangoproject.wsgi:application \
      --bind 0.0.0.0:8008 \
      "$@"

    Here we apply the migrations, collect the static files and start our Gunicorn server on port 8008, bound to all interfaces. Don’t forget to add the appropriate host names to ALLOWED_HOSTS in your Django settings, and make the script executable (chmod +x run_web.sh).
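    As a minimal sketch, the settings entry could look like this (the host names are assumptions that depend on how you reach the container; adjust them to your deployment):

```python
# settings.py (sketch) -- host names are examples, not a definitive list
ALLOWED_HOSTS = ['localhost', '127.0.0.1', 'web']
```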

    Go back to myproject directory and create a file named docker-compose.yml

    This file is the one that will contain the configuration of all the services and how they interact with each other. Let’s add the following code:

    version: '2'
    services:
      # Postgres database
      postgres:
        restart: always
        image: postgres:latest
        volumes:
          - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
        env_file: ./env
        expose:
          - "5432"
      # Django web service
      web:
        build: ./web
        ports:
          - "3000:8008"
        env_file: ./env
        links:
          - postgres
        depends_on:
          - postgres
        volumes:
          - ./web/mydjangoproject/static:/static
        working_dir: /data/web/mydjangoproject/
        command: /data/web/run_web.sh
      nginx:
        restart: always
        build: ./nginx/
        ports:
        - "8001:8001"
        volumes_from:
        - web
        links:
        - web
      We can see here three services: “postgres”, “web” and “nginx”. “nginx” is linked to “web” and “web” to “postgres” so that they can reach each other. The “postgres” service uses the latest version of the postgres image from Docker Hub, while both web and nginx are built from their own Dockerfiles. Finally, the web service runs the script written in run_web.sh when the container starts.

      All the available options can be found in the official Docker Compose documentation.

      Nginx configuration:

      To configure nginx we are going to create a directory inside myproject called nginx and create a configuration file called default.conf inside it. Then we will write the following lines:

      server {
         listen 8001;
         charset utf-8;
         location /static/ {
             root /;
         }
         location / {
             proxy_pass http://web:8008;
             proxy_set_header Host $host;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         }
      }
      In this configuration we specify the path of our static files (it has to match the one specified in our settings.py file) and set up a reverse proxy that forwards requests from nginx to our Gunicorn server. If you can’t see your static folder with your static files, run the command 'python manage.py collectstatic' and it will automatically collect the statics for you at the root of the project.
      Also, as the nginx service is built from its own Dockerfile rather than used directly from a Docker Hub image, we will create a file named Dockerfile inside our nginx directory and write the following code.
      FROM nginx:alpine
      RUN apk update
      RUN apk upgrade
      RUN apk add --update curl
      ADD default.conf /etc/nginx/conf.d/default.conf

      Finally, set up the database configuration

      This configuration will be done on the settings file from our Django project. The database section should look like this:
      
      
      DATABASES = {  
          'default': {
             'ENGINE': 'django.db.backends.postgresql_psycopg2',
             'NAME': 'postgres',
             'USER': 'postgres',
             'HOST': 'postgres',
             'PORT': 5432,
          }
      }

      We have already linked the "web" and "postgres" services in the docker-compose.yml file, which means the "web" service can establish a connection with "postgres". Note that the database settings have to use the host name "postgres" to reach the container.
      The last step is to create a file named 'env' inside 'myproject' folder with the database configuration. This file is referenced in our “env_file” variable on the docker-compose file.
      We should write the following lines on it:

      
      
      POSTGRES_DB=socialNetwork
      POSTGRES_USER=root
      DB_SERVICE=postgres
      DB_PORT=5432
      POSTGRES_PASSWORD=password
      Environment variables can change depending on the database configuration. All the options can be found in the environment variables section of the postgres image on Docker Hub.
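
      If you prefer not to hard-code the database settings, here is a hypothetical sketch of the DATABASES section reading the same variables defined in the 'env' file (the defaults are assumptions):

```python
import os

# Sketch: variable names match the 'env' file above; defaults are assumptions
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ.get('POSTGRES_DB', 'postgres'),
        'USER': os.environ.get('POSTGRES_USER', 'postgres'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', ''),
        'HOST': os.environ.get('DB_SERVICE', 'postgres'),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}
```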

      It’s done!

      Everything should be set up now. We only need to go into the 'myproject' folder (where docker-compose.yml is located) and run 'docker-compose build' to build the containers and 'docker-compose up' to start them.

      Then, if you go to localhost:8001 you should see the Django project working.


      Thursday, July 6, 2017

      Django Isolated Transactions

      Introduction

      At some point in your life, you're going to need to use serializable transactions, it happens to everyone. Keep reading to learn how to do it using Django's awesome ORM.




      Before starting let's make sure we all understand the following concepts:

      Transactions - Database Engine

      (*1) A transaction is a sequence of operations performed as a single logical unit of work. A logical unit of work must exhibit four properties, called the atomicity, consistency, isolation, and durability (ACID) properties, to qualify as a transaction.

      Atomicity

      A transaction must be an atomic unit of work; either all of its data modifications are performed, or none of them is performed.

      Consistency

      When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction's modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.

      Isolation

      Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either recognizes data in the state it was in before another concurrent transaction modified it, or it recognizes the data after the second transaction has completed, but it does not recognize an intermediate state. This is referred to as serializability because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.

      Durability

      After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.

      TL;DR: We intend to guarantee four things; that all the changes in the transactions are made or none of them; that all the database structures remain consistent with its definition; that we define how the database will handle concurrent transactions over the same data; once the changes are committed, they'll stay there despite a system failure.

      Let's now define a common scenario where we could use this.

      Scenario

      Imagine that you work at Innuy and you were told by our awesome CTO that we need to make some changes to the website. Apparently, the popularity of Innuy is growing so much that we'll start selling shirts with the company logo. For that, we'll need to add a view on our website that allows users to purchase shirts and the necessary backend for it to work.

      As we are focusing on database transactions, I'll only show backend code, I'll let you imagine the front end.

      Analysis

      We'll be selling a finite item, so we know we have a limited stock. Please think about how the flow (simplified) of the app would be:

      1 - User accesses the store
      2 - User selects shirt and size
      3 - User confirms the purchase
      4 - User adds payment information
      5 - Order is created - In this step we'll reduce the available stock of the selected shirt in the selected size by 1, as we know it has already been sold.

      As the attentive reader might have realized, we just created ourselves a problem there. What would happen if two different clients make a purchase at exactly the same time when the stock of the shirt is only one?

      The answer is: if you weren't careful enough with the implementation, you will probably be sending an apology email and a refund.

      If you didn't spot the issue, trace the flow for both clients: each one reads a stock of one, each creates an order, and at the end the stock for the shirt is minus one. This happens if we don't have any control over different database connections reading or modifying the same data.



      As we are awesome developers we'll build a solution that avoids this behavior.

      Our tools

      Django provides, out of the box, a few tools to solve this situation:

      transaction.atomic() (*2)

      Grants us the ability to write code blocks where database operations are atomic. This means that Django guarantees us that one of two things will happen: either all operations are executed, or none of them are. One important thing to notice about transaction.atomic is that auto-commit is disabled, because the commit will be called after finishing the whole block.

      select_for_update() (*3)

      Returns a queryset that will lock rows until the end of the transaction, generating a SELECT ... FOR UPDATE SQL statement on supported databases. Must be called from within a transaction.atomic block. Check out the docs for more information about the extra options this method supports.

      An important point when using transactions in Django: when developing the test cases for our app, extend TransactionTestCase instead of the usual TestCase. (*4)

      MySQL

      For this example, we'll be using MySQL so let's look into how transactions are handled.

      First of all, we must select the transaction isolation level. As Django does not provide an interface for setting the isolation level of a specific transaction, we'll have to set it up on the database (if this is not an option for you, you can still achieve this using raw SQL, but that's not the goal of this post).

      MySQL provides a few options to set the database isolation level (how the database will behave when a SELECT ... FOR UPDATE statement is executed). In this case, we'll use the isolation level SERIALIZABLE, which will block read and write operations on the locked rows after a transaction has issued the SELECT ... FOR UPDATE statement. (*5)(*6)
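
      One common way to do this from the Django side (a sketch, not the only option; the database name and credentials are placeholders) is to run a SET SESSION statement on every new connection through the MySQL backend's OPTIONS:

```python
# Sketch: force SERIALIZABLE for every new connection (placeholders throughout)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'shop',
        'USER': 'shop_user',
        'PASSWORD': 'change-me',
        'OPTIONS': {
            # init_command is executed right after each connection is opened
            'init_command': 'SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE',
        },
    }
}
```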

      Implementation

      This representation of the models will make it easy for us to represent the defined scenario:

      
      from django.db import models
      
      class Item(models.Model):
          """
          Ultra simple item model just intended for demo purposes
          """
          CATEGORIES = (
              ('books', 'Books'),
              ('apparel', 'Apparel'),
              ('electronics', 'Electronics')
          )
          image = models.ImageField()
          description = models.TextField()
          stock = models.IntegerField(default=0)
          price = models.PositiveIntegerField(default=0)
          category = models.CharField(max_length=12, default='apparel', choices=CATEGORIES)
      
      
      class ItemOrder(models.Model):
          """
          Ultra simple order model just intended for demo purposes
          """
          ORDER_STATUS = (
              ('under_review', 'Under review'),
              ('packaged', 'Packaged'),
              ('shipped', 'Shipped'),
              ('delivered', 'Delivered')
          )
          item = models.ForeignKey(Item, on_delete=models.CASCADE)
          amount = models.PositiveIntegerField(default=1)
          creation_date = models.DateTimeField(auto_now_add=True)
          status = models.CharField(max_length=12, default='under_review', choices=ORDER_STATUS)
      
      
      We have an Item model, representing any product we sell. The stock indicates how many of the item we have physically, while the other fields are only for orientation purposes in this example. The ItemOrder will represent a confirmed purchase of an Item. As you can see it contains a status field that will allow Innuy's staff to know what to do with the order, a reference to the sold item, and other common fields (which we don't really need for the example).

      Now that our models are ready, let's create a method for creating a new purchase order. We'll add it as a static method to the ItemOrder model.

      
      import logging

      from django.db import transaction

      logger = logging.getLogger(__name__)

      @staticmethod
      def create_order(item_id, how_many):
          """
          Creates an order
          :param item_id: The id of the item to purchase
          :param how_many: The amount of items
          :return: The created order or None
          """
          order = None
          try:
              # start atomic block
              with transaction.atomic():
                  # select_for_update will lock the item so we can work with it
                  item = Item.objects.select_for_update().get(id=item_id)
                  if item.stock - how_many >= 0:
                      order = ItemOrder.objects.create(item=item, amount=how_many)
                      item.stock -= how_many
                      item.save()
                      logger.info('Order created for item %s, amount %s.' % (item_id, how_many))
          except Item.DoesNotExist:
              logger.exception('Item with id %s does not exist' % item_id)
          except Exception as e:
              logger.exception('Unexpected error: %s' % e)
          return order
      
      
      As you can see, the create_order method does the trick. It locks the Item until we perform every necessary operation. When using transaction.atomic it's important to remember that the try/except block goes outside the atomic block: if an exception is raised inside the atomic block, the transaction is rolled back and the exception is re-raised, to be caught by our outer try/except.

      Now, let's create a view to serve this service:

      
      import json
      from django.http import JsonResponse
      from .models import ItemOrder # provided the models file is in the same package.
      
      def create_order(request):
          """
          Creates an order after the payment has been processed
          :param request: The django request object
          """
          data = json.loads(request.body)
          item_id = data.get('item_id')
          how_many = data.get('how_many')
          order = ItemOrder.create_order(item_id, how_many)
          result = {'order_status': 'Under review'} if order else {'order_status': 'Failed'}
          return JsonResponse(result)
      Now we are all set up! We have developed the necessary models and logic to provide an order creation service that guarantees us the advantages of ACID transactions.

      You have acquired a new skill, use it wisely :)

      References


      *1 - https://technet.microsoft.com/en-us/library/ms190612(v=sql.105).aspx

      *2 - https://docs.djangoproject.com/en/1.11/topics/db/transactions/#s-controlling-transactions-explicitly

      *3 - https://docs.djangoproject.com/en/1.11/ref/models/querysets/#s-select-for-update

      *4 - https://docs.djangoproject.com/en/1.11/topics/db/transactions/#s-use-in-tests

      *5 - Isolation level MySQL: https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html

      *6 - Set isolation level MySQL: https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html

      Monday, July 3, 2017

      Python MongoDB

      What is Mongo?

      Mongo is an open-source non-relational database, written in C++. It is an ideal tool for backend services that need to store data quickly with little processing, which makes it a great fit for mobile and social network backends.

      PyMongo

      PyMongo is the official Python driver for managing Mongo databases. It is really easy to learn and straightforward to use. Let’s start with the basics; the first thing you need is a database and a collection.
      For all of the examples, the default “test” database will be used, with a collection called “People”. If you do not know how to create a collection, you can check the Mongo documentation.

      Connecting to the db client

      After that, you need to install PyMongo on your system. You can do it simply by using pip or adding it to your requirements.txt. In this case, as it is an example, it will be installed via pip with the following command:
      python -m pip install pymongo
      Then, in your project, you have to create a client to connect to the database. In order to create it, you must know the IP and port of the Mongo database. By default the mongod port is 27017 and, in this case, mongod is installed on the same computer as the Python project, so the example uses localhost as the IP address. Here is a really simple example script:
      from pymongo import MongoClient

      HOST = "localhost"
      PORT = "27017"
      
      db = MongoClient("mongodb://" + HOST + ":" + PORT).test
      In this script, a global “db” variable is defined as the test database on the local computer. Also, if your database requires authentication, you can include the credentials in the URI, between the scheme and the host: mongodb://user:password@host:port
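      As a quick sketch of how the credentials fit into the connection URI (the user and password here are made up):

```python
# Hypothetical credentials -- the user:password pair goes after the scheme,
# before the host, separated by '@'
user, password = "appuser", "s3cret"
host, port = "localhost", "27017"
uri = "mongodb://%s:%s@%s:%s" % (user, password, host, port)
```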

      Populating the database

      After the client is defined, we will need some data in the database to start testing the API. You can insert data using the insert_one method of a Mongo collection. For example, to add a new person to the People collection you can run the following command:
      db.People.insert_one({
          "name": {
              "first_name": "Alice",
              "last_name": "Smith"
          },
          "address": {
              "street": "5th Avenue",
              "building": "269",
              "coord": {"type": "Point", "coordinates": [-56.137, -34.901]}
          }
      })
      Also, if you want to insert multiple documents at once, a better and more efficient way is to use bulk operations. You can initialize bulk objects using the collection methods initialize_unordered_bulk_op or initialize_ordered_bulk_op. On these objects, you can insert, modify or delete data without touching the database, and then execute them to apply all those changes at once. Here you can find an example:
      bulk = db.People.initialize_unordered_bulk_op()
      
      bulk.insert({
          # Document to be inserted
      })
      
      bulk.execute()

      Creating queries

      In PyMongo, you can define queries the same way as you do in Mongo. You can use the find method from a collection to create simple queries or the aggregate method for more complex ones. Here you have a query that returns the documents whose last names are “Smith”:
      cursor = db.People.find({"name.last_name": "Smith"})
      Then you can iterate over the returned documents by using:
      for document in cursor:
          # Manipulation of the documents
          pass
      The documents are returned as Python dictionaries, so you can retrieve their data using get or the bracket operator.
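      For instance, given a document shaped like the one inserted earlier, both access styles work on the parsed dictionary (no database connection needed for this sketch):

```python
# A document shaped like the one inserted above
document = {
    "name": {"first_name": "Alice", "last_name": "Smith"},
    "address": {"street": "5th Avenue", "building": "269"},
}

first_name = document["name"]["first_name"]         # bracket access
street = document.get("address", {}).get("street")  # get access, with defaults
```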

      Updating and deleting data

      In order to update, delete or replace data you should use one of the five collection methods for database modification. Those methods are the following:
      1. update_one
      2. update_many
      3. replace_one
      4. delete_one
      5. delete_many
      All of them receive as their first parameter a query for the elements that will be modified. This query can use the same operators as the find method. As the method names suggest, the ones ending in _one modify the first document found for that query, and the ones ending in _many change all of them.
      Then, the update and replace methods receive as their second parameter the object to update or replace with. The update methods require an update operator such as $set, while replace_one receives a plain document.
      Here you can see an example for the local database:
      result = db.People.update_many({"name.last_name": "Smith"},
                                     {
                                         "$set": {
                                             "name.last_name": "Johnson"
                                         }
                                     })
      As you can see here, every person whose last name is Smith will be updated to have their last name be Johnson instead.
      Also, the update and replace methods can receive an optional parameter named upsert. When this value is true, if no document was found by the query, the object is inserted instead. By default this value is false.

      Creating and using indexes

      Indexes are used to improve the speed of queries, or to enable special kinds of queries, like geospatial ones. Creating indexes is really simple. You have to use the create_index collection method, which receives a list of all the indexes you want to create.
      Each index is specified as a (key, direction) tuple, where the key is the name of the attribute and the direction is the kind of index to create. The direction can be any of the following:
      1. ASCENDING
      2. DESCENDING
      3. GEO2D (“2d” - 2-dimensional geospatial index)
      4. GEOSPHERE (“2dsphere” - spherical geospatial index)
      5. HASHED
      6. TEXT
      Here you have an example that creates a spherical index and then makes a geospatial query:
      from pymongo import GEOSPHERE

      db.People.create_index([
          ("address.coord", GEOSPHERE)
      ])
      
      longitude = -56.134
      latitude = -34.9
      
      distance = 304.8  # 1000 ft in meters
      
      cursor = db.People.aggregate([{
          "$geoNear": {
              "near": {"type": "Point", "coordinates": [longitude, latitude]},
              "spherical": True,
              "distanceField": "distance",
              "maxDistance": distance
          }
      }])
      The first command creates the geospatial index on the address.coord attribute; the aggregation query then finds the people within 1000 ft of the given position.
      Congratulations! Now you know how to manipulate Mongo databases with Python. If you want to see a sample project you can check ours here.