Wine cellar in Armenia, 2014.

Django and Celery - demo application, part III: deployment.

Introduction

The time has come: the application we created and developed is ready for deployment. In this post, we are going to show a quick way of moving it to “production” using:

• RabbitMQ as a message broker,
• Gunicorn for running the application,
• Supervisor for monitoring both the Django and Celery parts, and
• Nginx as our proxy server.

The deployment strategy is largely inspired by this work. However, it is slightly more tuned towards the author’s own needs for this project.

Creating space

Linux permissions

The space for our application will be a designated Linux user with non-sudo privileges, sharing permissions through the devapps group.

This series of commands sets up the user space for the project with all required permissions.
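A sketch of that setup, assuming the devapps group and hello_celery user names used throughout this post (the exact flags may differ on your system):

```shell
# Create a shared group for deployed applications
sudo groupadd --system devapps

# Create a non-sudo user belonging to that group
sudo useradd --system --gid devapps --shell /bin/bash \
    --home /devapps/hello_celery hello_celery

# Create the home directory and hand ownership to the new user
sudo mkdir -p /devapps/hello_celery
sudo chown -R hello_celery:devapps /devapps/hello_celery
```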

Virtual environment

As mentioned at the end of the earlier posts, we use a virtual environment for keeping track of all package dependencies of each project. Once again, we need to install the virtual environment - this time on the server. For that, we will switch to the hello_celery user.
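Assuming virtualenv is available on the server, this could look as follows (the venv directory name is an arbitrary choice):

```shell
# Switch to the application user
sudo su - hello_celery

# Create and activate a virtual environment for the project
virtualenv venv            # or: python3 -m venv venv
source venv/bin/activate
```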

Porting the project

Last time, we committed our project to GitHub. Having prepared the space for it now, we can get it by cloning the repository, and use the requirements.txt file to recreate our environment.
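With the virtual environment active, something along these lines (the repository URL is a placeholder):

```shell
cd /devapps/hello_celery
git clone https://github.com/<username>/hello_celery.git

# Recreate the package environment inside the virtualenv
cd hello_celery
pip install -r requirements.txt
```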

The project is now stored under /devapps/hello_celery/hello_celery.

Updating settings

There are two things we must not forget when moving the project towards production:

• We will use a different IP address or domain name than 127.0.0.1.
• We do not want DEBUG = True, as this will expose sensitive information about the application to the outside world.

For this reason, settings.py needs to be updated with the proper domain name (or IP address), and DEBUG set to False.
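The relevant fragment of settings.py could look like this (the domain is a placeholder - in Django, the allowed domain names or IPs go into ALLOWED_HOSTS):

```python
# hello_celery/settings.py -- production values (example.com is a placeholder)
DEBUG = False

# Django refuses requests for hosts not listed here when DEBUG is off
ALLOWED_HOSTS = ['example.com']
```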

RabbitMQ

RabbitMQ may already be present on your Ubuntu server. Either way, it makes sense to install or update it to the newest version.
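On Ubuntu, that boils down to the standard package manager commands:

```shell
sudo apt-get update
sudo apt-get install rabbitmq-server
```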

Sometimes, RabbitMQ may refuse to work - for example, if the server has been operating under a different IP address before. If this could be the case, inspect the /etc/hosts file for consistency, and execute sudo /etc/init.d/rabbitmq-server restart.

Gunicorn

The project is now ready to be served by proper tools. As the first tool, we will configure Gunicorn. Assuming we are still logged in as the hello_celery user and operate under the virtual environment, we can install Gunicorn through pip.
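With the virtualenv active, this is a one-liner:

```shell
pip install gunicorn
```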

To test it, we should bind it through the wsgi.py script in the following way.
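Assuming the default Django project layout (wsgi.py inside the hello_celery package), the test run could look as follows:

```shell
cd /devapps/hello_celery/hello_celery
gunicorn hello_celery.wsgi:application --bind 192.168.xxx.yyy:pppp
```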

Here, 192.168.xxx.yyy:pppp refers to a local IP address and a designated port we can use in our LAN to test the application. At this stage, the Celery-related part will not function, though.

Daemonizing Gunicorn

To daemonize Gunicorn, we can set up a script (like in here) and locate it under /devapps/hello_celery/bin/gunicorn_start.
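A minimal sketch of such a script, following the pattern from the post linked above (the bind address, port and worker count are assumptions to be adjusted):

```shell
#!/bin/bash
# /devapps/hello_celery/bin/gunicorn_start -- a sketch; adjust paths and numbers

NAME="hello_celery"                            # name of the application
DJANGODIR=/devapps/hello_celery/hello_celery   # Django project directory
USER=hello_celery                              # the user to run as
GROUP=devapps                                  # the group to run as
NUM_WORKERS=3                                  # number of Gunicorn workers
DJANGO_WSGI_MODULE=hello_celery.wsgi           # WSGI module name

echo "Starting $NAME as `whoami`"

# Activate the virtual environment
cd $DJANGODIR
source /devapps/hello_celery/venv/bin/activate

# Start Gunicorn; Supervisor will take care of keeping it alive,
# so we do not daemonize here
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
  --name $NAME \
  --workers $NUM_WORKERS \
  --user=$USER --group=$GROUP \
  --bind=127.0.0.1:8000 \
  --log-level=info \
  --log-file=-
```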

At this point, the script should be able to run the application when executed as the hello_celery user. Note that this file needs to be given execute permission (e.g. chmod u+x).

Supervisor

Remember, earlier we had to use two terminal windows to run the project: one for the Django application and the other for Celery. Here, we are going to use Supervisor to monitor both of them, and let them operate in the background.

Configuration

First, we must install Supervisor if not already installed on our machine.
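On Ubuntu, that is again a package manager call:

```shell
sudo apt-get install supervisor
```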

Once installed, it is configured through a set of conf files that are specific to the processes being run. Both configuration files should reside under /etc/supervisor/conf.d/.
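A sketch of the two files, one per process (the Celery application name passed to -A is an assumption; the log paths match the ones used later in this post):

```ini
; /etc/supervisor/conf.d/hello_celery.conf -- the Django/Gunicorn part
[program:hello_celery]
command = /devapps/hello_celery/bin/gunicorn_start
user = hello_celery
stdout_logfile = /devapps/hello_celery/logs/hello_celery.log
redirect_stderr = true
autostart = true
autorestart = true
```

```ini
; /etc/supervisor/conf.d/hello_celery-celery.conf -- the Celery worker
[program:hello_celery-celery]
command = /devapps/hello_celery/venv/bin/celery -A hello_celery worker --loglevel=INFO
directory = /devapps/hello_celery/hello_celery
user = hello_celery
stdout_logfile = /devapps/hello_celery/logs/hello_celery-celery.log
redirect_stderr = true
autostart = true
autorestart = true
```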

Launching

Once both files are saved, execute:
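This is presumably the standard Supervisor reload-and-start sequence (the program names here match the conf file names above):

```shell
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start hello_celery
sudo supervisorctl start hello_celery-celery
```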

Make sure that /devapps/hello_celery/logs directory exists. If necessary, use touch command to create empty files for hello_celery.log and hello_celery-celery.log files.
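For example:

```shell
mkdir -p /devapps/hello_celery/logs
touch /devapps/hello_celery/logs/hello_celery.log
touch /devapps/hello_celery/logs/hello_celery-celery.log
```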

Additional useful commands include replacing the word start with restart, stop or status. Also, in case of errors, the logs can be inspected using tail -f /devapps/hello_celery/logs/<logfile>.

Nginx

The last piece of the puzzle is the proxy server. Here, we use Nginx.

If the server is, for some reason, not set to listen on the usual port 80, the default configuration must be updated under /etc/nginx/sites-available/default, in which the line listen 80; should be replaced with the correct port (e.g. listen 1234;). When updated, the service can be initiated by making a symbolic link in sites-enabled and executing the following:
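Assuming the standard Debian/Ubuntu layout, that would be along these lines (the link usually already exists for the default site):

```shell
sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default
sudo service nginx restart
```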

Configuration

To configure Nginx to serve our application, we define the settings in /etc/nginx/sites-available/hello_celery.
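A sketch of such a configuration (the server_name and the upstream address are assumptions - the upstream must match the address Gunicorn binds to):

```nginx
# /etc/nginx/sites-available/hello_celery -- placeholders to be adjusted

upstream hello_celery_app_server {
    # the address Gunicorn binds to
    server 127.0.0.1:8000 fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;

    access_log /devapps/hello_celery/logs/nginx-access.log;
    error_log /devapps/hello_celery/logs/nginx-error.log;

    # serve static files directly, bypassing Gunicorn
    location /static/ {
        alias /devapps/hello_celery/hello_celery/static/;
    }

    # everything else is proxied to the application server
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://hello_celery_app_server;
    }
}
```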

Launching

Finally, we create another symbolic link.
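Analogously to the default site, we link our configuration into sites-enabled and restart Nginx:

```shell
sudo ln -s /etc/nginx/sites-available/hello_celery /etc/nginx/sites-enabled/hello_celery
sudo service nginx restart
```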

Now, the application should be running on the server.

Final Words

In this series of three posts, we have come a long way. We created an application from scratch, used Celery as an asynchronous task scheduler, and built an AJAX-based mechanism for monitoring the progress. Finally, we have also presented a quick way towards deployment in a few steps.

Obviously, Celery offers many more ways of scheduling tasks, just as AJAX offers many more options for handling requests. The core part is there, however, and it is up to your needs and imagination what you will use it for.

Hey! Do you mind helping me out?

It's been 4 years since I launched this blog. Now, I would like to bring it to the next level. I want to record some screencast tutorial videos on the very topics that brought you here!

If you want more of this stuff, you will help me greatly by filling out a survey I have prepared for you. By clicking below, you will be redirected to Google Forms with a few questions. Please answer them - it won't take more than 5 minutes, and I do not collect any personal data.

Thank you! I appreciate it.