Django and Celery - demo application, part III: deployment.
Introduction
The time has come: the application we created and developed is ready for deployment. In this post, we are going to show a quick way of setting it up for “production” using:
- RabbitMQ as a message broker,
- Gunicorn for running the application,
- Supervisor for monitoring both the Django and Celery parts, and
- Nginx as our proxy server.
The deployment strategy is largely inspired by this work. However, it is slightly more tuned towards the author’s own needs for this project.
Creating space
Linux permissions
The space for our application will be a designated Linux user with non-sudo privileges, sharing permissions through the devapps group.
sudo groupadd --system devapps
sudo useradd --system --gid devapps --shell /bin/bash --home-dir /devapps/hello_celery hello_celery
sudo mkdir -p /devapps/hello_celery
sudo chown hello_celery /devapps/hello_celery
sudo chown -R hello_celery:users /devapps/hello_celery
sudo chmod -R g+w /devapps/hello_celery
This series of commands sets up the user space for the project with all required permissions.
Virtual environment
As mentioned at the end of the earlier posts, we use a virtual environment to keep track of all package dependencies of each project. Once again, we need to install the virtual environment - this time on the server. For that, we will switch to the hello_celery user.
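These steps might look as follows (a sketch, assuming virtualenv is installed system-wide and the environment lives directly in the user's home directory):

```shell
# Switch to the application user
sudo su - hello_celery

# Create the virtual environment in the user's home directory
cd /devapps/hello_celery
virtualenv .

# Activate it for the current shell session
source bin/activate
```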
Porting the project
Last time, we committed our project to GitHub. Having prepared the space for it, we can now get it by cloning the repository and use the requirements.txt file to recreate our environment. The project is then stored under /devapps/hello_celery/hello_celery.
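A sketch of these commands (the repository URL is a placeholder for your own):

```shell
cd /devapps/hello_celery
source bin/activate

# Clone the project; <username> is a placeholder
git clone https://github.com/<username>/hello_celery.git

# Recreate the package environment inside the virtualenv
pip install -r hello_celery/requirements.txt
```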
Updating settings
There are two things we must not forget when moving the project towards production:
- We will use another IP address or domain name, other than 127.0.0.1.
- We do not want DEBUG = True, as this will expose sensitive information about the application to the outside world.
For this reason, settings.py needs to be updated with the proper domain name (or IP address), and DEBUG set to False.
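As an illustration, the relevant lines in settings.py could look like this (a sketch; the domain and IP below are placeholders, and ALLOWED_HOSTS is available in Django 1.5 and later):

```python
# settings.py - production values (domain and IP are placeholders)
DEBUG = False
TEMPLATE_DEBUG = False

# Django will only serve requests whose Host header matches this list
ALLOWED_HOSTS = ['example.com', '192.168.0.10']
```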
RabbitMQ
RabbitMQ should be installed on Ubuntu by default. Still, it could make sense to update it to the newest version.
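On Ubuntu, the update can be done through apt, for example:

```shell
sudo apt-get update
sudo apt-get install rabbitmq-server
```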
Sometimes, RabbitMQ may refuse to work, which happens if, for example, the server has previously been operating under a different IP address. If this could be the case, inspect the /etc/hosts file for consistency, and execute sudo /etc/init.d/rabbitmq-server restart.
Gunicorn
The project is now ready to be served by proper tools.
As the first tool, we will configure Gunicorn.
Assuming we are still logged in as the hello_celery user and operating inside the virtual environment, we can install Gunicorn through pip. To test it, we should bind it through the wsgi.py script in the following way.
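For example (the address and port below are placeholders, to be filled in with your own LAN values):

```shell
# Install Gunicorn inside the virtualenv
pip install gunicorn

# Run a test bind from the project directory;
# hello_celery.wsgi refers to the project's wsgi.py module
cd /devapps/hello_celery/hello_celery
gunicorn --bind 192.168.xxx.yyy:pppp hello_celery.wsgi:application
```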
Here, 192.168.xxx.yyy:pppp refers to a local IP address and a designated port we can use in our LAN to test the application. At this stage, the Celery-related part will not function, though.
Daemonizing Gunicorn
To daemonize Gunicorn, we can set up a script (like in here) and locate it under /devapps/hello_celery/bin/gunicorn_start.
#!/bin/bash
NAME="hello_celery"                                    # name of the application
DJANGODIR="/devapps/hello_celery/hello_celery"         # Django project directory
SOCKETFILE="/devapps/hello_celery/run/gunicorn.sock"   # unix socket to communicate through
USER=hello_celery                                      # user to run as
GROUP=devapps                                          # group to run as
NUM_WORKERS=3                                          # number of Gunicorn worker processes
DJANGO_SETTINGS_MODULE=hello_celery.settings           # settings module Django should use
DJANGO_WSGI_MODULE=hello_celery.wsgi                   # WSGI module name

echo "Starting $NAME as `whoami`"

# Activate virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH

# Create 'run' directory if it does not exist
RUNDIR=$(dirname $SOCKETFILE)
test -d $RUNDIR || mkdir -p $RUNDIR

# Start Gunicorn (do not daemonize here; Supervisor will handle that)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
  --name $NAME \
  --workers $NUM_WORKERS \
  --user=$USER \
  --group=$GROUP \
  --bind=unix:$SOCKETFILE \
  --log-level=debug \
  --log-file=-
At this point, the script should be able to run the application when executed as the hello_celery user. Note that this file needs to be given permission to execute.
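For example:

```shell
sudo chmod u+x /devapps/hello_celery/bin/gunicorn_start
```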
Supervisor
Remember that earlier we had to use two terminal windows to run the project: one for the Django application and the other for Celery. Here, we are going to use Supervisor to monitor both of them, and let them operate in the background.
Configuration
First, we must install Supervisor, if it is not already installed on our machine.
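On Ubuntu, this could be done through apt:

```shell
sudo apt-get install supervisor
```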
Once installed, it is configured through a set of conf files, each specific to the process it runs. Both of the scripts below should reside under /etc/supervisor/conf.d/.
hello_celery.conf (for the main application)
[program:hello_celery]
command = /devapps/hello_celery/bin/gunicorn_start
user = hello_celery
stdout_logfile = /devapps/hello_celery/logs/gunicorn_supervisor.log
redirect_stderr = true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
hello_celery_celery.conf (for the Celery part)
[program:hello_celery_celery]
command = /devapps/hello_celery/bin/python manage.py celery worker --loglevel=INFO
directory = /devapps/hello_celery/hello_celery
user = hello_celery
stdout_logfile = /devapps/hello_celery/logs/hello_celery-celery.log
autostart = true
autorestart = true
redirect_stderr = true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
Launching
Once both files are saved, execute:
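With Supervisor running as a system service, the new configurations can be picked up and started, for example, like this:

```shell
# Re-read the conf.d directory and apply the changes
sudo supervisorctl reread
sudo supervisorctl update

# Start both programs
sudo supervisorctl start hello_celery
sudo supervisorctl start hello_celery_celery
```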
Make sure that the /devapps/hello_celery/logs directory exists. If necessary, use the touch command to create empty gunicorn_supervisor.log and hello_celery-celery.log files.
Additional useful commands are obtained by replacing the word start with restart, stop, or status.
Also, in case of possible errors, the logs can be inspected using tail -f /devapps/hello_celery/logs/<logfile>.
Nginx
The last piece of the puzzle is the proxy server. Here, we use Nginx.
If the server is, for some reason, not set to listen on the usual port 80, the default configuration must be updated under /etc/nginx/sites-available/default, in which the line listen 80; should be replaced with the correct port (e.g. listen 1234;). When updated, the service can be initiated by making a symbolic link in sites-enabled and executing the following:
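For example (the exact service command may differ depending on your Ubuntu version):

```shell
sudo service nginx restart
```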
Configuration
To configure Nginx to serve our application, we define the settings in /etc/nginx/sites-available/hello_celery.
hello_celery (excluding all comments)
upstream hello_celery_app_server {
    # Communicate with Gunicorn through the unix socket
    server unix:/devapps/hello_celery/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 1234;
    server_name example.com;
    client_max_body_size 4G;

    access_log /devapps/hello_celery/logs/nginx-access.log;
    error_log /devapps/hello_celery/logs/nginx-error.log;

    location /static/ {
        alias /devapps/hello_celery/hello_celery/static/;
    }

    location /media/ {
        alias /devapps/hello_celery/hello_celery/media/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        # Proxy to Gunicorn only if the requested file does not exist on disk
        if (!-f $request_filename) {
            proxy_pass http://hello_celery_app_server;
            break;
        }
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /devapps/hello_celery/hello_celery/static/;
    }
}
Launching
Finally, we create another symbolic link.
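For example:

```shell
# Enable the site and reload Nginx
sudo ln -s /etc/nginx/sites-available/hello_celery /etc/nginx/sites-enabled/hello_celery
sudo service nginx restart
```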
Now, the application should be running on a server.
Final Words
In this series of three posts, we have come a long way. We created an application from scratch, used Celery as an asynchronous task scheduler, and added an AJAX-based mechanism for monitoring the progress. Finally, we presented a quick way towards deployment in a few steps.
Obviously, Celery offers many more ways of scheduling tasks, just as the AJAX API offers many more options for handling requests. The core part is there, however, and it is up to your needs and imagination what you will use it for.