
## Introduction

Celery has been the go-to tool for scheduling Python jobs for quite some time. With the growing interest in Python, often driven by machine learning, Celery is frequently the answer to the problem of executing lengthy computations on the server side. Regardless of whether your application was designed to handle extensive calculations to begin with, or it simply grew bigger and you are looking for a way to improve its responsiveness, chances are you have already stumbled across Celery.

In this post, we show a simple way of organizing computations as a series of atomic Celery tasks, with one master task appointed to keep an eye on the progress of its children. What is more, we will also discuss how to organize the communication so that it is easy for the user to track progression.

## Preparation

Assuming you have your Django project set up, and Celery installed and integrated into it, you are pretty much ready to start. Now, imagine that you have a number of heavy computational tasks that you would like to execute in order. What these tasks are matters less. They could be some bulky signal processing or optimization calculations, or just a simple for-loop counting to one billion. What matters is that they are atomic and take long enough to complete that it is justified to define them separately.

For simplicity, we will assume they are simple for-loops with a time delay. Also for simplicity, we will define them all in the same tasks.py file belonging to one application. In case your project consists of several applications in the Django sense, you only need to take care of two things:

1. You ensure the correct imports (e.g. from yourapp.tasks import some_task_function) in the parent tasks.py file.
2. You register all files containing Celery tasks in the celery.py file, so that they are recognized as Celery tasks.

#### MainProj/MainProj/celery.py
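A minimal sketch of such a celery.py. The project name MainProj and the app names parent and children are assumptions taken from this example; adjust them to your own project layout.

```python
# MainProj/MainProj/celery.py -- names are assumptions from this example
import os

from celery import Celery

# Make sure the Django settings are visible to the Celery workers.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'MainProj.settings')

app = Celery('MainProj')
app.config_from_object('django.conf:settings', namespace='CELERY')

# Register every application whose tasks.py contains Celery tasks;
# called with no arguments, it scans all INSTALLED_APPS instead.
app.autodiscover_tasks(['parent', 'children'])
```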

Here, we have implicitly assumed that we have two applications, parent and children, each having its own tasks.py file. However, there can be more, or there can be just one application, in which case tasks.py will contain both the parent and the children tasks.

As mentioned before, the children tasks can be anything. Here, we will define just one, and yes… a for-loop.

There are some interesting things about this implementation. First of all, we define an optional argument testmode that defaults to False. It exists purely for debugging. When designing tasks to calculate something specific, we will certainly want a way of testing the function before executing it in an asynchronous mode. The testmode argument prevents an error caused by current_task being undefined when the function is called directly.

Secondly, exceptions occurring in asynchronous tasks can be a real pain to handle, as they occur in a parallel process. For this reason, we surround the body of the function with a try-except statement and let possible exceptions escalate in a controlled way. By defining a dedicated field in meta, as well as in the outcome, to store the information, we can propagate the error all the way to the front end. With this set up, we are free to define custom exceptions using the raise statement, which we can use for assertions. This is especially useful when designing research-oriented tools.

Finally, it is also important that the function returns a consistent output regardless of whether it exits cleanly or has raised an exception. As we are soon going to monitor this task with another task, we can save ourselves a lot of trouble just by giving outcome the same structure as meta.

Once we have all our children task functions defined, we are ready to define the parent task. The parent task can (and should) be used solely for executing and monitoring the progress of its children.

This function defines the execution flow of the sequence of operations of its children. In case one of the children raises an exception, it will exit and return its outcome to the parent task, which will also exit, passing the exception information farther.

To monitor the progress of a child, we use another function, watch. For simplicity, we can define this function in the same file.

The essence of this function is a while loop that keeps the parent task monitoring a child. For this reason, it is executed synchronously inside main_flow (not through the .delay() method).

For as long as the child process is running, we can pass all of its meta information to the parent process, simply by assigning the child's metadata to the parent's. However, because this is a while loop, it is extremely important to be careful about the exit condition(s). For this reason, we define two ways this function can break the loop and exit:

1. The child process exits with an "OK" or "ERROR" status, making the monitoring no longer necessary.
2. The watch function itself raises an exception, for example due to operating-system delays that leave the child unable to pass any information, or simply because we mislabeled one of the keys in meta during development! Either way, we need to be prepared for this case. Otherwise, we can end up with the parent task being locked forever, which makes killing the process a rather difficult operation. In fact, we should define a third mechanism, a timeout, in case the child task stays "PENDING" and never gets started. However, this is beyond the scope of this post.

Another thing is that the monitoring function should not be too "oppressive" when polling for the children's status, or else it can slow things down. This is why we include a small delay in watch's loop, e.g. 100 ms.

Finally, all Celery task results end up in the result backend (the database) by default. In case it is not our intention to keep that information any longer, we can delete the reference from the backend inside the finally clause, which makes it happen irrespective of what else happens.

## Executing in views

The only thing that remains is to execute the main_flow function somewhere, and it will take care of the rest. By all means, it is the simplest thing: all we need to do is define a specific function in views and execute the task using the .delay() method.

#### Somewhere in views.py

For as long as we pass the task.id, we can use other views and AJAX to monitor progression and pass information about potential exceptions, just like we did earlier.

## Conclusion

In this post, we have demonstrated a way to use a dedicated Celery task to monitor several other Celery tasks executed in sequence. We have also discussed propagating information about potential exceptions, and ways to ensure that we do not end up in an infinite loop when monitoring the tasks.

#### Hey! Do you mind helping me out?

It's been 4 years since I launched this blog. Now, I would like to bring it to the next level. I want to record some screencast tutorial videos on the very topics that brought you here!

If you want more of this stuff, you will help me greatly by filling out a survey I have prepared for you. By clicking below, you will be redirected to Google Forms with a few questions. Please, answer them. They won't take more than 5 minutes, and I do not collect any personal data.

Thank you! I appreciate it.