The idea is demonstrated using Docker containers.
Create a Docker network (called 'cel') so that our test containers are connected to each other but isolated from everything else:
docker network create cel
Starting a Redis broker with Docker is just as simple as:
docker run --rm --net cel --name myredis redis:alpine
Now we have a 'myredis' database running, reachable from other containers that will be added to the network 'cel'.
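To verify that the broker is reachable, you can run a throwaway container on the same network and ping it (a quick check; the 'myredis' hostname resolves through Docker's embedded DNS, and the redis:alpine image ships with redis-cli):

docker run --rm --net cel redis:alpine redis-cli -h myredis ping

It should reply with PONG.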
Let's create a new container with the latest stable Python:
docker run --rm -it --net cel --name myworker python:3.5.2-alpine ash
Install the libraries and add a user 'test', because Celery workers cannot be executed as root for security reasons:
/ # pip install redis celery requests
/ # adduser test -D && su - test
Now write your Celery app and its tasks inside the container; see for example worker.py.
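As a rough sketch of what worker.py might contain (the broker URL, the result backend, and the fetch_status task are illustrative assumptions, not necessarily the actual file):

import requests
from celery import Celery

# Point both the broker and the result backend at the 'myredis' container
# on the 'cel' network.
app = Celery('worker',
             broker='redis://myredis:6379/0',
             backend='redis://myredis:6379/0')

@app.task
def fetch_status(url):
    # Hypothetical example task: fetch a URL and return its HTTP status code.
    return requests.get(url).status_code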
Then launch your Celery worker, pointing it at the Python module (in this case 'worker'):
celery worker -A worker -l info
The worker is now connected and ready to execute tasks as they are queued on Redis.
Run a new container to act as the client:
docker run --rm -it --net cel --name myclient python:3.5.2-alpine ash
/ # pip install redis celery requests
This client must have access to the same code as the worker (e.g. the worker.py file) and use it to call delayed tasks. For details on delay(), see the relevant Celery documentation.
An example that queues 5 asynchronous tasks is in the file client.py.
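As a minimal sketch of what client.py could look like, assuming the hypothetical fetch_status task from the worker.py sketch above:

from worker import fetch_status

# delay() queues each task on Redis and returns an AsyncResult immediately,
# without waiting for the worker to run it.
results = [fetch_status.delay('https://example.com') for _ in range(5)]

# get() blocks until the worker has stored each result in the backend.
for res in results:
    print(res.get(timeout=30))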
Test the queue with
python client.py
and check what happens on the worker side.
Remove containers and private network:
docker stop myredis myworker myclient
docker network rm cel