The issue is with the startup command. You've essentially started the workers serially, so celery -A app worker -l info -Q parser-n -n worker2 won't execute until after worker1 exits. The easiest way to fix this is to give each worker its own docker-compose service, for example:
email-n:
  restart: always
  build: ./app
  volumes:
    - ./app:/app/
  depends_on:
    - redis
  command: bash -c 'python3 manage.py makemigrations --noinput &&
                    python3 manage.py migrate --noinput &&
                    celery -A app worker -l info -Q email-n -n worker1'
...
parser-n:
  restart: always
  build: ./app
  volumes:
    - ./app:/app/
  depends_on:
    - redis
  command: celery -A app worker -l info -Q parser-n -n worker2
sms-n:
  restart: always
  build: ./app
  volumes:
    - ./app:/app/
  depends_on:
    - redis
  command: celery -A app worker -l info -Q sms-n -n worker3
celery:
  restart: always
  build: ./app
  volumes:
    - ./app:/app/
  depends_on:
    - redis
  command: celery -A app worker -l info
beat:
  restart: always
  build: ./app
  volumes:
    - ./app:/app/
  depends_on:
    - redis
  command: celery -A app beat -l info
api:
  restart: always
  build: ./app
  volumes:
    - ./app:/app/
  depends_on:
    - redis
  command: python3 manage.py runserver 0.0.0.0:1337
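With each worker in its own service, docker-compose up starts them all concurrently. Assuming the service names above, a quick way to confirm that every worker is consuming its queue is to broadcast an inspect command from any of the containers:

docker-compose up -d
# Lists, for each running worker, the queues it consumes from
docker-compose exec celery celery -A app inspect active_queues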
Alternatively, you can start a single worker that consumes multiple queues, e.g., -Q parser-n,email-n,sms-n, as in the sketch below.
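For instance, the three queue-specific services above could be collapsed into one service (a minimal sketch; the service name workers-all and the node name worker1 are illustrative choices):

workers-all:
  restart: always
  build: ./app
  volumes:
    - ./app:/app/
  depends_on:
    - redis
  # One worker process consuming all three queues
  command: celery -A app worker -l info -Q parser-n,email-n,sms-n -n worker1

This needs fewer containers, but tasks from all three queues then share that single worker's concurrency pool.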
Finally, you can also daemonize celery inside your container, but then you need a graceful way to stop the daemonized workers when you stop the container; that is largely outside the scope of this question.
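If you do go that route, a rough sketch might look like the following, using celery multi to daemonize one node per queue and a bash trap to stop them on SIGTERM (the workers service name, the node names w1/w2/w3, and the /tmp pid/log paths are illustrative, not from the question):

workers:
  restart: always
  build: ./app
  volumes:
    - ./app:/app/
  depends_on:
    - redis
  # Daemonize one node per queue, keep the container in the foreground
  # via tail/wait, and stop the nodes gracefully when docker sends SIGTERM
  command: bash -c 'trap "celery multi stopwait w1 w2 w3 --pidfile=/tmp/%n.pid" TERM INT;
                    celery multi start w1 w2 w3 -A app -l info
                    -Q:w1 email-n -Q:w2 parser-n -Q:w3 sms-n
                    --pidfile=/tmp/%n.pid --logfile=/tmp/%n.log;
                    tail -f /tmp/*.log & wait $!'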