March 2, 2022, 2 min read

Ensure python-redis lock released when systemd service stopped

We encountered a problem with a Python process being stopped and related resources not being properly finalized. More concretely, we are using https://github.com/redis/redis-py for its locking context manager, like so:

import redis
from django.conf import settings

cache = redis.StrictRedis(**settings.REDIS)

# acquire a distributed lock; released by __exit__() when the block exits
with cache.lock(...):
    do_long_processing()

That code runs as part of a custom Django management command (basically just a script) which is wrapped in a systemd service, so it runs in the background on our server and restarts itself if needed. During deployments, we temporarily stop that service so it can pick up the new code when it comes back up, by running:

systemctl stop the_service
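For reference, a minimal sketch of what such a unit file might look like (the paths and the command name are illustrative, not our actual setup):

[Unit]
Description=Background worker running a Django management command

[Service]
# hypothetical project path and management command name
WorkingDirectory=/srv/app
ExecStart=/srv/app/venv/bin/python manage.py process_jobs
Restart=on-failure

[Install]
WantedBy=multi-user.target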

However, to my surprise, the lock created above is not released in this case (as can easily be verified in redis-cli with the KEYS <key-name> command). The release should happen automatically as part of the context manager's __exit__() method, but that method never gets executed here.
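For illustration, assuming the lock was created under a hypothetical key name my-lock, the check would look like this:

$ redis-cli
127.0.0.1:6379> KEYS my-lock
1) "my-lock"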

After some digging, I came across an article that explained the behavior.

Basically, by default systemd sends a SIGTERM signal to the Python process of the service being stopped, and Python does not handle that gracefully: the default disposition for SIGTERM terminates the process immediately, so finalization code like __exit__() methods or finally clauses never gets a chance to run. What we can do instead is tell systemd via the service's unit file to send a SIGINT instead, which Python raises as a KeyboardInterrupt, and that we can handle as needed inside our own code (e.g. when there is extra code to ignore certain failure conditions and keep going, as was the case for us):

[Service]
KillSignal=SIGINT
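With that in place, the interrupt can be handled on the Python side. A minimal sketch of what that might look like (the lock name and the decision to swallow the interrupt are illustrative):

import redis
from django.conf import settings

cache = redis.StrictRedis(**settings.REDIS)

try:
    with cache.lock("my-lock"):
        do_long_processing()
except KeyboardInterrupt:
    # systemd's SIGINT arrives here as a KeyboardInterrupt; by the time we
    # catch it, the with-block's __exit__() has already released the lock,
    # so we can shut down (or keep going) cleanly.
    pass

Note that systemd will still follow up with a SIGKILL if the process does not exit within the stop timeout (TimeoutStopSec), so the handler should finish quickly.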