Hangfire.Pro.Redis connection leak when credentials are invalid

We’ve recently been investigating an issue across our Redis instances where they hit the maximum configured number of connections. We found this is related to Hangfire.Pro.Redis: in environments for specific microservices that are not in use, we did not have the Redis credentials configured.

Hangfire.Pro.Redis gets into a retry loop: it connects to the server, issues a command, and is rejected for not providing an authentication key.
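The failure mode we suspect can be sketched in simplified form. This is an illustration of the general leak pattern, not Hangfire’s actual code: a retry loop that creates a fresh connection on every authentication failure but never disposes the failed one accumulates open connections indefinitely.

```python
# Simplified illustration of a connection leak in a reconnect loop.
# FakeConnection stands in for a real client connection; the class-level
# counter represents the server's view of how many connections are open.

class FakeConnection:
    open_connections = 0

    def __init__(self):
        FakeConnection.open_connections += 1
        self.closed = False

    def execute(self, command):
        # The server rejects every command because no auth key was provided.
        raise PermissionError("NOAUTH Authentication required.")

    def close(self):
        if not self.closed:
            self.closed = True
            FakeConnection.open_connections -= 1


def retry_loop(attempts, dispose_on_error):
    for _ in range(attempts):
        conn = FakeConnection()
        try:
            conn.execute("PING")
        except PermissionError:
            if dispose_on_error:
                conn.close()  # correct behavior: release the failed connection
            # otherwise the connection object is abandoned but stays open


retry_loop(100, dispose_on_error=False)
leaked = FakeConnection.open_connections  # 100 connections left open
```

With `dispose_on_error=True` the counter stays flat, which is the behavior we would expect from the library.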

When this happens, the connection that errored doesn’t appear to be closed down properly and remains open forever. Over time, for example, we have noticed some of our Kubernetes pods with Hangfire.Pro.Redis holding open 2000+ connections to Redis.

I’ve included an example graph showing the Redis connection count with 3 Kubernetes pods running a Hangfire server without the Redis credentials set. You can see the connections grow across 11/28; we then corrected the credentials and restarted these 3 pods around 2pm, after which the number of connections to Redis is stable and no longer increasing.
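For anyone wanting to check whether they’re affected, the connection count shown in the graph can be read straight from Redis. A small sketch, assuming the standard key:value reply format of the `INFO clients` command (the text would come from redis-cli or any Redis client library):

```python
# Extract the connected_clients field from a Redis `INFO clients` reply.
# INFO replies are plain text with one "key:value" pair per line.

def connected_clients(info_text):
    for line in info_text.splitlines():
        if line.startswith("connected_clients:"):
            return int(line.split(":", 1)[1].strip())
    raise ValueError("connected_clients not found in INFO output")


# Example reply as returned by `redis-cli INFO clients`:
sample = """# Clients
connected_clients:2048
cluster_connections:0
maxclients:10000
"""
print(connected_clients(sample))  # → 2048
```

Polling this value (e.g. once a minute) and graphing it makes the steady upward drift easy to spot.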

On every application where we have seen this so far, the version in use was Hangfire.Pro.Redis 3.0.0.

Thanks for reporting this! Unfortunately, I can’t reproduce this just by trying to connect to a Redis instance with a wrong password – connections are retrying, but not leaking, so there is possibly some race condition in the reconnection logic. Can you tell me the following:

  • Your configuration options related to Hangfire.Pro.Redis.
  • The exact version of the Hangfire.Pro.Redis package you are using.

I have also just released Hangfire.Pro.Redis 3.1.0 with faster reconnection logic and a change to the Dispose method – it no longer waits until pending commands are processed, so it could potentially fix this behavior as a side effect.