Distributing work across multiple workers

I’m looking at an environment with one host responsible for hosting the dashboard and registering jobs, and 2-10 workers responsible for processing those jobs.

The documentation suggests that this is possible (e.g. https://docs.hangfire.io/en/latest/background-processing/running-multiple-server-instances.html), but I’m unsure how this should be implemented to ensure the work is evenly distributed across the workers.
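My understanding from that page is that each worker simply points at the same storage and starts its own server instance, roughly like this (the connection string is a placeholder, and any shared storage should do):

```csharp
using System;
using Hangfire;

// Each worker process connects to the same shared storage and starts
// a BackgroundJobServer; the servers then compete for queued jobs.
GlobalConfiguration.Configuration.UseSqlServerStorage("<connection-string>");

using (var server = new BackgroundJobServer())
{
    Console.WriteLine("Hangfire worker started. Press ENTER to exit.");
    Console.ReadLine();
}
```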

In a simple scenario with 2 jobs registered to run every 20 seconds (*/20 * * * * *) and just 2 workers, the first worker to start processes both jobs.
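For context, the registration on the host looks roughly like this (the storage connection string and job bodies are placeholders, not my real implementation):

```csharp
using System;
using Hangfire;

// Placeholder storage; the host and all workers point at the same one.
GlobalConfiguration.Configuration.UseSqlServerStorage("<connection-string>");

// Two recurring jobs firing every 20 seconds (a 6-field cron expression
// with a seconds component requires Hangfire 1.7+).
RecurringJob.AddOrUpdate("job-a", () => Console.WriteLine("Job A"), "*/20 * * * * *");
RecurringJob.AddOrUpdate("job-b", () => Console.WriteLine("Job B"), "*/20 * * * * *");
```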

I see people in the community suggesting the use of queues: assign specific jobs to specific queues, and specific workers to process those queues. That’s problematic for several reasons:

  • developers will need to decide upon a queue for every job
  • it’s not distributing the work evenly, as one worker might be doing no work if the queue(s) it’s polling are empty
  • it’s not scalable, as we cannot just spin up extra workers without also knowing implementation details of the queues
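For concreteness, the queue-based approach being suggested would look something like this, as I understand it (queue names and job methods are hypothetical):

```csharp
using Hangfire;

public class Jobs
{
    // Each job is pinned to a specific queue via the [Queue] attribute
    // (Hangfire queue names must be lowercase letters, digits, underscores)...
    [Queue("worker_1")]
    public static void RunA() { /* ... */ }

    [Queue("worker_2")]
    public static void RunB() { /* ... */ }
}

// ...and each worker process is configured to poll only its own queue(s):
var server = new BackgroundJobServer(new BackgroundJobServerOptions
{
    Queues = new[] { "worker_1" }
});
```

This is exactly what makes the bullet points above a problem: the job code and every worker’s startup configuration must agree on queue names.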

Am I missing something, or are queues really the solution?
