Is there a way to distribute load evenly?


We have multiple instances processing a single queue. We have some recurring jobs that spawn multiple additional jobs to parallelise a heavy-load process.
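To illustrate, the fan-out pattern described above looks roughly like this (job and method names here are hypothetical, but the `RecurringJob` and `BackgroundJob` APIs are standard Hangfire):

```csharp
public class HeavyProcessJob
{
    public void Run()
    {
        // The recurring job splits the work and enqueues one child job
        // per chunk; any server polling the queue may pick them up.
        for (int i = 0; i < 10; i++)
        {
            int chunk = i;
            BackgroundJob.Enqueue<ChunkProcessor>(p => p.Process(chunk));
        }
    }
}

// Registered once at startup:
// RecurringJob.AddOrUpdate<HeavyProcessJob>("heavy-process", j => j.Run(), Cron.Hourly);
```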

However, we’re seeing that one greedy instance usually takes most of those jobs while the others sit idle.

Is there a setting to ensure that instances don’t take newly enqueued jobs if there are other instances with fewer jobs in execution?


I don’t think it’s really designed for that.

You could reduce the number of worker threads. If you have Hangfire.Pro, you could look at throttling. You could also specify a QueuePollInterval that isn’t 0, which could prevent one server from instantly picking up all of the jobs. However, all of these options could lead to under-utilized servers and a backlog of jobs if things get really busy.
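As a minimal sketch of the first and third suggestions, assuming ASP.NET Core with the Hangfire.SqlServer storage (adjust the connection string and values for your setup):

```csharp
services.AddHangfire(config => config
    .UseSqlServerStorage(connectionString, new SqlServerStorageOptions
    {
        // A non-zero poll interval makes each server wait between queue
        // polls, giving other servers a chance to pick up jobs.
        QueuePollInterval = TimeSpan.FromSeconds(15)
    }));

services.AddHangfireServer(options =>
{
    // Fewer worker threads per server caps how many jobs a single
    // instance can execute concurrently.
    options.WorkerCount = 5;
});
```

With a lower WorkerCount, a server that is already saturated simply stops fetching, so the remaining jobs stay in the queue for the other instances.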

If one server can handle your workload, that doesn’t seem like a problem that needs to be solved.