Requeue on the same queue, or possibility to have server groups?

We have 2 apps (an admin site and a user portal) using the same database, which includes the Hangfire tables. This means that a job started on the user portal can be processed by the admin site, which is not always desirable.

To get around this I’ve created 2 distinct sets of queues (low_portal, low_admin, high_portal, etc.), and neither of the apps declares that it handles the “default” queue.
It works well, except when Hangfire requeues a failed job (or the Requeue button is clicked in the Hangfire dashboard): the job is added back to the default queue, which no one is handling.
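For reference, a minimal sketch of how each app declares only its own queues (the queue names are mine; the BackgroundJobServer call is illustrative and assumes storage has already been configured):

using Hangfire;

// User portal: only portal queues, no "default", so it never picks up
// jobs enqueued for the admin site (and vice versa in the admin app).
var server = new BackgroundJobServer(new BackgroundJobServerOptions
{
    Queues = new[] { "high_portal", "low_portal" }
});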

Is it possible for Hangfire to requeue on the same queue the job was created on? Or is there a better way to setup my scenario?

Yes, it is. But the answer is not as straightforward as it may sound. I found (and maybe it is just me) that this is always a tricky thing to get right in Hangfire. It’s easy to end up with what you described (jobs going to the default queue), and there are a number of ways to achieve your goal depending on your situation. There are several other discussions on this subject in this forum.

For my scenario (and this might or might not be needed for what you want to do) I ended up with a number of extensions to the basic functionality:

  • Machine-specific queues (which I need for job continuations that have to execute on the same machine, because they rely on machine-specific file resources); see the sketch after this list.
  • Preservation of the original queue (so later requeues do not get palmed off to a different machine).
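A minimal sketch of the machine-specific-queue idea (my own illustration, using nothing beyond BackgroundJobServerOptions; queue names in older Hangfire versions may only contain lowercase letters, digits and underscores, hence the sanitising):

using System;
using System.Text.RegularExpressions;
using Hangfire;

// Derive a queue name from the machine name, sanitised to characters
// Hangfire accepts in queue names.
var machineQueue = Regex.Replace(
    Environment.MachineName.ToLowerInvariant(), "[^a-z0-9_]", "_");

// Each server listens on the shared queues plus its own machine queue,
// so continuations enqueued to that queue always run on the same machine.
var server = new BackgroundJobServer(new BackgroundJobServerOptions
{
    Queues = new[] { machineQueue, "default" }
});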

I think your most immediate question is around the last one.

For this I have the following JobFilterAttribute (apologies to the original author of this code: it is not mine, but was, if I recall correctly, obtained from a different topic on this forum; I just don’t remember who or where exactly).

using Hangfire.Common;
using Hangfire.States;
using Hangfire.Storage;

public class PreserveOriginalQueueAttribute : JobFilterAttribute, IApplyStateFilter
{
    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // Activating only when enqueueing a background job
        if (!(context.NewState is EnqueuedState enqueuedState)) return;

        // Checking if an original queue is already set
        var originalQueue = JobHelper.FromJson<string>(
            context.Connection.GetJobParameter(
                context.BackgroundJob.Id,
                "OriginalQueue"));

        if (originalQueue != null)
        {
            // Override any other queue value that is currently set (by other filters, for example)
            enqueuedState.Queue = originalQueue;
        }
        else
        {
            // Queueing for the first time, we should set the original queue
            context.Connection.SetJobParameter(
                context.BackgroundJob.Id,
                "OriginalQueue",
                JobHelper.ToJson(enqueuedState.Queue));
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // Nothing to do when the state is unapplied
    }
}

It does what the name suggests: any job decorated with this attribute will, on its first ‘run’, store the queue name as a job parameter (for that job instance), and on subsequent runs (requeues) of that same job instance it ensures the same queue is used again. You can still add other filters (or attributes) to determine what the queue should be on the first run, but all subsequent runs are forced to the queue that was set initially.
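To apply it to every job rather than decorating methods one by one, it can also be registered as a global filter. A minimal sketch (the job class, method and queue name are illustrative):

using Hangfire;
using Hangfire.Common;

// Option 1: register once at startup so every background job preserves its original queue.
GlobalJobFilters.Filters.Add(new PreserveOriginalQueueAttribute());

// Option 2: decorate individual jobs, combined with a queue-selecting attribute.
public class PortalJobs
{
    [Queue("high_portal")]
    [PreserveOriginalQueue]
    public void SendNotification() { /* ... */ }
}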


Thanks Hans! I will give it a shot, that sounds like just what I’m after.

This works (at least when manually re-queuing from the Dashboard; I didn’t try making a job throw an exception and retry).

By the way, I found the original author: it’s none other than odinserj himself :slight_smile: https://github.com/HangfireIO/Hangfire/pull/502

I want to use queues to ensure high-priority jobs get processed even when low-priority jobs are backed up. I ran into a scenario where low-priority jobs calling a throttled API backed up the default queue, causing high-priority jobs to be delayed for days. It was the throttling of the API that caused the queue to back up, rather than any hardware limit.

If possible, I would like to avoid a separate worker instance and simply manage this within a single worker that can service multiple queues independently. I do not want another instance to deploy, manage, etc. if I do not have to.

I have separated my jobs into appropriate queues and applied @Hans_Engelen’s PreserveOriginalQueueAttribute from above globally. I have tested with jobs locally, and it appears that the queues are serviced independently, but I’d like some confirmation, as we’ve already been bitten by a few gotchas like this. Even if [throttled_queue] is backed up with hundreds of jobs, I’d like to be sure that [high_priority_queue] will run its jobs quickly.

As pointed out, the code was from Odinserj himself. Just want to make sure credit goes where it is deserved.

Anyway, I am currently on vacation so I can’t really check much, but in theory that should work. There are probably gotchas, though, specifically (off the top of my head) in areas like worker count. Much of this is explained here: Hangfire Features.

My main concern would be that queue priority is not something you can tweak heavily with a single worker. It is basically just the order in which you pass the queues to the worker at startup, nothing more.

i.e. something like the sketch below.
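A minimal sketch of a single-server setup (the queue names are illustrative; how strictly the array order translates into priority depends on the storage implementation):

using System;
using Hangfire;

// One server processing several queues with a single, shared pool of worker threads.
// The order of the Queues array is the only priority knob available here.
var server = new BackgroundJobServer(new BackgroundJobServerOptions
{
    Queues = new[] { "high_priority_queue", "throttled_queue", "default" },

    // Default is Environment.ProcessorCount * 5; all queues share these threads,
    // which is exactly the starvation risk described below.
    WorkerCount = Environment.ProcessorCount * 5
});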

Now I could see situations where you run out of threads in the pool (the default worker count is Environment.ProcessorCount * 5). You can increase that, of course, but the chance remains that long-running jobs on a busy setup will eventually starve even your highest-priority queue of available threads in the pool. Raising the default amount reduces the odds, but the basic risk stays.

This is where having separate workers is a much better option. As each worker has its own pool of worker threads, there is no way a pack of slow-running jobs on one queue (and worker) can starve the priority queue running on the second worker.
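One way to get separate pools without a second deployment is to start two BackgroundJobServer instances in the same process, one per queue; a sketch only, assuming storage is already configured (separate processes, as described further down, give even stronger isolation):

using Hangfire;

// Two servers in one process, each with its own worker-thread pool,
// so slow jobs on the throttled queue cannot exhaust the threads
// that serve the high-priority queue.
var highPriorityServer = new BackgroundJobServer(new BackgroundJobServerOptions
{
    ServerName = "high-priority",
    Queues = new[] { "high_priority_queue" },
    WorkerCount = 5
});

var throttledServer = new BackgroundJobServer(new BackgroundJobServerOptions
{
    ServerName = "throttled",
    Queues = new[] { "throttled_queue" },
    WorkerCount = 2
});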

Just my two cents on it.

Thank you for the response!

I was afraid separate worker processes would be the safest bet. I found a relatively easy workaround: I set up multiple websites pointing at the same directory, configured the queues each one listens on through an environment variable, and then defined that environment variable per application pool.

Each process is listening to its own queue.
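For completeness, a minimal sketch of that pattern; the environment variable name (HANGFIRE_QUEUES) is my own, not something Hangfire defines:

using System;
using Hangfire;

// Each application pool defines HANGFIRE_QUEUES (e.g. "high_portal,low_portal"),
// so the same code base starts a server that only listens on its own queues.
var queues = (Environment.GetEnvironmentVariable("HANGFIRE_QUEUES") ?? "default")
    .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

var server = new BackgroundJobServer(new BackgroundJobServerOptions
{
    Queues = queues
});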