DisableConcurrentExecution and prevent queuing of new job instances while existing job is in progress

I am relatively new to Hangfire but already find it incredibly useful. I have however come up against an issue that I am not sure how to resolve…

I have a couple of recurring jobs that import data from an external service and run once per minute. That service is throttled, so the jobs sometimes run longer than a minute, and when they do, the next instance is already queued up and ready to execute. This then compounds, so I end up with many copies of the same job in the queue that are not required.

In my interfaces I have added the following 2 attributes on the methods in question:

[DisableConcurrentExecution(10)]
[AutomaticRetry(Attempts = 0, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
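
For context, here is a minimal sketch of how those two attributes might sit on an interface method (the interface and method names below are illustrative, not from my actual code). `DisableConcurrentExecution` takes a timeout in seconds to wait for the distributed lock, and with `Attempts = 0` a failed instance is deleted rather than retried:

```csharp
using System.Threading.Tasks;
using Hangfire;

public interface IImportService
{
    // Wait up to 10 seconds for the method's distributed lock; if it
    // cannot be acquired, the job fails, and with Attempts = 0 the
    // failed instance is deleted instead of being retried.
    [DisableConcurrentExecution(timeoutInSeconds: 10)]
    [AutomaticRetry(Attempts = 0, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
    Task ImportDataAsync();
}
```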

If the previous job is still running when the next one is due to execute, I would like it to be skipped until the next iteration, but I am not sure how to achieve that.

I’m using Hangfire for recurring jobs as well and I have encountered the same thing. I know there are extensibility points via ‘server filters’, and there are two main interfaces you can tap into to alter the server/scheduler behavior (I think the other one is called something like a ‘client filter’).

In this case I don’t know if you can prevent Hangfire from scheduling the job outright, but I would imagine you could intercept the scheduling process or the processing pipeline and perform a check to see if another job of the same type is already running. If so, you would short-circuit and return a well-known result indicating an outcome of ‘skipped / another instance running’, or something along those lines.
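
A sketch of what that client-filter interception might look like, loosely based on a pattern that has circulated in the Hangfire community: the filter cancels creation of a new instance when a ‘Running’ flag is set in the recurring job’s hash, and a state filter maintains that flag. Treat this as an unvetted sketch under assumed storage-key conventions, not a finished implementation:

```csharp
using System;
using System.Collections.Generic;
using Hangfire.Client;
using Hangfire.Common;
using Hangfire.States;
using Hangfire.Storage;

public class SkipWhenPreviousJobIsRunningAttribute : JobFilterAttribute, IClientFilter, IApplyStateFilter
{
    public void OnCreating(CreatingContext context)
    {
        // Only applies to recurring jobs; they pass their id as a parameter.
        if (!(context.Connection is JobStorageConnection connection)) return;
        if (!context.Parameters.TryGetValue("RecurringJobId", out var value)) return;
        var recurringJobId = value as string;
        if (string.IsNullOrWhiteSpace(recurringJobId)) return;

        var running = connection.GetValueFromHash($"recurring-job:{recurringJobId}", "Running");
        if ("yes".Equals(running, StringComparison.OrdinalIgnoreCase))
        {
            // Short-circuit: the previous instance has not finished,
            // so cancel creation of this one (the 'skipped' outcome).
            context.Canceled = true;
        }
    }

    public void OnCreated(CreatedContext context) { }

    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // Note: job parameters are stored JSON-serialized, so a real
        // implementation may need to deserialize this value first.
        var recurringJobId = context.Connection.GetJobParameter(context.BackgroundJob.Id, "RecurringJobId");
        if (string.IsNullOrWhiteSpace(recurringJobId)) return;

        if (context.NewState is EnqueuedState)
        {
            SetRunningFlag(transaction, recurringJobId, "yes");
        }
        else if (context.NewState.IsFinal)
        {
            SetRunningFlag(transaction, recurringJobId, "no");
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction) { }

    private static void SetRunningFlag(IWriteOnlyTransaction transaction, string recurringJobId, string value)
    {
        transaction.SetRangeInHash(
            $"recurring-job:{recurringJobId}",
            new[] { new KeyValuePair<string, string>("Running", value) });
    }
}
```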

The one tricky part of building the ‘is another job running’ logic is the case where the other job is on a dead or non-responding node; you would need to account for whatever resiliency considerations apply there.

Hangfire itself likely handles dead node/job detection, and your scheduling of the ‘next run’ would simply be bound to that process and its detection-and-handling timeframes.
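
One way to get some of that resiliency ‘for free’ is to lean on storage-level distributed locks instead of a flag: the job body itself tries to take a lock with a zero wait and returns immediately if another instance holds it. How promptly a crashed worker’s lock is released depends on the storage provider, and the class, method, and lock-resource names below are illustrative assumptions:

```csharp
using System;
using Hangfire;
using Hangfire.Storage;

public class ImportJob
{
    [AutomaticRetry(Attempts = 0, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
    public void Run()
    {
        try
        {
            // Wait zero seconds: if another instance already holds the
            // lock, this throws immediately instead of queuing up behind it.
            using (var connection = JobStorage.Current.GetConnection())
            using (connection.AcquireDistributedLock("import-job-lock", TimeSpan.Zero))
            {
                // ... actual import work goes here ...
            }
        }
        catch (DistributedLockTimeoutException)
        {
            // Previous run still in progress: skip this iteration quietly.
        }
    }
}
```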

Ideally, this would be something available from Hangfire itself, but until then you could likely implement your own. I could see how some implementations may prefer or require the current behavior, but it would be nice to have a choice between the two models.

https://docs.hangfire.io/en/latest/extensibility/using-job-filters.html

Sorry for the delayed reply, I didn’t get a notification. Thank you for the tip, I will take a look at the job filters and see if it is possible to achieve something that prevents the build-up of jobs I am seeing due to this issue.