How do I prevent creation of duplicate jobs?

I have a task that runs every 10 minutes, which can also be scheduled by users and other interactions with the site.

In all cases, I want to add the new job only if there isn’t already a job with that method signature waiting in the queue… Is that possible with Hangfire, and what do you think is the best way to go about it? I’d prefer not to query the DB directly or maintain my own separate queue on top, if possible.

Thanks in advance,

Take a look at the documentation; I think giving a recurring task an identifier is what you’re looking for.
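For reference, a recurring job registered with an explicit identifier is only ever stored once: calling AddOrUpdate again with the same id updates the existing registration instead of creating a duplicate. A minimal sketch (the `"sync-task"` id and the `ITask1` interface are placeholders, not names from this thread):

```csharp
// Registering with a fixed id means repeated calls update the same
// recurring job rather than creating duplicates.
RecurringJob.AddOrUpdate<ITask1>(
    "sync-task",            // hypothetical identifier
    x => x.Execute(),
    "*/10 * * * *");        // cron expression: every 10 minutes
```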

I asked a similar question, Singleton method based on unique parameter values, but have had no response so far. Maybe it’s not possible to do.


Thanks Rob, that’s great for setting up a recurring task (and this is what I do), but it doesn’t account for when multiple tasks are queued up by external means; in that case, I don’t want to queue the task if a job with the same method signature already exists.

I also found this gist, which seems to work for the most part. It does fail if the method names are too long, but that’s fairly easy to work around.

It also doesn’t seem to play well with ContinueWith jobs, and throws an exception when the continuation is created, so this isn’t really suitable in my case:

Value cannot be null. Parameter name: parentId

System.ArgumentNullException: Value cannot be null.
Parameter name: parentId
   at Hangfire.States.AwaitingState..ctor(String parentId, IState nextState, JobContinuationOptions options, TimeSpan expiration)
   at Hangfire.States.AwaitingState..ctor(String parentId, IState nextState, JobContinuationOptions options)
   at Hangfire.BackgroundJobClientExtensions.ContinueWith[T](IBackgroundJobClient client, String parentId, Expression`1 methodCall, IState nextState, JobContinuationOptions options)
   at Hangfire.BackgroundJobClientExtensions.ContinueWith[T](IBackgroundJobClient client, String parentId, Expression`1 methodCall, IState nextState)
   at Hangfire.BackgroundJobClientExtensions.ContinueWith[T](IBackgroundJobClient client, String parentId, Expression`1 methodCall)
   at Hangfire.BackgroundJob.ContinueWith[T](String parentId, Expression`1 methodCall)

I’m not convinced that that exception is caused by the DisableMultipleQueuedItemsFilter job filter.

The parentId parameter that the AwaitingState constructor is expecting is just passed straight through from the BackgroundJob.ContinueWith call. It looks like you’re just not passing in the ID of the parent when you call it.

Interesting, you’re right, of course. The job filter cancels the job, so Enqueue() returns null instead of an id. The ContinueWith call I run then uses that return value, always expecting a valid id.

var id = BackgroundJob.Enqueue<ITask1>(x => x.Execute());
BackgroundJob.ContinueWith<ITask2>(id, x => x.Execute());

So that’s great: it looks like this filter will work just fine, and I just need to be a little more defensive in my ContinueWith calls.

var id = BackgroundJob.Enqueue<ITask1>(x => x.Execute(collection));
if (id != null) {
    BackgroundJob.ContinueWith<ITask2>(id, x => x.Execute(collection));
}
So thank you for the observation, yngndrw - that’s really helpful!

Happy to help, glad you resolved the issue.

I think the XML documentation on the Enqueue extension method should specify that the returned ID may be null if a job filter cancels the creation of the job; as the documentation stands right now, it makes it look like the job will always be created.

Hi there,

Does this still work in the current version of Hangfire? I’m using Hangfire 1.6.2 at the moment.
I’ve added this code and decorated a method that’s called from hangfire with [DisableMultipleQueuedItemsFilter].
But it doesn’t seem to work.

I’ve set multiple breakpoints in the job filter, but none of them are being hit.
Am I missing something?

Where did you place that attribute? Are you using base classes or interfaces in your background job types? Also, what storage are you using?

I have the same problem as mitchell.amasia. I’m using Hangfire 1.6.17 and Hangfire.PostgreSql storage.
My dashboard is in a separate process in ASP.NET Core, so I decorated my job interfaces with this attribute. I discovered that the IClientFilter.OnCreating (as well as IClientFilter.OnCreated) methods aren’t called, so the fingerprint is never added.

I’ve found what causes the problem. It is caused by this line. I get IBackgroundJobClient using DI. IJobFilterProvider is added to the service collection using only the filter provider with global filters (so there is no JobFilterAttributeFilterProvider when using DI), and BackgroundJobFactory therefore receives only the global filters, not those added by attributes.

Thank you Kevin_Blake, this really helped me out a lot!

I set it up by adding

GlobalJobFilters.Filters.Add(new DisableMultipleQueuedItemsFilter());

before the call to services.AddHangfire(…).
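Put together, the registration order might look like the sketch below. This is an assumption about the poster's setup: the configuration lambda and the storage call are placeholders, and only the ordering (global filter first, then AddHangfire) is the point being illustrated.

```csharp
// Register the filter globally *before* configuring Hangfire, so that jobs
// created through a DI-resolved IBackgroundJobClient still pass through it
// even though attribute-based filters are not picked up via DI.
GlobalJobFilters.Filters.Add(new DisableMultipleQueuedItemsFilter());

services.AddHangfire(config =>
    config.UseSqlServerStorage("<connection-string>")); // placeholder storage
```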

I found that if the job was canceled because of a server shutdown, the OnPerformed() filter was removing the fingerprint even though the Job was being re-queued. This was causing duplicate jobs. To handle this, I modified that method as follows (using Serilog logging syntax):

	public void OnPerformed(PerformedContext filterContext)
	{
		bool isCanceled = filterContext?.CancellationToken?.ShutdownToken.IsCancellationRequested ?? false;
		if (!isCanceled)
		{
			Log.Information("Job `{jobId}` was performed: removing fingerprint `{fingerprint}`", filterContext.BackgroundJob.Id, GetFingerprintKey(filterContext.BackgroundJob.Job));
			RemoveFingerprint(filterContext.Connection, filterContext.BackgroundJob.Job);
		}
		else
		{
			Log.Information("Job `{jobId}` was canceled: fingerprint `{fingerprint}` is preserved", filterContext.BackgroundJob.Id, GetFingerprintKey(filterContext.BackgroundJob.Job));
		}
	}

Hi, I might have misunderstood the issue, but when I had problems with duplicated jobs I prevented the registration itself: I used the job name as an id, and before creating a new job I check whether a job with the same name already exists.

public bool IsJobEnqueued(string jobName, Queue queue)
{
    using (var connection = JobStorage.Current.GetConnection())
    {
        var api = JobStorage.Current.GetMonitoringApi();
        var enqueuedJobCount = api.EnqueuedCount(queue.ToString());

        if (enqueuedJobCount == 0)
            return false;  // no tasks

        var enqueuedJobs = api.EnqueuedJobs(queue.ToString(), 0, (int)enqueuedJobCount).ToList();

        foreach (var job in enqueuedJobs)
        {
            if (job.Value.Job.Args.Contains(jobName))
                return true;
        }
    }

    return false;
}
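A usage sketch under the same assumptions (Queue is the poster's own enum, and the `"sync-orders"` name and `ITask1` interface are hypothetical placeholders):

```csharp
// Only enqueue when no waiting job already carries the same name argument.
if (!IsJobEnqueued("sync-orders", Queue.Default))
{
    BackgroundJob.Enqueue<ITask1>(x => x.Execute("sync-orders"));
}
```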

Has anyone come across duplicate scheduled jobs?
We are facing an intermittent issue which occurs once in a while.
Issue: the same delayed job is inserted into the job table twice.