Multiple workers processing one Job (Hangfire 1.6.16)


#1

I’ve been seeing an issue where one job is processed by multiple workers: in most cases twice, but I’ve seen up to three or four workers pick up a single job. I can’t reproduce the issue, as it appears to happen randomly, and the jobs do not fail or hit the processing timeout. Our organization uses Hangfire to handle incoming order requests for our solutions, so this behavior is causing duplicate orders to be processed, which is not good.

I am on the latest version of Hangfire.Core (1.6.16) and we use Redis for storage. I have gone through almost every thread where others report a similar issue, but have not found any solutions or real answers. Any help is greatly appreciated.


#2

Anyone?? Am I the only one currently experiencing this issue?


#3

I have the same issue with a recurring job and SQL Server storage.
Applying the DisableConcurrentExecution attribute does not help.
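
For reference, this is roughly how I applied it (a sketch; `OrderJobs`/`ProcessOrders` are just placeholder names for my recurring job). As far as I can tell, the attribute only takes a distributed lock around execution, so a duplicated job still runs twice, just one after the other rather than concurrently:

```csharp
using Hangfire;

public class OrderJobs
{
    // Serializes execution of this method across workers, but does not
    // prevent the same job from being created or fetched more than once.
    [DisableConcurrentExecution(timeoutInSeconds: 60)]
    public void ProcessOrders()
    {
        // job body
    }
}
```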


#4

Just an FYI on what I had to do to work around this issue, which may be helpful for you and others…

According to odinserj’s comment in this topic, the only real way to prevent duplicate processing is to implement logic that checks that the job about to be enqueued has not already been processed by a worker.

For my case, I added a “processed = true” flag to the data stored for each processed job. This flag is checked before every enqueue call for a given job, to prevent multiple workers from processing the same job more than once.
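
Roughly, the guard looks like this (a sketch only: `IOrderStore`, `Order.Processed`, and `MarkProcessed` are hypothetical names for whatever persistence your app already uses):

```csharp
using Hangfire;

public class OrderProcessor
{
    private readonly IOrderStore _store;

    public OrderProcessor(IOrderStore store)
    {
        _store = store;
    }

    public void EnqueueOrder(string orderId)
    {
        var order = _store.Find(orderId);

        // Skip the enqueue entirely if this order was already handled.
        if (order == null || order.Processed)
            return;

        BackgroundJob.Enqueue<OrderProcessor>(p => p.Process(orderId));
    }

    // Marking the order as the last step of the job means a later duplicate
    // enqueue attempt gets caught by the check above.
    public void Process(string orderId)
    {
        // ... handle the order ...
        _store.MarkProcessed(orderId);
    }
}
```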

In my opinion, this kind of safeguard should be built into the Hangfire engine, but it appears this is either not possible to implement or just a bug that isn’t being addressed. Either way, the logic above stopped the issue from happening again.


#5

Any update?
I have the same issue here.

I have configured “Always On” for my application, but I do not understand why two different instances are handling this job, or why two workers handle the same job.
Sorry for my bad English.


#6

Hello,
I have the same problem with SQLite storage.

My app exposes a POST REST API that executes the same job, with different arguments, depending on the request.
If I set WorkerCount > 1, the same job (with the same args) is executed in parallel on all the workers…

I would like different jobs (the same job class but with different args) to run in parallel, but not the same job…

I also tried to implement a filter as suggested here but no luck:
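
The skeleton of that kind of client filter looks roughly like this (a sketch; `IsAlreadyQueued` is a placeholder for the fingerprint check against storage, while `JobFilterAttribute`, `IClientFilter`, and `CreatingContext.Canceled` are from Hangfire itself):

```csharp
using Hangfire.Client;
using Hangfire.Common;

public class SkipDuplicateJobAttribute : JobFilterAttribute, IClientFilter
{
    public void OnCreating(CreatingContext context)
    {
        // Cancel creation if an identical job is already waiting.
        if (IsAlreadyQueued(context.Job))
            context.Canceled = true;   // the job is never created
    }

    public void OnCreated(CreatedContext context)
    {
    }

    private static bool IsAlreadyQueued(Job job)
    {
        // Placeholder: compare the job's type, method, and serialized
        // arguments against jobs already enqueued in storage.
        return false;
    }
}
```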

Any ideas? How can we tackle this kind of problem?


#7

I have the same problem. Hangfire 1.6.20, SQLite storage.
All workers process the same job. If I set the server worker count to 2, the job starts 2 times; if I set it to 30, the job starts 3 times. I explained the problem in more detail here. What can I do?