I’ve been seeing an issue where one job is processed by multiple workers - in most instances twice, but I’ve seen up to three or four workers pick up a single job. I can’t seem to reproduce the issue, as it appears to happen very randomly, and the jobs do not fail or reach the process timeout. Our organization uses Hangfire for handling incoming order requests for our solutions, so this behavior causes duplicate orders to be processed, which is not good.
I am on the latest version of Hangfire.Core (1.6.16), and we use Redis for storage. I have gone through almost every thread where others report a similar issue, but have not found any solutions or real answers. Any help is greatly appreciated.
Just an FYI on what I had to do to work around this issue, in case it’s helpful for you and others…
According to odinserj’s comment in this topic, the only real way to prevent duplicate processing is to implement logic that checks whether the job about to be enqueued has already been processed by a worker.
In my case, I added a “processed = true” attribute to the data stored for each processed job. This attribute is checked before each enqueue call for a given job, to prevent multiple workers from processing the same job multiple times.
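In code, the workaround looks roughly like this (just a sketch: `IOrderStore`, `IsProcessed`, and `MarkProcessed` are hypothetical names for whatever persistence layer you keep the flag in, and it assumes a JobActivator that can construct the processor with its dependency):

```csharp
using Hangfire;

// Hypothetical store interface; back it with your own database.
public interface IOrderStore
{
    bool IsProcessed(string orderId);
    void MarkProcessed(string orderId); // sets "processed = true"
}

public class OrderEnqueuer
{
    private readonly IOrderStore _store;

    public OrderEnqueuer(IOrderStore store) { _store = store; }

    public void EnqueueOrder(string orderId)
    {
        // Skip the enqueue entirely if a worker has already handled this order.
        if (_store.IsProcessed(orderId)) return;

        BackgroundJob.Enqueue<OrderProcessor>(p => p.Process(orderId));
    }
}

public class OrderProcessor
{
    private readonly IOrderStore _store;

    public OrderProcessor(IOrderStore store) { _store = store; }

    public void Process(string orderId)
    {
        // Re-check inside the job as well, since duplicate workers can
        // still pick up the same job between the enqueue check and execution.
        if (_store.IsProcessed(orderId)) return;

        // ... handle the order ...

        _store.MarkProcessed(orderId);
    }
}
```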
In my opinion, this kind of safeguard should be built into the Hangfire engine, but it appears this is either not possible to implement or is just a bug that isn’t being addressed. Either way, the logic above has kept the issue from happening again.
I have configured “Always On” for my application, but I do not understand why there are two different instances handling this job, or why two workers handle the same job.
Sorry for my bad English.
Hello,
I have the same problem with SQLite storage.
My app exposes a POST RESTful API that executes the same job, with different arguments, depending on the request.
If I set WorkerCount > 1, the same job (with the same args) is executed in parallel on all the workers…
I would like different jobs (same job class but with different args) to run in parallel, but not the same job…
I also tried to implement a filter as suggested here, but no luck:
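What I tried was roughly the following fingerprint-style filter (a sketch of the approach usually suggested on these forums, not the exact code from the link - the attribute name, key format, and cleanup strategy are my own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using Hangfire.Client;
using Hangfire.Common;
using Hangfire.Server;

public class SkipIdenticalJobsAttribute : JobFilterAttribute, IClientFilter, IServerFilter
{
    public void OnCreating(CreatingContext context)
    {
        var key = GetFingerprintKey(context.Job);

        // If a fingerprint already exists, an identical job is pending or
        // running, so cancel the creation of this one.
        var existing = context.Connection.GetAllEntriesFromHash(key);
        if (existing != null && existing.Count > 0)
        {
            context.Canceled = true;
            return;
        }

        context.Connection.SetRangeInHash(key, new[]
        {
            new KeyValuePair<string, string>(
                "Timestamp", DateTimeOffset.UtcNow.ToString("o"))
        });
    }

    public void OnCreated(CreatedContext context) { }

    public void OnPerforming(PerformingContext context) { }

    public void OnPerformed(PerformedContext context)
    {
        // Release the fingerprint once the job has actually run, so the
        // same method/args combination can be enqueued again later.
        using (var tx = context.Connection.CreateWriteTransaction())
        {
            tx.RemoveHash(GetFingerprintKey(context.BackgroundJob.Job));
            tx.Commit();
        }
    }

    private static string GetFingerprintKey(Job job)
    {
        // Fingerprint = hash of the job's method plus its arguments.
        var payload = job.Type.FullName + "." + job.Method.Name + "/"
                    + string.Join(",", job.Args.Select(a => a?.ToString()));
        using (var sha = SHA256.Create())
        {
            var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(payload));
            return "fingerprint:" + Convert.ToBase64String(hash);
        }
    }
}
```

I registered it globally with GlobalJobFilters.Filters.Add(new SkipIdenticalJobsAttribute()), but the duplicate executions still happen.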
I have the same problem. Hangfire 1.6.20, SQLite storage.
All workers process the same job. If I set the server worker count to 2, the job starts 2 times; if I set it to 30, the job starts 3 times. I explained the problem in more detail here. What can I do?
I am also facing the same issue. I am using SQLite as storage with the default worker count of 20. The recurring job is triggered multiple times. When I reduce the worker count, it gets executed only once. I would like to use multiple workers. Any suggestions on how I can fix this?
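For context, lowering the worker count is done via the standard BackgroundJobServerOptions; this is the workaround I am using for now (the value 1 kills parallelism, which is exactly what I’d like to avoid):

```csharp
using System;
using Hangfire;

class Program
{
    static void Main()
    {
        // Storage (SQLite in my case) is configured beforehand via
        // GlobalConfiguration.Configuration; omitted here.

        var options = new BackgroundJobServerOptions
        {
            WorkerCount = 1 // workaround: duplicate executions disappear at 1
        };

        using (var server = new BackgroundJobServer(options))
        {
            Console.WriteLine("Hangfire server started. Press ENTER to exit.");
            Console.ReadLine();
        }
    }
}
```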
Guys, has anyone solved this problem? The question keeps resurfacing in different topics with no solid answer. Why does using DisableConcurrentExecutionAttribute not help? Is it an issue with the storage implementation? I use Oracle storage and a custom implementation of DisableConcurrentExecutionAttribute, and I can see that the distributed lock is created with the correct name. However, it does not prevent multiple invocations from different threads.
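For reference, this is the standard way Hangfire’s built-in attribute is applied. As far as I understand it, it only takes a distributed lock for the duration of the execution, so it serializes runs rather than deduplicating them - which may be why it doesn’t help here:

```csharp
using Hangfire;

public class OrderJobs
{
    // Built-in Hangfire filter: takes a distributed lock (named after the
    // job's type and method) for up to 60 seconds while the job runs.
    // If two workers pick up the same job, the second still runs after
    // the first releases the lock, so duplicates are delayed, not skipped.
    [DisableConcurrentExecution(timeoutInSeconds: 60)]
    public void ProcessOrder(string orderId)
    {
        // ... actual work ...
    }
}
```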
Same problem with version 1.7.9. As soon as I start the app, two threads fire against the same job.
I did not configure Hangfire beyond the defaults, but everything looks good. I am completely lost.