Hi, I’ve implemented Hangfire in a system with one complex job that runs in parallel 2000 times with different parameters, enqueued as fire-and-forget jobs via the Enqueue method.
Example: jobId = client.Enqueue(() => CalcularLiquidacion.Execute(myParams, null));
-myParams is a Dictionary<string, string> of parameters
-the second parameter is a Hangfire.Server.PerformContext, passed as null so the context is injected at execution time and the job can read its JobId.
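For context, this is a minimal sketch of how the job is wired up; the body of Execute and the surrounding setup are simplified assumptions, only the method name and the Enqueue call come from my real code:

```csharp
using System;
using System.Collections.Generic;
using Hangfire;
using Hangfire.Server;

public class CalcularLiquidacion
{
    // Hangfire fills in the PerformContext at execution time when null is enqueued
    public static void Execute(Dictionary<string, string> myParams, PerformContext context)
    {
        var jobId = context.BackgroundJob.Id;
        // ... actual calculation using myParams (omitted here) ...
    }
}

// Enqueued once per parameter set (fire-and-forget):
var client = new BackgroundJobClient();
var jobId = client.Enqueue(() => CalcularLiquidacion.Execute(myParams, null));
```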
In the development and testing environments everything is OK. I ran tests using 80 worker threads with no problems. These tests were done on 2 physical servers and everything works fine on both.
In the production environment I get parameter values mixed up between different queued jobs. It runs on VMware with 4 cores and 8 GB assigned. If I set 8 worker threads (using the default queue), the server processes jobs 1 through 8 correctly, but job number 9 and the following ones are processed using the parameter values of the first queued job. The same happens if I set 16 or 4 worker threads: with 16, job 17 uses the parameters of the first job, and so on; with 4, job 5 uses the first job's parameters. No errors, no exceptions.
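For reference, this is roughly how I configure the worker count; a minimal sketch assuming a self-hosted BackgroundJobServer, the exact hosting details in production may differ:

```csharp
using System;
using Hangfire;

var options = new BackgroundJobServerOptions
{
    WorkerCount = 8,                  // I have tried 4, 8 and 16
    Queues = new[] { "default" }      // only the default queue is used
};

using (var server = new BackgroundJobServer(options))
{
    // keep the process alive while jobs are being processed
    Console.ReadLine();
}
```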
I’ve been searching for a while but I can’t find anything related to this problem.
Does the server architecture have an impact on the thread pool functionality?
Does VMware present a potential problem for the thread pool?
Honestly, I don’t have any idea what is causing this.
Thanks in advance,
Gaston