I’m trying to gauge the performance and throughput of my work jobs, based on the numbers available in the Dashboard.
I created a server (currently running within the ASP.NET process) with 8 workers, and ran a main job which triggered 8 sub-jobs (all doing a common task). Stats for the 8 successful jobs (first at the bottom and last at the top) are below:
I noticed that the ‘total duration’ increments from the first job to the last. I wonder why that is, because I was expecting the 8 jobs to run in parallel (one per worker) and have nearly the same duration (as they all perform a similar task).
Secondly, on each job’s page (screenshot below), what exactly do ‘latency’, ‘duration’, and the other time values mean:
Latency is the amount of time between the job being created and the start of the processing of the job:
I assume when you’re talking about sub-jobs, you’re referring to the Hangfire.Pro batch functionality? If so, then I’m afraid I can’t answer what the total duration is, but I’d guess that it’s similar to the latency.
Ah okay, in that case the total duration is the latency + the processing time as you expected, in other words the difference between when the job was created and when it was completed:
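To make that relationship concrete, here’s a small worked example (the timestamps are purely illustrative, not taken from the screenshots above):

```csharp
// total duration = latency + processing duration
var created   = new DateTime(2016, 1, 1, 12, 0, 0);
var started   = created.AddSeconds(5);   // picked up by a worker 5 s later
var completed = started.AddSeconds(2);   // processing took 2 s

var latency       = started - created;     // 5 s: created -> started
var duration      = completed - started;   // 2 s: started -> completed
var totalDuration = completed - created;   // 7 s = latency + duration
```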
I just tried to re-create this (using the Console sample in the dev branch) with the following code, but the jobs all run as expected: latency of about 100 ms, duration of about 2 s, total duration of about 2 s, and console output written in parallel.
Thanks for your help @yngndrw! Here’s the basic code (parts removed for simplicity):
public static void Main()
{
    RecurringJob.AddOrUpdate("UpdateFiles", () => UpdateFiles(), Cron.Hourly);
}

public static void UpdateFiles()
{
    // get all file records from database
    var files = GetFilesFromDatabase();
    foreach (var f in files)
    {
        BackgroundJob.Enqueue(() => DownloadFile(f.Id, f.URL));
    }
}

public static void DownloadFile(int Id, string URL)
{
    // download URL from Intranet and save contents locally as text file
    File.WriteAllText(Id.ToString() + ".txt", new WebClient().DownloadString(URL));
    // update file record in database
}
The incremental durations (as in my first screenshot above) are for the DownloadFile() method. Each job is run by a separate worker, so I’m not sure why the duration keeps increasing even though the jobs run in parallel and the workload is the same (all URLs are intranet-based, so there are no network latency differences). Could something be blocking the workers from finishing each job, causing the incremental durations?
Sorry I forgot the parent job in my test. I’ve just tried this with Thread.Sleep in place of the database and file calls and it worked as expected. (With both a background job and a recurring job as the parent.) I would expect that either the database calls, the file writing or the web client downloads are causing some locking as you suggested. I’d suggest that you try commenting each part out and see if any particular combination is responsible for the behaviour you’re seeing.
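One way to follow that suggestion without fully commenting parts out is to time each stage of DownloadFile separately, so the dashboard’s single duration figure is broken into its components. This is only a sketch of the method above; the output format and the 20-connection figure are arbitrary choices, not anything Hangfire prescribes:

```csharp
public static void DownloadFile(int Id, string URL)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    string contents = new WebClient().DownloadString(URL);
    Console.WriteLine("Job {0}: download took {1} ms", Id, sw.ElapsedMilliseconds);

    sw.Restart();
    File.WriteAllText(Id.ToString() + ".txt", contents);
    Console.WriteLine("Job {0}: file write took {1} ms", Id, sw.ElapsedMilliseconds);

    // the database update would be timed the same way
}
```

One thing that may be worth checking if the download stage turns out to be the culprit: .NET throttles concurrent HTTP connections per host via ServicePointManager.DefaultConnectionLimit (the default is as low as 2 in console apps). Since all the URLs point at the same intranet host, a low limit would queue the downloads behind each other and produce exactly this staircase of increasing durations. Raising it at startup (e.g. `ServicePointManager.DefaultConnectionLimit = 20;`) would rule that in or out.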