Trigger-based jobs

Hello,

I’ve spent the afternoon hacking around with Hangfire, and I couldn’t quite get what I wanted. I’m not sure whether I’m using the wrong tool for the job, so here’s my situation:

I’m setting up a data aggregation system. Put simply: I need to pull data from various sources (database, fileshare, websites, …) at variable intervals (some data sources are updated daily, others monthly, but never at a predictable moment). I also need to be able to monitor the jobs and trigger one manually if needed.

Since I obviously needed a scheduler for the task, I considered Quartz.NET and Hangfire, and picked the latter, mostly because of the dashboard. But now I’m hitting dead ends no matter how I design my system.

My first try was to create recurring jobs. For instance, I’d have a job triggered every minute that checks a fileshare and imports the data if a file is found. It looked pretty good, especially since I could manually launch the job from the “Recurring jobs” tab. However, I have a monitoring issue: I need to know the last time the job actually managed to find a file to import. The problem is that there are only two end states available: succeeded and failed. When no file is available, the job starts, scans the fileshare, and exits; that still counts as a successful execution. How do I distinguish those runs from the ones where the job actually found a file?
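For the record, here’s roughly the setup I mean (a sketch; `FileShareImporter` and `CheckAndImport` are made-up names):

```csharp
using Hangfire;

public class FileShareImporter
{
    public void CheckAndImport()
    {
        // Scan the fileshare; import the file if one is there, otherwise just exit.
    }
}

// Registered somewhere at startup: a recurring job that runs every minute.
RecurringJob.AddOrUpdate<FileShareImporter>(
    "fileshare-import",
    x => x.CheckAndImport(),
    Cron.Minutely());
```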

The website indicates that the states subsystem is extensible. I therefore decided to create a new state to distinguish the two cases. That was easily done with a job filter (sketched below). Unfortunately, the “Succeeded” page of the dashboard won’t list my new state (well, I can’t say I’m surprised). So I decided to add a new page to the dashboard for it. Unfortunately, the IMonitoringApi object only has methods that return specific states (like succeeded or failed), not a custom state (even though the underlying implementation in SqlServerMonitoringApi is generic, through the GetJobs method!). Dead end.
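Roughly what I did, in case it matters. This is a sketch, assuming the job method returns a bool saying whether a file was imported; the state and filter names are mine:

```csharp
using System;
using System.Collections.Generic;
using Hangfire.Common;
using Hangfire.States;

// A custom final state for runs that scanned the share but found nothing.
public class NoFileFoundState : IState
{
    public static readonly string StateName = "NoFileFound";

    public string Name => StateName;
    public string Reason => "Scan completed, no file to import.";
    public bool IsFinal => true;
    public bool IgnoreJobLoadException => false;

    public Dictionary<string, string> SerializeData()
        => new Dictionary<string, string>
        {
            { "ScannedAt", DateTime.UtcNow.ToString("o") }
        };
}

// Job filter that swaps Succeeded for NoFileFound when the job
// (assumed to return a bool) reports that it imported nothing.
public class NoFileFoundFilterAttribute : JobFilterAttribute, IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        if (context.CandidateState is SucceededState succeeded &&
            succeeded.Result is bool imported && !imported)
        {
            context.CandidateState = new NoFileFoundState();
        }
    }
}
```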

I’ve also considered splitting the work into two jobs: the first job would monitor the fileshare and launch the second job when a file is found, and the second job would do the actual importing. Unfortunately, there is no way in the dashboard to filter jobs by name, so I end up with the same issue as before: it’s impossible to quickly find the executions of the second job among all the executions of the first.
Note that I could modify the page and filter the results of IMonitoringApi.ScheduledJobs(0, a_very_big_value) myself, but that hardly seems efficient (see the sketch below).
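The split itself is easy enough (a sketch; the share path and class names are made up):

```csharp
using System.IO;
using Hangfire;

public class FileImporter
{
    public void Import(string path) { /* actual import logic */ }
}

public class FileShareScanner
{
    // Recurring job: scan, and hand each file off to a separate import job.
    public void Scan()
    {
        foreach (var path in Directory.EnumerateFiles(@"\\server\incoming"))
        {
            BackgroundJob.Enqueue<FileImporter>(x => x.Import(path));
        }
    }
}
```

And the brute-force filtering I mean would look something like this (sketch, using the succeeded list since that’s presumably where the interesting executions end up; clearly it scans the whole history):

```csharp
using System.Linq;
using Hangfire;

const int aVeryBigValue = 100_000; // the "a_very_big_value" placeholder from above

var monitoring = JobStorage.Current.GetMonitoringApi();

// Pull a huge page and filter client-side by job method name.
var imports = monitoring
    .SucceededJobs(0, aVeryBigValue)
    .Where(kv => kv.Value.Job?.Method.Name == "Import")
    .ToList();
```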

Last but not least, I’ve considered splitting the jobs and putting them in separate queues. Same issue again: there is no way in the dashboard to filter jobs by queue.
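For completeness, the queue split is just this (sketch; the queue name is made up):

```csharp
using Hangfire;

public class FileImporter
{
    // Route import jobs to their own queue so they are at least grouped.
    [Queue("imports")]
    public void Import(string path) { /* actual import logic */ }
}

// The server has to listen on that queue too, e.g.:
// new BackgroundJobServerOptions { Queues = new[] { "imports", "default" } }
```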

This is really frustrating because, execution-wise, Hangfire provides all the features I need and is really easy to use. But on the reporting side I’m stuck, because the filtering methods are missing. The dashboard has many extension points, but no matter which one I try, I end up stuck because the IMonitoringApi object isn’t even remotely generic enough.

Am I missing something obvious?
