Implementing as a Generic Job Scheduler

For those of you who use Hangfire to build a “generic” job scheduling solution, how do you architect the “jobs” it runs?


  1. Jobs are classes defined in the Hangfire server you’ve built; you enqueue those classes.
  2. Jobs are scripts (PowerShell or similar) that Hangfire executes.
  3. Jobs are API calls to REST services; the code for the actual “job” lives elsewhere.

Yes, we do this. Jobs are classes that implement an interface with a Run method. Each job takes just an id as a parameter, and the first thing the job does is load its real parameters from our own job table using the job id passed in.
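A minimal sketch of that pattern, with every name here (`IJob`, `SendReportJob`, `JobParameterStore`) invented for illustration — the real job table would be a database, stood in for by an in-memory dictionary:

```csharp
using System;
using System.Collections.Generic;

// Common interface every job implements (name is an assumption).
public interface IJob
{
    void Run(int jobId);
}

// Stand-in for the team's own job table; the real thing would be a
// database table keyed by job id.
public static class JobParameterStore
{
    private static readonly Dictionary<int, string> Table = new()
    {
        [42] = "{\"reportName\":\"monthly\"}"
    };

    public static string Load(int jobId) => Table[jobId];
}

public class SendReportJob : IJob
{
    public void Run(int jobId)
    {
        // First thing the job does: load its real parameters by id.
        var parameters = JobParameterStore.Load(jobId);
        Console.WriteLine($"Job {jobId} running with {parameters}");
    }
}
```

The advantage of the id-only signature is that Hangfire only ever has to serialise a single integer; the full (and possibly evolving) parameter payload stays in your own table.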

Hangfire invokes the method on the required schedule via reflection; you can see your assembly-qualified class names and serialised parameters in the Hangfire tables.
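For context, the scheduling side looks like this using Hangfire’s standard API (`BackgroundJob.Enqueue`, `RecurringJob.AddOrUpdate`, `Cron.Daily` are real Hangfire calls; the job class and id are hypothetical). It’s the expression passed here that Hangfire stores as an assembly-qualified type name plus serialised argument, then replays via reflection:

```csharp
using Hangfire;

public static class Scheduler
{
    public static void Register()
    {
        // Fire-and-forget: the Hangfire job row records the type name of
        // SendReportJob and the serialised argument 42.
        BackgroundJob.Enqueue<SendReportJob>(job => job.Run(42));

        // Recurring: the server invokes Run(42) once a day.
        RecurringJob.AddOrUpdate<SendReportJob>(
            "send-report-42",
            job => job.Run(42),
            Cron.Daily());
    }
}
```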

It all works really well and really consistently.

Thanks oneillci.

How does the Hangfire server application gain access to these classes? Are they compiled into the same project (so every time you have a new job you need to redeploy the entire application)? Or something else?

I also had the same question. Any thoughts, Archivist? What is the final verdict?

No verdict so far. My guess is that we’ll have to either put the DLLs in the GAC (Global Assembly Cache) or load them dynamically somehow!
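The “load them dynamically” option can be sketched with plain .NET reflection — resolve a job type by name at runtime and invoke its `Run` method. In a real plug-in setup you would call `Assembly.LoadFrom("path/to/Jobs.dll")` on an assembly dropped next to the server; the type name, path, and `HelloJob` class below are all illustrative, and this sketch reflects over the executing assembly so it stands alone:

```csharp
using System;
using System.Reflection;

// Example job type that would normally live in a separately deployed DLL.
public class HelloJob
{
    public void Run(int jobId) => Console.WriteLine($"Hello from job {jobId}");
}

public static class DynamicRunner
{
    public static void RunByName(string typeName, int jobId)
    {
        // Real plug-in loading would be, e.g.:
        //   var asm = Assembly.LoadFrom(@"C:\jobs\MyJobs.dll");  // hypothetical path
        var asm = Assembly.GetExecutingAssembly();

        var type = asm.GetType(typeName)
                   ?? throw new InvalidOperationException($"Type {typeName} not found");

        // Instantiate the job and invoke Run(jobId) via reflection.
        var instance = Activator.CreateInstance(type);
        type.GetMethod("Run")!.Invoke(instance, new object[] { jobId });
    }
}
```

Note this only gets the jobs *invoked*; versioning, unloading, and dependency resolution are the hard parts of any dynamic-loading design, and the GAC route has its own deployment friction.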