Single Dashboard, Single Physical Server, Multiple Jobs/Projects/Codebases. Advice or Examples?

Hi,

I’m testing out Hangfire at our company, and would appreciate some advice on the easiest way to structure things. What we want is a single dashboard to manage all our jobs. Everything would be running on the same physical server, and our jobs would all be in separate codebases/projects. I’ve read a bunch of documentation and discussion, but haven’t really found a reliable, up-to-date solution. Or at least not one that I could understand :wink:

What’s the easiest way to get a single-dashboard solution?

  • A single web (or whatever) project acting as Hangfire client + server instance, that references multiple jobs compiled into their own DLLs? (Jobs don’t know about Hangfire)
  • One Hangfire dashboard project, and each job project being its own Hangfire server instance? (Jobs do know about Hangfire)
    • Would jobs be enqueued from the project running the dashboard, or could each job project also act as a Hangfire client and enqueue itself as long as they’re using the same Hangfire DB?
    • Are job queues relevant here?
  • Both approaches require common assemblies with Interfaces for job methods, right?
    • Do I need to use IoC containers too?

Any advice is appreciated. Example projects would be even better, especially if it includes how to set things up in Visual Studio. For reference I’m currently using a basic ASP.NET project for client, dashboard, and server using OWIN extension methods, which references a separate VS project for my job.


Bump. Anyone have any experience with this?

Sorry, I can’t share an example project right now, but I think it won’t be hard to understand:

Basically you’ll have three roles:

  • Clients: the applications that enqueue/schedule jobs.
  • Servers: the applications that execute those jobs.
  • Dashboard: the place where you see what’s happening :wink:

The servers must know about the jobs, as they will be executing them.
The clients must know at least the interface of the job (Dependency Inversion Principle).
The dashboard just needs to be connected to the Hangfire database.

All three must use the same Hangfire database.

If you follow the Dependency Inversion Principle for your jobs, enqueuing/scheduling them by interface, your Hangfire servers will need an IoC container to resolve those dependencies.

Since all servers share the same Hangfire database, you’ll need to be very restrictive about which queues each server processes: servers fetch jobs from the database based on their queues, and a server must have the implementation of any job it fetches, or that job can never run.

So, you could have an ISendEmailJobs interface, marked to run on the email queue:

[Queue("email")]
public interface ISendEmailJobs {
    void EmailType1();
    void EmailType2(string subject);
}
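With an interface like this, a client application that references only the interface assembly can enqueue work without ever seeing the implementation. A minimal sketch (the subject string is made up):

```csharp
// Client side: Hangfire serializes the interface method call into the
// shared database; whichever server listens on the "email" queue and
// has the implementation will execute it.
BackgroundJob.Enqueue<ISendEmailJobs>(jobs => jobs.EmailType2("Welcome aboard!"));
```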

In the application that will actually send the emails, the Hangfire server must process the email queue :slight_smile:

public static IApplicationBuilder UseHangfire(this IApplicationBuilder app)
{
    app.UseHangfireServer(new BackgroundJobServerOptions()
    {
        Queues = new string[] { "email", "otherqueue" }
    });
    return app;
}

And the servers that won’t send email (and probably don’t have/know those job implementations) must not listen on the email queue.

The Dashboard could run in any of those applications, in all of them, or even in a separate one.
It’s up to you.
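A dashboard-only application can be a minimal sketch like this (assuming the OWIN extension methods mentioned earlier and a connection string named "DefaultConnection"):

```csharp
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // The dashboard only needs storage access: it reads job state
        // straight from the shared Hangfire database.
        GlobalConfiguration.Configuration.UseSqlServerStorage("DefaultConnection");

        // No UseHangfireServer call here, so this app never executes jobs.
        app.UseHangfireDashboard();
    }
}
```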

Hope that helps.


Thanks for the great answer, Lucas!

In my case, the scheduling is very simple, and I’m not doing any dependency injection. So basically, the easiest way to do it is

  • One application running the Dashboard
  • One application per job, acting as both client and server, with its own queue. Optionally, one of these applications could run the dashboard.

Yes?

The dashboard just need to be connected to the Hangfire database.

I was under the impression that, in order to manually trigger or retry jobs via the Dashboard, the dashboard application needed to know about the job somehow (like the job’s interface). Is that not the case?

The Dashboard could run on any of those applications, or in all, or even in another one.

In all of them? Wouldn’t that result in each job having its own dashboard, instead of all jobs sharing a single dashboard?

I’ll try it out and report back with my results. Thanks again for the help. :slight_smile:

Edit: Yeah, I have one application acting as Dashboard+Client+Server for job A in queue_a, and another application acting as Client+Server for job B in queue_b. Job B executes fine, using the right server, but the dashboard shows “Cannot find target method” instead of the job’s name, doesn’t show its arguments, and can’t manually requeue the job. Looks like the application running the dashboard needs to know about every job.

I’d guess this should be accomplished by putting each job’s interface in an assembly referenced by the dashboard project, and then I think I need to use IoC containers, don’t I? That seems more complicated than I’d like. At that point would it be easier to have every job compiled into a DLL, and have a single application acting as Dashboard+Client+Server that references them all?


Alright, I think I’ve gotten everything figured out. Here’s my “shared dashboard for dummies: visual studio edition” guide. It assumes you’ve already got a separate project running the Hangfire dashboard (and optionally, serving its own jobs on its own queue).

  1. Make a new Visual Studio ASP.NET project with the blank template.

  2. Install the NuGet packages Hangfire and Hangfire.Autofac.

  3. Add a new C# class using the template “OWIN Startup Class”, which we usually name Startup.cs. In the Configuration method, add the lines:

         public void Configuration(IAppBuilder app)
         {
             //Build the Autofac IoC container
             var builder = new ContainerBuilder();
             //Link our job's interface to its implementation
             builder.RegisterType<Job>().As<IExampleHangfireJob>().InstancePerBackgroundJob();
             var container = builder.Build();
             GlobalConfiguration.Configuration
                 .UseSqlServerStorage("DefaultConnection") //The connection string for our Hangfire database, defined in web.config
                 .UseAutofacActivator(container);
             //This server only processes jobs from this queue
             app.UseHangfireServer(new BackgroundJobServerOptions() { Queues = new string[] { "example_queue" } });
             //Autofac will instantiate the class implementing this interface. The dashboard knows about the interface and can thus display the job properly.
             BackgroundJob.Enqueue<IExampleHangfireJob>(x => x.DoSomething());
         }

  4. Write the interface defining the job. Make this a separate VS project so you can easily build it as a DLL. This project also needs the Hangfire NuGet package (because of the Queue attribute). Just create a standard class library with one file, IExampleHangfireJob.cs, containing the following:

         public interface IExampleHangfireJob
         {
             [Queue("example_queue")] //Specifies that the job only runs on this server's queue. The Queue attribute must be on the interface, not the implementation.
             void DoSomething();
         }

  5. Now write the actual implementation of the job. This can live in the same Visual Studio project that contains Startup.cs. Create a Job class implementing IExampleHangfireJob, and give it a public DoSomething() method in which the job actually performs its tasks.

     • The DoSomething() method can accept arguments, which will be serialized to the database. Keep in mind that if these arguments are custom types/classes, the dashboard will also need to know about them!
     • Important: Queue names can only contain lowercase letters, numbers, and underscores!
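The implementation class from the last step could be as simple as this sketch (the body is just a placeholder for real work):

```csharp
public class Job : IExampleHangfireJob
{
    // The [Queue] attribute lives on the interface method, so the
    // implementation only has to do the actual work.
    public void DoSomething()
    {
        Console.WriteLine("Running the example job on example_queue");
    }
}
```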

Build the interface dll and add it as a reference to the dashboard project. You could also make the interface a shared assembly and add it to the Global Assembly Cache on the machine running the dashboard, if you don’t want to recompile the dashboard project.

I think another valid, and possibly easier, approach would be to have a single web application acting as dashboard, client, and server, then compile all your jobs as assemblies referenced by said project. You wouldn’t need to bother with interfaces, queues, or IoC containers, but you would also be forced to run all your jobs from the same physical machine and you’d probably be recompiling the web app frequently.


Yes, I see you’ve already figured it out, hehe.
Well, in case someone hasn’t…

The application running the dashboard needs to know at least your job’s interface to display it correctly. So putting all the job interfaces in one assembly is the best way to achieve this, since it’s much easier to reference that one assembly in all the other applications involved. And using your jobs through an interface (for enqueuing/scheduling/etc.) will lead you to needing an IoC container on the job server, so it can get an instance of your job class to call its methods.

Reading this thread was very helpful in finding out how to setup a good Hangfire solution. I love the idea of setting it up so each server has a specific queue and the library references to run exactly those jobs. It just makes it all neat and nicely scale-able.

HOWEVER!!!
There is a huge issue in Hangfire (v1.7.11) that I was unlucky enough to encounter when using this approach. If you use recurring jobs, scheduled jobs, or re-enqueued jobs with this solution, you will run into weird FileNotFoundException: Could not load file or assembly errors.

The issue is #595 on the GitHub project, and there are a number of duplicates and related issues, so it can be hard to get exact information or a simple solution.

Simply put, Hangfire (v1.7.11) has a big flaw in its handling of recurring/scheduled jobs: it doesn’t check whether a server is set to process the job’s queue before assigning the job to it. This means there is a chance your perfectly separated jobs get handled by the wrong server and promptly fail, since that server doesn’t have the needed library references.

The workaround would be to give all your Hangfire servers references to all your libraries. This sadly ruins most of what was nice about this whole solution.


It is not specific to v1.7.11; it has always been like this, and it seems it always will be. The best possible solution is to use a different schema for each Hangfire server, if you don’t mind having 11 database tables per Hangfire server instance in your one database.
I’m still searching for a way to point the Hangfire dashboard at a specific schema only, for a database that has multiple schemas.

You can’t have different codebases sharing the same Hangfire storage, even if they run different queues, unless you also use a specific storage schema for each codebase.

You can set the SQL Storage Schema with the following code:

GlobalConfiguration.Configuration
	.UseSqlServerStorage(hangfireConnectionString, new SqlServerStorageOptions
	{
		SchemaName = "MySchema",
	});

But using another schema is essentially the same thing as using a separate database, so it doesn’t solve anything.

The basic truth is that Hangfire has some design flaws in its handling of multiple queues. This is most noticeable with recurring or scheduled jobs, since they seem to ignore queues completely. Until those issues are fixed, there is nothing to be done except use a separate database or schema for each codebase.

Hi, do you have a solution for this problem?

@mugwhump already posted their solution: Aug 24, 2017 9:50 pm and Aug 29, 2017 1:16 am

However, as mentioned later in the thread, you can’t do this with recurring jobs or scheduled jobs.
See: May 18, 2020 1:20 pm

Please read the full thread for context. But if you want a TL;DR, I would simply say: don’t use Hangfire at all until these four-year-old major bugs are fixed.


I do use it in that scenario. For years.
You’ll just need a filter that re-enqueues your scheduled and recurring jobs into the right queue.

I’m on mobile right now, so I can’t post an example here, but there are some on the forum. Searching for “re-enqueue”, “queues”, and “filter” should lead you to one.

When I get back home I can put one here…

For recurring jobs, you can specify the queue name in RecurringJob.AddOrUpdate(); it’s the last argument and defaults to the 'default' queue.
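For example, against the Hangfire 1.7 signature (the recurring-job id, schedule, and interface name here are made up):

```csharp
// The final argument pins this recurring job to the "email" queue
// instead of the "default" one, so only servers listening on "email"
// will ever pick it up.
RecurringJob.AddOrUpdate<ISendEmailJobs>(
    "send-email-type-1",       // recurring job id (arbitrary)
    jobs => jobs.EmailType1(),
    Cron.Daily,
    TimeZoneInfo.Utc,
    "email");
```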

A StateElection filter example that puts the machine name in the queue name when not in a production environment:

public sealed class IsolateEnvQueuesFilter : JobFilterAttribute, IElectStateFilter
{
    private readonly IHostingEnvironment env;
    public IsolateEnvQueuesFilter(IHostingEnvironment env){ this.env = env; }

    public void OnStateElection(ElectStateContext context)
    {
        if (context.CandidateState is EnqueuedState enqueuedState)
        {
            var machineName = Environment.MachineName.ToLowerInvariant();
            if (!env.IsProduction() && !enqueuedState.Queue.StartsWith("_"))
            {
                enqueuedState.Queue = $"_{machineName}_{enqueuedState.Queue}";
            }
        }
    }
}
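For the filter to take effect it has to be registered globally at startup, something like this sketch (how you obtain the IHostingEnvironment instance depends on your hosting setup):

```csharp
// Register once so the filter runs during state election for every job.
GlobalJobFilters.Filters.Add(new IsolateEnvQueuesFilter(env));
```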

Another read: Jobs being re-queued in the wrong queue

Just want to reiterate that this is the only option for recurring and scheduled jobs. I’ve tried all the other suggestions in this thread and across the internet (including this one), and unless all servers are able to run all jobs, you will hit the FileNotFoundException: Could not load file or assembly error once some of your recurring/scheduled jobs fail.

It looks like version 1.8 will include a workaround/fix for this issue. But for now you will need separate databases or schemas to isolate recurring/scheduled tasks.
