Hangfire JobActivator missing Null Constructor

I have a class that runs as a ConsoleApplication background job. The job class has a null (parameterless) constructor, yet Hangfire (Dashboard) complains that the class does not have one.

  • I do note, in the Dashboard error message, that the namespace of the calling class was injected into the background job container. Could this be the issue? Do they need to be in the same namespace, with the error message being a side effect?
  • Note as well that this does run as a static class, but I am trying to get the background job distributed lock working via [DisableConcurrentExecution(timeoutInSeconds: 10 * 60)], so that I can queue multiple jobs yet constrain each server to running jobs sequentially, to avoid config file clashes. The jobs themselves contain deeply nested parallel processing, so each server will be saturated. My intent is to set up a SignalR scaleout backplane across a network of Hangfire servers, each server having a web farm of Windows VMs (running in VMware Fusion) managed by the local Hangfire server.

Up to now (see previous posts) I have muddled my way through the combinations without guidance and gotten it mostly working. I think I’ll need some help on this one…but I’ll keep banging away at it to see if I can find a way to get it working. Any help is greatly appreciated.

Here’s the call setup:

KESConsoleRunner.KvmBackgroundJob backgroundJob = new KESConsoleRunner.KvmBackgroundJob();
backgroundJob.SubmitJob(
    runstring,
    mongoDBConnectionString,
    JobStoreConnectionstring
);

Here’s the backend job class:

namespace KESConsoleRunner
{
    public class KvmBackgroundJob
    {
        public KvmBackgroundJob()
        {
        }

        public void SubmitJob(
            string runstring,
            string mongoDBConnectionString,
            string JobStoreConnectionstring
        )
        {
            Program.ExecuteBackgroundJob(
                runstring,
                mongoDBConnectionString,
                JobStoreConnectionstring
            );
        }
    }
}

Here’s the Dashboard error message:
using KvmTLA.Services;
Job job = Activate();
job.ExecuteBackgroundJob(
    "E:\Transactions\Transaction_Run_20140814_Set-1(RevisedModels)|Transaction_Run_20140814_Set-1(RevisedModels)_KES|Transaction_Run_20140814_Set-1(RevisedModels)_DocType|5445a23c7f9dbf2a447b05ea",
    "10.0.1.43:27017",
    "10.0.1.43:6379");

Failed An exception occurred during job activation.
System.MissingMethodException

No parameterless constructor defined for this object.

System.MissingMethodException: No parameterless constructor defined for this object.
at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck)
at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark)
at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark)
at System.Activator.CreateInstance(Type type, Boolean nonPublic)
at System.Activator.CreateInstance(Type type)
at Hangfire.JobActivator.ActivateJob(Type jobType)
at Hangfire.Common.Job.Activate(JobActivator activator)
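
To see which type the server is actually trying to activate, one thing I can do is plug in a logging JobActivator. A minimal sketch, assuming the JobActivator.Current hook that Hangfire 1.x exposes (the Console output is purely diagnostic):

using System;
using Hangfire;

public class LoggingJobActivator : JobActivator
{
    public override object ActivateJob(Type jobType)
    {
        // If this prints the enqueueing class instead of KvmBackgroundJob,
        // the Enqueue expression captured the wrong method's declaring type.
        Console.WriteLine("Activating job type: " + jobType.FullName);
        return Activator.CreateInstance(jobType);
    }
}

// Wired up once at server startup:
// JobActivator.Current = new LoggingJobActivator();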

I’ll record more information as I learn what’s going on here. Perhaps someone could advise me if I am heading in the wrong direction. Or, hopefully, as I figure this out, it may be helpful to others.

If I enqueue a static class of the background job class to be executed, there is no issue with JobActivator. But if I enqueue a method that calls the static class (or a class instance) of the job to be run, Hangfire takes the class that contains the enqueueing method as the background job to be run. In retrospect, this makes sense, and it explains the exception: the enqueueing class, not the called class, is what gets serialized to the JobStore and then deserialized and instantiated by the background job server. So whatever is enqueued determines the background class saved to the JobStore for the backend job server to execute. Why it failed is beginning to fall into place; the sketch below illustrates the difference.
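
A sketch of the difference (JobSubmitter and SubmitJobWrapper are hypothetical stand-ins for my enqueueing code; the KvmBackgroundJob names are from above):

using Hangfire;
using KESConsoleRunner;

public class JobSubmitter
{
    // Hypothetical local wrapper of the kind I was enqueueing.
    public void SubmitJobWrapper(string runstring, string mongo, string jobStore)
    {
        new KvmBackgroundJob().SubmitJob(runstring, mongo, jobStore);
    }

    public void EnqueueWrong(string runstring, string mongo, string jobStore)
    {
        // Wrong: the expression targets a method on THIS class, so
        // JobSubmitter - not KvmBackgroundJob - is what gets serialized
        // to the JobStore and activated on the server.
        BackgroundJob.Enqueue(() => SubmitJobWrapper(runstring, mongo, jobStore));
    }

    public void EnqueueRight(string runstring, string mongo, string jobStore)
    {
        // Right: the expression targets KvmBackgroundJob itself, so that
        // is the type that gets stored and activated.
        BackgroundJob.Enqueue<KvmBackgroundJob>(
            m => m.SubmitJob(runstring, mongo, jobStore));
    }
}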


This is taking on the posture of a blog. Sorry about that.

I figured it out. The background job is created from the Enqueue method, so that is the line of code that needs to set up the background job class. By enqueueing against an instance of the class, JobActivator still takes place in the BackgroundJob (versus a static class, which just executes in the background without JobActivator). With this finally working the way I wanted, I now need to get sequential processing to work across the SignalR scaleout web farm. If this doesn’t work, I am at a loss on how to make it happen.

Below is the Enqueue code that sets up the background job processing, mediated through the Redis JobStore. The added complexity is that this all takes place through a SignalR implementation, which provides the (light) run stack for the background job, and the background job has a Hub that can publish its events to any registered listeners - not just the calling code. This is important for the implementation.

Call setup code:

BackgroundJob.Enqueue<KESConsoleRunner.KvmBackgroundJob>(m => m.SubmitJob(
    RunString,
    Startup.server.Instance.Address.ToString(), // mongodb
    Startup.useSQLServer
        ? Startup.SQLServerConnectionstring
        : Startup.RedisConnectionString
));

Here’s the job detail from the Dashboard:

using KESConsoleRunner;

KvmBackgroundJob kvmBackgroundJob = Activate();
kvmBackgroundJob.SubmitJob(
    "E:\Transactions\Transaction_Run_20140814_Set-1(RevisedModels)|Transaction_Run_20140814_Set-1(RevisedModels)_KES|Transaction_Run_20140814_Set-1(RevisedModels)_DocType|5445e94e7f9dbf2a14744ad3",
    "10.0.1.43:27017",
    "10.0.1.43:6379");

====

Later: And now I really am stuck, with no combinations left to try. Each transaction runs without issue. But when I stack jobs in the queue for a server, it runs them in parallel rather than sequentially, despite the [DisableConcurrentExecution(timeoutInSeconds: 10 * 60)] attribute shown below.

Can anybody provide some much needed guidance?

BackgroundJob class:

namespace KESConsoleRunner
{
    public class KvmBackgroundJob
    {
        public KvmBackgroundJob()
        {
        }

        [DisableConcurrentExecution(timeoutInSeconds: 10 * 60)]
        public void SubmitJob(
            string runstring,
            string mongoDBConnectionString,
            string JobStoreConnectionstring
        )
        {
            try
            {
                Program.ExecuteBackgroundJob(
                    runstring,
                    mongoDBConnectionString,
                    JobStoreConnectionstring
                );
            }
            catch (Exception)
            {
                // Rethrow without resetting the stack trace.
                throw;
            }
        }
    }
}

I am new to ASP.NET, Hangfire, and SignalR, and have been spending time reading through the documentation. I have a project that is similar to yours and have been very interested in your posts. I am a couple of weeks behind you with regard to debugging the kinds of issues you are having. I very much appreciate your posts, as they provide guidance and resolution for issues that aren’t documented anywhere else. I can see you have been “without guidance” working through your specific architecture issues. I will follow your lead and post issues as I work through them.

Hi marks,

I’m pleased my posts are useful. Sometimes there is value in muddling my way through => I learn the underbelly. And by sharing that learning in these posts, others can learn as well. So that’s good feedback. Tx.

Currently I am trying to get some semblance of sequential work-queue processing to work on the background server, as opposed to my previous hack, which prevented job stacking on the client through the queue-monitoring API. This will allow me to stack jobs in the queue. The intent was that the jobs would control themselves by testing the Processing-state population on startup and requeuing themselves if the Processing count > 0 (roughly as sketched below). But it turns out that when a job is deserialized and started, it is already in the Processing state, so I changed the test to count > 1. Yet when I stack more than one job and the second job launches while a job is already in the Processing queue, the new job does not seem to bump the Processing count to 2. So I shelved that approach, and am now in a deep-dive debugging session of Hangfire itself to see why the distributed lock is not locking.
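
Roughly the pattern I was attempting, as a sketch (the class name and the requeue delay are hypothetical; ProcessingCount() comes from Hangfire's monitoring API):

using System;
using Hangfire;

public class SelfGatingJob
{
    public void SubmitJob(string runstring, string mongo, string jobStore)
    {
        var monitor = JobStorage.Current.GetMonitoringApi();

        // By the time this runs, this job is itself already counted in the
        // Processing state, so "another job is active" means count > 1.
        if (monitor.ProcessingCount() > 1)
        {
            // Step aside and retry later (the delay is arbitrary).
            BackgroundJob.Schedule<SelfGatingJob>(
                m => m.SubmitJob(runstring, mongo, jobStore),
                TimeSpan.FromMinutes(1));
            return;
        }

        // ...do the real work here...
    }
}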

To set the debug session up, I am using the Snippet Highlighting App. When running in .NET 4.5 there are some issues which need to be resolved. I figured them out, so here is that guidance.

  • SQL Server exception: In Visual Studio you need to turn off SqlClient exceptions, due to differences in Entity Framework versions. The automatic EF migration is looking for the field “OnCreate”, which does not exist in EF5. When the SQL error is thrown on a snippet query, click the Edit Exceptions link and uncheck SqlClient exceptions in Visual Studio. This will allow the debug session to move beyond this point.
  • System.Web.Mvc.Ajax: The second issue is that the Index view, when generated, complains about System.Web.Mvc.Ajax missing a reference. But it’s all auto-generated, so there is no way to shim it in. The issue here is that the IIS DefaultAppPool is set to .NET 2.0. This affects IIS Express, which you are probably using in your Visual Studio debug session (I’m using VS2013 Ultimate). Setting the DefaultAppPool to .NET 4.0 will resolve this issue, and the app then works without any other changes.

This is where I am now. I just resolved the above and am about to do a deep-dive debug session into the internals of Hangfire to try to determine why the DistributedLock is not locking. I’ll update with my findings.


Later: I’ve been debugging Hangfire DistributedLock, and it is being set. It gets set for State Changes:

  • Recurring Job Lock: 15 minutes
  • State Lock: 15 Minutes
  • Transaction lock: 10 minutes - which is what I set it to, as shown in the code snippet below (the attribute is the edit I made to debug the DistributedLock).

DistributedLock uses a stored procedure (“sp_getapplock”) which locks code, much like the lock(){…} statement in C#. It does not lock database tables or rows, but uses SQL Server’s underlying machinery to apply application locks to code instead of to tables. In this case, it’s locking the ability to change queue state.

What is needed is to limit each named server (VM) to one job at a time. I’m not sure DistributedLock is the way to do that - it may not provide enough control to let jobs stack and be rescheduled while only one job at a time sits in the Processing state. It could be that by limiting the number of worker threads to one per server, sequential processing can be enabled (see the sketch below). I tried this a week or so ago, but it did not constrain processing. I’ll set up an app where I can stack jobs while debugging Hangfire internals, to see if I can figure out why it does not limit the Processing state to one job (in sequence). If I can’t get that to work this time, a combination of the two may be required. Otherwise, it may require the ability to inject declarative constraints into Hangfire (enum ProcessingQueueType.[Sequential,Concurrent])…but that would mean adding code to Hangfire internals, which I would rather avoid if possible. Can anyone provide some guidance on this?
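
For reference, here is the worker-limiting configuration I mean - a minimal sketch of a self-hosted console server, assuming the JobStorage (SQL Server or Redis) has already been configured elsewhere:

using System;
using Hangfire;

class ServerHost
{
    static void Main()
    {
        // JobStorage.Current must be configured before the server starts
        // (SQL Server or Redis, as elsewhere in this thread).
        var options = new BackgroundJobServerOptions
        {
            WorkerCount = 1,              // one worker => one job at a time on this server
            Queues = new[] { "default" }  // the queues this server listens on
        };

        using (new BackgroundJobServer(options))
        {
            Console.WriteLine("Hangfire server running with a single worker. Press any key to exit.");
            Console.ReadKey();
        }
    }
}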

[HttpPost, ActionName("Create")]
[MultipleButton(Name = "method", Argument = "BackgroundJob")]
public ActionResult CreateBackgroundJob(Snippet snippet)
{
    if (ModelState.IsValid)
    {
        snippet.CreatedAt = DateTime.UtcNow;
        _db.Snippets.Add(snippet);
        _db.SaveChanges();

        BackgroundJob.Enqueue(() => HighlightSnippet(snippet.Id));

        return RedirectToAction("Snippet", new { id = snippet.Id });
    }
    return View("Create", snippet);
}

[DisableConcurrentExecution(timeoutInSeconds: 10 * 60)]
public static void HighlightSnippet(int snippetId)
{
    using (var context = new HighlighterDbContext())
    {
        var snippet = context.Snippets.Find(snippetId);

        snippet.HighlightedSource = HighlightSource(snippet.Source);
        snippet.HighlightedAt = DateTime.UtcNow;

        context.SaveChanges();

        var hubContext = GlobalHost.ConnectionManager.GetHubContext<SnippetHub>();
        hubContext.Clients.Group(SnippetHub.GetGroup(snippet.Id))
            .highlight(snippet.Id, snippet.HighlightedSource);
    }
}