DisableConcurrentExecution does not appear to work

Hello, I am unable to get the DisableConcurrentExecution attribute to work as I expect. I have defined a simple job like so:

public class NotificationController
{
    private readonly NotificationSender _notificationSender;

    public NotificationController(NotificationSender notificationSender)
    {
        _notificationSender = notificationSender;
    }

    [DisableConcurrentExecution(600)]
    public void SendNotifications()
    {
        _notificationSender.SendNotificationsDue();
    }
}

My method is invoked both through the front-end like so:

BackgroundJob.Enqueue<NotificationController>(controller => controller.SendNotifications());

And also registered as a recurring task like so:

RecurringJob.AddOrUpdate<NotificationController>("NotificationSender", controller => controller.SendNotifications(), Cron.Minutely);

Unfortunately, if the job takes longer than a minute (which sometimes happens), another SendNotifications job is automatically added to the queue and processed immediately. After several minutes there are many SendNotifications invocations running concurrently, which can lead to race conditions where duplicate notifications are sent.

I have also tried adding the attribute to the controller class itself, but that had no effect. This isn't limited to the recurring job either: I added a button to send notifications manually, and if I click it several times, Hangfire also runs many concurrent instances of SendNotifications.

For now I'm going to hack around it with a static flag that makes the method return immediately if it is already set. Fortunately we've only got one Hangfire node to start with, but we will soon have several, and the hack won't work across nodes.
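Concretely, the hack I have in mind is just a sketch along these lines (a process-local Interlocked flag; it only guards a single node and is no substitute for a distributed lock):

using System.Threading;

public class NotificationController
{
    // 0 = idle, 1 = running; static and per-process, so this only prevents overlap on a single node.
    private static int _isRunning;

    private readonly NotificationSender _notificationSender;

    public NotificationController(NotificationSender notificationSender)
    {
        _notificationSender = notificationSender;
    }

    public void SendNotifications()
    {
        // Bail out immediately if another invocation is already in flight.
        if (Interlocked.CompareExchange(ref _isRunning, 1, 0) != 0)
            return;

        try
        {
            _notificationSender.SendNotificationsDue();
        }
        finally
        {
            Interlocked.Exchange(ref _isRunning, 0);
        }
    }
}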

I've tried investigating the source code to determine whether there is a bug; I can see that a lock is being inserted, but I can't see what checks for that lock. Before I debug any further, though, I'd like to make sure that I haven't done anything wrong.

My system specs:

  • Hangfire 1.0.2
  • Hangfire SQLServer 1.0.2
  • Hangfire Ninject 1.0.0
  • Hosted within a console application

Aside from that, everything has been great with Hangfire; I was able to convert a legacy WCF MSMQ queue service app in about a day and a bit. Thanks for the top framework!

Hello, @steve. I'm writing you after three months, but better late than never :smile:

The DisableConcurrentExecutionAttribute is working fine, but there is a misunderstanding I want to clear up. This class is a server filter that works during the job processing stage, so the dashboard shows that the background jobs are being processed. But at any given moment only one of your background job methods is actually executing; the other executions are waiting for a distributed lock. Consider the following sample:

using System;
using System.Threading;
using Hangfire;
using Owin;

namespace ConsoleApplication5
{
    public class Startup1
    {
        public void Configuration(IAppBuilder app)
        {
            app.UseWelcomePage("/");
            app.UseHangfire(config =>
            {
                config.UseSqlServerStorage(@"server=.\sqlexpress;database=Hangfire.Mailer;Integrated Security=SSPI;");
                config.UseServer();
            });

            // Enqueue ten background jobs that all target the same method.
            for (int i = 0; i < 10; i++)
            {
                BackgroundJob.Enqueue(() => Singleton());
            }
        }

        // Other executions wait up to 1000 seconds for the distributed lock before timing out.
        [DisableConcurrentExecution(1000)]
        public void Singleton()
        {
            var guid = Guid.NewGuid().ToString().Substring(0, 6);

            Console.WriteLine("{0} Started...", guid);
            Thread.Sleep(5000);
            Console.WriteLine("{0} Stopped.", guid);
        }
    }
}

After starting it, the dashboard shows that the background jobs are being processed at the same time.

But when we look at the console output, it is clear that the background job methods are executed sequentially.
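The output looks roughly like this (the GUID prefixes are illustrative; the point is that each job finishes before the next one starts):

1f3a9c Started...
1f3a9c Stopped.
7b2e40 Started...
7b2e40 Stopped.
c91d55 Started...
c91d55 Stopped.
...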

What happens if a job takes longer than the timeout defined in DisableConcurrentExecution?

I tested the following code:

using System;
using System.Web.Mvc;
using Hangfire;

public class HomeController : Controller
{
    public string Run()
    {
        // Enqueue two jobs that both target the same method guarded by DisableConcurrentExecution.
        for (int i = 0; i < 2; i++)
            BackgroundJob.Enqueue(() => Services.Service1.Task());
        return "ok";
    }
}

namespace Services
{
    public class Service1
    {
        [DisableConcurrentExecution(timeoutInSeconds: 5)]
        [AutomaticRetry(Attempts = 0)]
        public static void Task()
        {
            string guid = Guid.NewGuid().ToString();
            string contents = guid + " started\n";

            System.IO.File.AppendAllText("D:\\logs\\file1", contents);
            System.Threading.Thread.Sleep(1000 * 10);
            System.IO.File.AppendAllText("D:\\logs\\file1", guid + " finished\n");
        }
    }
}

The second job, of course, fails with an exception:

Hangfire.SqlServer.SqlServerDistributedLockException

Could not place a lock on the resource 'HangFire:Service1.Task': The lock request timed out.

Hangfire.SqlServer.SqlServerDistributedLockException: Could not place a lock on the resource 'HangFire:Service1.Task': The lock request timed out.
   at Hangfire.SqlServer.SqlServerDistributedLock..ctor(String resource, TimeSpan timeout, IDbConnection connection)
   at Hangfire.SqlServer.SqlServerConnection.AcquireDistributedLock(String resource, TimeSpan timeout)
   at Hangfire.DisableConcurrentExecutionAttribute.OnPerforming(PerformingContext filterContext)
   at Hangfire.Server.DefaultJobPerformanceProcess.InvokePerformFilter(IServerFilter filter, PerformingContext preContext, Func`1 continuation)

The result itself is fine, but wouldn't it be more correct if the distributed lock were released after timeoutInSeconds? I'm using Hangfire 1.4.6.