Job restarts processing after 30 min (1.6.12)

I’ve been struggling to figure this one out. Currently, if any job runs longer than 30 minutes, it restarts processing from the beginning with the same job id and without reporting an error (which makes sense from the job’s standpoint, since no exception was thrown). After this first restart, the job seems to be able to continue for as long as necessary (I’ve tested up to 6 hours).

I even tried with the simplest example:

public void TestSkipExecution(PerformContext context) // PerformContext comes from Hangfire.Server
{
    while (true)
    {
        Thread.Sleep(10000);            // System.Threading
        context.WriteLine("and going"); // WriteLine extension from the Hangfire.Console package
    }
}
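
For completeness, enqueueing it looks something like this (the class name is just a placeholder; Hangfire injects the real PerformContext at execution time, so null is passed in the expression):

// Hypothetical enqueue call for the test job above; "TestJobs" is a placeholder class name.
BackgroundJob.Enqueue<TestJobs>(x => x.TestSkipExecution(null));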

I found a similar report at Hangfire 1.5.3 restarts always long jobs after 30 minutes · Issue #514 · HangfireIO/Hangfire · GitHub, but it was reportedly resolved.

Looks like this is fixed by increasing the InvisibilityTimeout. I’m not really sure what it does or why it exists. I’m using Redis, by the way. I’d like to find out exactly what it’s for, or whether there’s a way to remove it entirely.
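
In case it helps anyone else, here is a minimal sketch of what increasing it looks like, assuming the Hangfire.Redis.StackExchange storage package (the connection string and timeout value are only examples; if you use a different Redis package, check its own storage options class):

// Sketch only: raise the window a worker can hold a fetched job before the
// storage considers it abandoned and hands it to another worker.
GlobalConfiguration.Configuration.UseRedisStorage(
    "localhost:6379",
    new RedisStorageOptions
    {
        InvisibilityTimeout = TimeSpan.FromHours(3)
    });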

InvisibilityTimeout is not used since 1.5.x, so I would say it is a different issue. The related issue #514 was caused by a problem with the Azure database - the DB released the lock after 30 minutes. Maybe the error was not fixed completely and does not cover all edge cases. I suggest you provide more technical details about your infrastructure, so the author can fix the error once it is reproducible.

Were you able to get past this? I’m running into a similar situation (I’m on version 1.6.17).

Is there something in Hangfire.BackgroundJobServerOptions that we need to adjust for long-running jobs?
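
For reference, a minimal sketch of what BackgroundJobServerOptions exposes (the values below are only placeholders) - I don’t see anything here that obviously maps to a 30-minute limit, which is why I’m asking:

// Placeholder values only; adjust to however you host the server.
var options = new BackgroundJobServerOptions
{
    WorkerCount = Environment.ProcessorCount * 5,
    Queues = new[] { "default" },
    ServerTimeout = TimeSpan.FromMinutes(5),
    HeartbeatInterval = TimeSpan.FromSeconds(30),
    SchedulePollingInterval = TimeSpan.FromSeconds(15)
};
app.UseHangfireServer(options); // OWIN startup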