How to set differing timeout and retry values on instances of a single recurring job handler class

We have a series of recurring jobs that all perform the same class of operation. The operation can take widely differing times to complete, depending on the input data it has to operate on.
We are using a single class to handle this operation. Something akin to this:

public class OperationHandler
{
  private readonly IProcessor _processor;

  public OperationHandler(IProcessor processor) 
  {
    _processor = processor;
  }

  public Task ExecuteAsync(JobArgs args, CancellationToken cancellationToken)
  {
    return _processor.Run(args);
  }
}

Is it possible to embed a timeout and/or retry policy in the JobArgs parameter and access that from a filter which would then set these values against the job that is created when the recurring job triggers?
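A minimal sketch of what I have in mind, assuming `JobArgs` carries the policy values (the `TimeoutSeconds` and `MaxRetries` property names are hypothetical) and a server filter stashes them as job parameters when the job starts. This uses Hangfire's `IServerFilter` extension point; whether the stored values can actually influence timeout/retry behaviour is exactly my question:

```csharp
using System.Linq;
using Hangfire.Common;
using Hangfire.Server;

public class JobArgs
{
    public string InputId { get; set; }
    public int TimeoutSeconds { get; set; } // hypothetical per-job timeout
    public int MaxRetries { get; set; }     // hypothetical per-job retry cap
}

public class JobPolicyFilter : JobFilterAttribute, IServerFilter
{
    public void OnPerforming(PerformingContext context)
    {
        // Look for a JobArgs argument on the job being performed and
        // copy its policy values into job parameters, where other
        // filters (or the job itself) could read them later.
        var args = context.BackgroundJob.Job.Args.OfType<JobArgs>().FirstOrDefault();
        if (args == null) return;

        context.SetJobParameter("MaxRetries", args.MaxRetries);
        context.SetJobParameter("TimeoutSeconds", args.TimeoutSeconds);
    }

    public void OnPerformed(PerformedContext context) { }
}
```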

Hangfire doesn’t really time jobs out. It keeps a sort of heartbeat value on the job and tracks whether that stops getting updated. If it goes past the configured time, the job is considered abandoned and is re-added to the queue. I’m not aware of a way to intercept this and have Hangfire “wait longer.”

You can track a retry count against the job Id and simply have the job “succeed” once it reaches the desired limit. This only works if that limit is lower than the number of retries configured in Hangfire.
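A sketch of that approach, assuming the job method takes a `PerformContext` so it can read the attempt number Hangfire's automatic-retry filter records under the `"RetryCount"` job parameter (`MaxRetries` on `JobArgs` is a hypothetical field; treat this as an illustration, not a confirmed recipe):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Hangfire.Server;

public class OperationHandler
{
    private readonly IProcessor _processor;

    public OperationHandler(IProcessor processor)
    {
        _processor = processor;
    }

    public Task ExecuteAsync(JobArgs args, PerformContext context, CancellationToken cancellationToken)
    {
        // "RetryCount" is the parameter Hangfire's AutomaticRetry filter
        // increments on each retry of this job Id.
        var attempt = context.GetJobParameter<int>("RetryCount");

        if (attempt >= args.MaxRetries)
        {
            // Pretend to succeed so Hangfire stops rescheduling, even
            // though its own retry budget may not be exhausted yet.
            return Task.CompletedTask;
        }

        return _processor.Run(args);
    }
}
```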

Are the jobs calling a third party that is doing all of the work and the job is just waiting for a response?

Thank you for your response aschenta.

Yes, that is the use case here. I am just waiting for the completed response.

Currently, for these long-running actions, if Hangfire determines that the job is abandoned (via InvisibilityTimeout, I think), it then launches another one.

The issue I am facing is that InvisibilityTimeout is a global setting (we are using the MongoDb storage provider).

  • If I set InvisibilityTimeout to a high value then any short running jobs that do actually get stuck will be adversely affected
  • If I set InvisibilityTimeout to a low value then long running jobs are adversely affected

In reality, there are more timing categories than just “short” and “long” running, and the consumers of our service built on Hangfire would like to be able to set the timeout in the job parameters they supply.

One thing I am considering is setting the (global) InvisibilityTimeout to a large value and then managing the timeout within the job execution itself. That route may work for the timeout, but I am not sure how to support retries supplied via the job parameters.
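A sketch of the in-job timeout idea, assuming .NET 6+ (for `Task.WaitAsync`) and a hypothetical `TimeoutSeconds` field on `JobArgs`. The per-job timeout is linked to Hangfire’s own cancellation token, so a server shutdown still cancels the wait:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class OperationHandler
{
    private readonly IProcessor _processor;

    public OperationHandler(IProcessor processor)
    {
        _processor = processor;
    }

    public async Task ExecuteAsync(JobArgs args, CancellationToken cancellationToken)
    {
        // Combine Hangfire's token with a per-job timeout so either
        // a shutdown or the deadline cancels the wait.
        using var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
        cts.CancelAfter(TimeSpan.FromSeconds(args.TimeoutSeconds));

        try
        {
            await _processor.Run(args).WaitAsync(cts.Token);
        }
        catch (OperationCanceledException) when (!cancellationToken.IsCancellationRequested)
        {
            // Only the per-job deadline fired, not Hangfire itself.
            throw new TimeoutException(
                $"Job exceeded its {args.TimeoutSeconds}s timeout.");
        }
    }
}
```

With the global InvisibilityTimeout set high, this makes the job fail promptly on its own deadline instead of being silently re-queued; the remaining gap is deciding how many times Hangfire should then retry it.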

Thoughts?