Is it possible to queue identical jobs but prohibit processing them in parallel?
Not yet, but there is a workaround – you can use distributed locks (see the implementation example below).
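A minimal sketch of that workaround, assuming Hangfire's `IStorageConnection.AcquireDistributedLock` API; the class, method, and resource name here are hypothetical examples:

```csharp
using System;
using Hangfire;

public class EmailSender
{
    public void Send(int userId)
    {
        // Acquire a storage-backed lock on an arbitrary resource name.
        // Only one worker at a time can hold it; others wait up to the
        // timeout and then fail with an exception.
        using (var connection = JobStorage.Current.GetConnection())
        using (connection.AcquireDistributedLock("email-sender", TimeSpan.FromMinutes(1)))
        {
            // ... the actual job logic, guaranteed not to run in parallel ...
        }
    }
}
```

Disposing the returned handle releases the lock, so wrapping the job body in the `using` block is enough to serialize executions across all servers sharing the same job storage.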
But I am curious – why do you need it? If it is really needed, we should incorporate this feature into HangFire using a job filter like this:
UPD. The following attribute is already implemented in HangFire 0.8.2
[DisableConcurrentExecution]
public void Method()
{
    // This method is being processed inside a distributed lock
}
And how would I use the distributed lock?
I think it would be very useful if you have regular scheduled jobs with variable processing time. If a job is still running when it is triggered again, it will result in DB conflicts.
Just like my problem with the invisibility timeout.
Sorry for the late answer. The question now has a simple answer – you can use the DisableConcurrentExecutionAttribute filter to ensure that only one execution of a method is able to run at a time.
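A usage sketch of the filter. In current Hangfire versions the attribute constructor takes a timeout in seconds (how long a worker waits for the lock before the job fails); the class and method names here are hypothetical:

```csharp
using Hangfire;

public class ReportJob
{
    // Serializes all executions of Generate(): if the trigger fires
    // while a previous run is still in progress, the new execution
    // waits up to 60 seconds for the distributed lock.
    [DisableConcurrentExecution(timeoutInSeconds: 60)]
    public void Generate()
    {
        // ... long-running work that must never overlap itself ...
    }
}

// Scheduling it, e.g. as a recurring job:
// RecurringJob.AddOrUpdate<ReportJob>("report", j => j.Generate(), Cron.Hourly());
```

The lock is scoped to the job's type and method, so two different methods carrying the attribute do not block each other.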
I just saw the release. Thanks a lot for that. It’s a pleasure to see the project developing.
Meanwhile I implemented the distributed lock, but using the attribute is far more comfortable!
Yeah, I fell in love with .NET when I discovered the power of attributes!