Hi Sergey,
I was debugging a long-running job that is split into a parent method and a sub method, and it was failing. During the debug session the parent method was called again, which I think is what causes the problem in the first place.
How can I avoid this?
Regards,
Niko

Sergey Odinokov:
Hi, Niko,
I need to know the reason for the retry. Did you break your debugging session, and did that lead to the job being aborted? And how long does your job take to run?
I am so happy to see more people like me coming to this awesome project.
I face this kind of issue with other stuff in VS too. I don't know the reason, but maybe, as said, it is due to the debug session being broken. Did you check the monitor to see what error is there, or whether the job is still enqueued?
Thanks, @kozalla, for using HF and the forum. I'm sure everybody should know about difficult cases like this.
Automatic retry is an essential feature of HF for processing jobs in a reliable way inside ASP.NET. You can read more about it here. In short, your jobs should be ready to be retried (see the sketch after the list below).
However, it is applied only when:
- The process was terminated (killed through Task Manager or by breaking the debug session).
- The AppDomain was unloaded (ASP.NET recycling).
- The job invisibility timeout was exceeded. I recently changed it from 30 minutes to 5 minutes, and this change may be the reason in your case; it was a mistake that should be fixed.
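To illustrate what "ready for retry" means in practice, here is a minimal idempotent-job sketch. The OrderProcessor class and its IsAlreadyProcessed/MarkProcessed helpers are hypothetical, invented for this example; only the enqueue call mentioned afterwards is HangFire's own API.

// Hypothetical job class written to tolerate automatic retries.
// OrderProcessor, IsAlreadyProcessed and MarkProcessed are illustrative
// helpers, not part of HangFire.
public class OrderProcessor
{
    public void Process(int orderId)
    {
        // A retry after a killed process, an AppDomain recycle or an exceeded
        // invisibility timeout simply finds the work already done and exits.
        if (IsAlreadyProcessed(orderId))
            return;

        // ... perform the actual work here ...

        MarkProcessed(orderId);
    }

    // Placeholder: e.g. read a "Processed" flag from your database.
    private bool IsAlreadyProcessed(int orderId) { return false; }

    // Placeholder: e.g. persist the "Processed" flag together with the work.
    private void MarkProcessed(int orderId) { }
}

Such a job can then be enqueued as usual, for example with BackgroundJob.Enqueue<OrderProcessor>(x => x.Process(42)), and a retry will not repeat work that already succeeded.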
That is why I asked you about the job duration and the fate of your debugging session. Without that information I can't answer you.
The job runs without an error, and the parent method is called again after 5 minutes.
So it could be your timeout.
That totally explains the EF DB update exception, because the entities were messed up during two parallel runs…
Sorry, I was trying to write to you, but a meeting stopped me…
Yeah, it is awesome indeed.
Aha, this is exactly the invisibility timeout. You can tune it in the following way (I'll set it back to 30 minutes in 0.8.1, #90):
var options = new SqlServerOptions()
{
    InvisibilityTimeout = TimeSpan.FromHours(1) // or whatever you need
};

JobStorage.Current = new SqlServerStorage("<connection string>", options);
You will be able to forget about this timeout with HangFire 0.8.1 and MSMQ support, but it is not released yet.
Thanks for the kind words!
Yes, it was the invisibility timeout.
Thanks a lot so far!
Niko
Niko, you can also use the MSMQ support introduced in HF 0.8.1. It uses transactional queues that are rolled back automatically after an ungraceful shutdown, so there is no need for InvisibilityTimeout at all. This removes the risk entirely.
In most situations there is no need to remove this risk. However, if you cannot allow your jobs to be re-processed only after the 30-minute timeout in case of process termination, or after a forced AppDomain unload (which happens once the ShutdownTimeout, 30 seconds by default, is exceeded), you may choose to use MSMQ.
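For reference, here is a rough sketch of what that configuration could look like. It assumes the UseMsmqQueues extension from the HangFire MSMQ package; the queue path pattern and the "default" queue name are only examples, and the exact call may differ slightly in 0.8.1.

// Sketch only: SQL Server storage with MSMQ-backed queues instead of the
// SQL-based queue (and its InvisibilityTimeout). The transactional private
// queues have to be created before the server starts.
var storage = new SqlServerStorage("<connection string>");
storage.UseMsmqQueues(@".\Private$\hangfire-{0}", "default");
JobStorage.Current = storage;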
And if I have a long-running task that takes 2 hours, do I need to set the InvisibilityTimeout to at least two hours then?
I see very often that the task is aborted; I'm not sure why.
Did you get an answer to this, or did you find a solution? I am facing the same challenge.