I’m using Hangfire to make HTTP calls - I would like it to retry if it gets an error in the 500 range, but move the job to failed right away if it gets something in the 400 range. More generally, I think it would be handy to have the ability to signal to Hangfire that retrying won’t do any good.
Maybe this is a feature request? Maybe if my method throws the exception wrapped in a FatalJobException or something it would skip the retry?
Hi bweber,
Are you handling these exceptions yourself? If you let them propagate, automatic retries will kick in by default, but if the exception goes unhandled past the retry attempts the job will end up in the error state. Can you show us your code or something more concrete? Thanks!
My point is that, as far as I’m aware, there are only two possible outcomes:
- Throw an exception and trigger retries (if configured)
- Handle the exception and the job will be moved to the “Succeeded” queue
I’m looking for the ability to instantly fail the job in certain scenarios.
Here’s my previous example with slightly more detail: I make a REST call using Hangfire.
- If I get something in the 200 range (OK, No Content), we’re good to go - the job succeeds no problem.
- If I get something in the 500 range (Internal Server Error, Gateway Timeout, etc.), it likely means that the server is down so I’ll want to retry later in case it comes back up.
- If I get something in the 400 range (Not Found, Bad Request, Method Not Allowed), I’ve given some bad input or have given it a bad path. I should not reasonably expect subsequent attempts to succeed, so I’ll want to move it to the Failed queue right away.
There’s currently no way to distinguish the last two cases (a rough sketch of the job follows below).
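For concreteness, here’s roughly what the job body looks like today (a simplified sketch; the class and method names are just placeholders). Note that both non-success branches can only throw, which triggers the same retry behaviour either way:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class RestCallJob
{
    private static readonly HttpClient Client = new HttpClient();

    // Enqueued with BackgroundJob.Enqueue<RestCallJob>(j => j.ExecuteAsync(url))
    public async Task ExecuteAsync(string url)
    {
        var response = await Client.GetAsync(url);
        var status = (int)response.StatusCode;

        if (response.IsSuccessStatusCode)
        {
            return; // 2xx: the job succeeds, nothing else to do
        }

        if (status >= 500)
        {
            // 5xx: the server may come back, so throw and let Hangfire retry later.
            throw new HttpRequestException($"Server error {status}; retry later.");
        }

        // 4xx: bad input or a bad path; retrying won't help, but throwing is
        // currently the only way to mark the job as not-succeeded, and it
        // triggers the same retries as the 5xx case.
        throw new HttpRequestException($"Client error {status}; retrying is pointless.");
    }
}
```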
You could use a Job Filter to add this functionality.
You should have a look at the AutomaticRetry attribute, which is nearly what you want (i.e. it can either re-queue or delete a job).
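Something along these lines might work. This is an untested sketch: the NonRetryableJobException and FailFast names are made up, and it assumes the attribute is applied at the method level so its state election runs after the global AutomaticRetry filter:

```csharp
using System;
using System.Linq;
using Hangfire.Common;
using Hangfire.States;

// Thrown by a job to signal that retrying won't help (e.g. a 4xx response).
public class NonRetryableJobException : Exception
{
    public NonRetryableJobException(string message) : base(message) { }
}

public class FailFastAttribute : JobFilterAttribute, IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        // The FailedState may still be the candidate, or the retry filter may
        // already have swapped it for a ScheduledState, in which case the
        // original failure sits in TraversedStates.
        var failed = context.CandidateState as FailedState
                     ?? context.TraversedStates.OfType<FailedState>().FirstOrDefault();

        if (failed?.Exception is NonRetryableJobException &&
            !ReferenceEquals(context.CandidateState, failed))
        {
            // Re-elect the failed state so the job goes straight to Failed
            // instead of being scheduled for another attempt.
            context.CandidateState = failed;
        }
    }
}
```

The job method would then be decorated with [FailFast] and throw NonRetryableJobException for 4xx responses, while letting other exceptions go through the normal retry pipeline.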
I agree this functionality would be useful. It would be nice to be able to throw a special exception, for example a FailJobException, that would fail the job without retrying it.