Ordering of job processing

This looks like a very neat project. I’m curious whether there is some mechanism to enforce processing of jobs in a particular order. For example, what if an ‘update table’ job fails and gets retried after a subsequent update? The table would then reflect the update from the 1st (failed) job, when the latest and greatest data should have come from the 2nd job. Or is there some way to determine whether the currently executing job is a retry? Then the logic to handle the ordering could be put in the job itself: i.e., if this is a retry, go back through the processed jobs, and if any of them acted on the same record, re-queue/re-process them all in the correct order?


Hello, @twheelmaker! You can’t specify an order of background job processing, just as you can’t tell your end users to coordinate with each other and make requests in a particular order – your system should be prepared for unordered execution.

You should take this case into consideration and use transactions to make your background jobs atomic. If you can’t do that, consider making your background job methods reentrant and idempotent by adding conditional statements, for example:

if (!db.RecordUpdated(id)) { /* apply the update only if it has not been applied yet */ }
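To sketch the idea more fully, here is a hypothetical idempotent job method that guards a write with a record version check inside a transaction. The `IRecordsDb` interface and all of its members are illustrative assumptions, not part of Hangfire:

```csharp
// Hypothetical repository interface, assumed for this sketch only.
public interface IRecordsDb
{
    IDbTransaction BeginTransaction();
    int GetVersion(int id);
    void UpdateRecord(int id, int version, string data);
}

public class UpdateRecordJob
{
    private readonly IRecordsDb db;

    public UpdateRecordJob(IRecordsDb db) { this.db = db; }

    // Safe to run more than once: a retry that arrives after a newer
    // update sees the newer version number and becomes a no-op.
    public void Execute(int id, int version, string data)
    {
        using (var tx = db.BeginTransaction())
        {
            // Skip the write if the record already carries this version
            // or a later one (e.g. a subsequent update ran first).
            if (db.GetVersion(id) >= version)
                return;

            db.UpdateRecord(id, version, data);
            tx.Commit();
        }
    }
}
```

With this shape, out-of-order retries cannot overwrite newer data, because every write is conditional on the version it carries.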

You can also read my blog post about background processing (written in crappy English) and the difficulties related to the topic.


Thank you for the response. It’s clear you’ve put a lot of thought into the project, and as it stands it can accommodate many fire-and-forget scenarios for asynchronous jobs where you don’t care about ordering. Unfortunately, there are entire classes of projects where this kind of flexibility would be required. For example, many enterprise applications that use MQ could be subject to this ‘out of order’ quirk for updated orders. In the medical field, HL7 messages can contain updates to patient data or to ordered procedures, so any system processing those messages would have to retry failed updates in the correct order.

I had a quick look at the generated schema, and it looks like Hangfire does save the arguments for each enqueued job. So it wouldn’t be impossible, within the context of a job, to detect that the affected record id was previously passed in within the past X hours/days and attempt to reprocess those jobs before the current retried one.

By the way, your English is better than many native speakers’, so don’t be so hard on yourself :smiley:

I’m not saying that Hangfire can fulfill every aspect of background processing. What you are talking about relates to more complex workflows. I’m planning to add support for continuations, which could fill some gaps in background job processing and solve some of the ordering problems, but at the same time I don’t want to end up with a complex enterprise solution like NServiceBus (it occupies a different niche).

You can simply increment some field associated with the affected record and check its value to determine the retry count; it is a common method and it does not limit you in database manipulations. Moreover, nothing prevents you from creating one background job inside another. There are some constraints, but they can be handled as well. Ruby on Rails people live with simpler solutions and work wonders with them.
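For the “one background job in another” point, a minimal sketch: `BackgroundJob.Enqueue` is Hangfire’s real API for creating fire-and-forget jobs, while the `OrderProcessor` class and its methods are made up for this example:

```csharp
using Hangfire;

public class OrderProcessor
{
    public void ProcessOrder(int orderId)
    {
        // ... do the work for this order ...

        // Enqueue a follow-up job from inside the current one. The new job
        // is only created if this method body runs without throwing, so the
        // follow-up never starts before this step has succeeded once.
        BackgroundJob.Enqueue<OrderProcessor>(x => x.SendConfirmation(orderId));
    }

    public void SendConfirmation(int orderId)
    {
        // ... runs later as a separate background job, with its own retries ...
    }
}
```

Chaining jobs this way gives you a crude form of ordering between two steps without waiting for built-in continuation support.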