We have a process that creates many jobs at once (e.g. 12k); right now we call Enqueue for each job in a loop. Our logic also creates multiple batches to attach these jobs to, and creates dependencies between batches and jobs.
This process can take about 4 minutes to create all the jobs. Is there a way to do this once for all the jobs/batches, and would that give us a net improvement in performance?
As an example, let's say I have this structure:
Intermediary Job 1 (Waits on Batch 1 completion)
Batch 2 (Waits on IJ 1)
IJ 2 (Waits on Batch 2)
Batch 3 (Waits on IJ 2)
Can I build all the jobs/dependencies in memory, and send a single command to create them all at once?
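For context, here's roughly how we build the chain today, one Enqueue/continuation call at a time. This is a minimal sketch assuming Hangfire Pro's batch API (`BatchJob.StartNew` / `BatchJob.ContinueBatchWith`); the method names and id collections are placeholders, not our real code:

```csharp
using Hangfire;
using Hangfire.Pro; // Hangfire Pro batches

// Batch 1: the first group of per-object jobs.
var batch1 = BatchJob.StartNew(x =>
{
    foreach (var id in firstGroupIds)          // placeholder collection
        x.Enqueue(() => ProcessObject(id));    // placeholder job method
});

// IJ 1 runs only after every job in Batch 1 has finished.
var ij1 = BatchJob.ContinueBatchWith(batch1, x =>
    x.Enqueue(() => RunIntermediary(1)));      // placeholder job method

// Batch 2 waits on IJ 1, and the chain continues the same way.
var batch2 = BatchJob.ContinueBatchWith(ij1, x =>
{
    foreach (var id in secondGroupIds)
        x.Enqueue(() => ProcessObject(id));
});
```

Each of those calls hits storage, which is why 12k jobs takes minutes; what I'm after is a way to build this whole graph in memory and commit it in one round trip.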
I don’t know of any way to do what you’re asking, but I wanted to ask why you want to do this. It seems odd that you could have 100 jobs waiting on another 100 jobs, and so on. Is the processing inside each job too granular? What happens if you’ve queued up 12k jobs and one of the jobs in the first batch fails? Would there be negative consequences to treating those first 100 jobs as a single job?
When we initially set this up, we didn’t have ACE licenses, so it’s an attempt to create some level of throttling to limit outgoing requests to a third party.
Each of the jobs performs a set of operations against a single object in our system. It can be relatively complex depending on the scenario, and we don’t want failures in one object to affect another. We got around failures by having the final attempt of the job mark itself as a success; this lets the batch complete and the next one continue.
We’ve actually got a license for ACE now, so we’re planning on using HF Throttling to replace the batching. What this means is that we’ll dump all the jobs into a single batch. We still need to investigate, but we’re hoping that doing so will improve the performance of job creation overall. If not, then we’ll need to move job creation into an init job; that’ll let us take the load off the request itself.
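For anyone finding this later, the replacement we're looking at is along these lines. A sketch assuming the Hangfire.Throttling (ACE) semaphore API; the resource name, limit, and job method are placeholders we'd still need to tune:

```csharp
using Hangfire.Throttling;

// One-time setup: cap how many jobs run against the third party at once.
// "third-party" and the limit of 100 are placeholder values.
IThrottlingManager throttling = new ThrottlingManager();
throttling.AddOrUpdateSemaphore("third-party",
    new SemaphoreOptions(limit: 100));

// Applied to the job method; jobs beyond the limit wait for a free slot,
// so the chained batches are no longer needed for throttling.
[Semaphore("third-party")]
public void ProcessObject(int id)
{
    // per-object work against the third party
}
```

The idea is that the semaphore gives us the same "at most N concurrent outgoing requests" behaviour the batch chain was faking, without the intermediary jobs.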