Hangfire Pro: how to combine AwaitBatch, ContinueWith, Attach and Nested Batches


We are trying to set up a complex chain of batches and jobs, but we can't figure out how to make it work.

I will try to explain our situation.
Because of our business requirements, our goal is to see the following structure of batches and jobs in the Hangfire dashboard:

    - job1     (this job gets the parentBatchId as a parameter and will attach all the jobs and batches below to the parent batch)

    - job2
    - nestedBatch1    (should start after job2 finished)
           - nestedJob1
           - nestedJob2
           - nestedJob3
           - ... thousands more nestedJobs 
    - job3   (should start after everything in nestedBatch1 is finished)
    - job4   (should start after job3 is finished)

All these jobs belong together business-wise, so we would like to see them all in the same parent batch.
We have tried to make this work; however, job3 and job4 are not waiting for everything in nestedBatch1 to finish.

The parent batch is started like this:

BatchJob.StartNew(batch => batch
    .Enqueue<Job1>(job => job.Execute(batch.BatchId, null)));

Job1 receives the parentBatchId as a parameter, so it knows how to attach/enqueue all the rest to the parent batch:

BatchJob.Attach(parentBatchId, batch =>
{
    var job2 = batch.Enqueue<SomeJobClass>(job => job.Execute());

    // Create an empty nested batch and attach all of its jobs later; enqueueing
    // them all at once during StartNew throws exceptions due to SQL
    // transaction/timeout problems (too many jobs at once).
    var nestedBatch = batch.StartNew(b => EmptyMethod(), "childbatch");
    var continueWithId = batch.ContinueWith(job2, () => EnqueueManyJobsInNestedBatchUsingAttach(nestedBatch));

    // All the jobs below should start only after everything in nestedBatch is
    // finished. However, they are enqueued too early.
    BatchJob.AwaitJob(continueWithId, b =>
        BatchJob.AwaitBatch(nestedBatch, b2 =>
        {
            var job3 = batch.Enqueue(() => Job3());
            var job4 = batch.ContinueWith<Job4Class>(job3, job => job.Execute());
        }));
});

We want to enqueue thousands of jobs within the nested batch.
We run into SQL timeout and transaction problems if we Attach and enqueue them all at once.
Because of this, we are slicing them into groups of 500:

// Method EnqueueManyJobsInNestedBatchUsingAttach (used in the configuration above)
modelsForWhichToEnqueueJob.Batch(500).ForEach(slice =>
    BatchJob.Attach(nestedBatchId, batch =>
        slice.ForEach(modelId =>
            batch.Enqueue<SomeJob>(job => job.Execute(modelId)))));

The syntax for Hangfire Pro (attaching, awaiting, combining batches and jobs) is very confusing to us. It is very unclear to us what is executed when, and which methods create their own job (e.g. a job that waits for another job to complete).
We have read through the existing documentation, but all the examples are very basic.


  1. Is it possible to document some more complex use cases? Are there any courses available for Hangfire Pro?
  2. Do you have any pointers on how to correctly implement the above?

One solution could be to drop our requirement to have all these jobs grouped together under one parent batch. However, given the number of jobs, we would strongly prefer to keep them grouped together, and this is one of the reasons we chose to upgrade to Hangfire Pro.

Hope you can help us.
Thank you in advance!

It is possible that the child batch will be finished before any of its additional background jobs are created, i.e. as soon as the EmptyMethod job completes. This will lead to an early continuation invocation. Instead, it's better to create the whole structure of the batch in one place, and create the nested one with the EnqueueManyJobs method call directly:

class Program
{
    static void Main(string[] args)
    {
        using (new BackgroundJobServer())
        {
            BatchJob.StartNew(batch =>
            {
                var job1Id = batch.Enqueue(() => Console.WriteLine("Job1"));
                var job2Id = batch.ContinueWith(job1Id, () => Console.WriteLine("Job2"));
                var nestedBatchId = batch.AwaitJob(job2Id, nestedBatch =>
                    nestedBatch.Enqueue(() => EnqueueManyJobs(nestedBatch.BatchId)));
                var job3Id = batch.AwaitBatch(nestedBatchId, () => Console.WriteLine("Job3"));
                var job4Id = batch.ContinueWith(job3Id, () => Console.WriteLine("Job4"));
            });

            Console.ReadLine();
        }
    }

    public static void EnqueueManyJobs(string batchId)
    {
        BatchJob.Attach(batchId, batch =>
        {
            batch.Enqueue(() => Console.WriteLine("Nested Job 1"));
            batch.Enqueue(() => Console.WriteLine("Nested Job 2"));
            batch.Enqueue(() => Console.WriteLine("Nested Job 3"));
            batch.Enqueue(() => Console.WriteLine("Nested Job 4"));
        });
    }
}

If you are using the full .NET Framework (not .NET Core), you can also try to minimize the number of round-trips required to commit a transaction by using the new CommandBatchMaxTimeout configuration option (introduced in 1.6.17). This leverages the SqlCommandSet class, which flushes all the writes at once with a single network call:

    GlobalConfiguration.Configuration.UseSqlServerStorage(
        "<connection string>",
        new SqlServerStorageOptions { CommandBatchMaxTimeout = TimeSpan.FromMinutes(1) });

Although this is a useful optimization, if the number of jobs can grow very large, the Attach method should be called from a background job that is executed within the same batch as the background jobs to be attached.

Thank you for your very swift response!
Your example code pointed us in the right direction.
We now have something working close (enough) to our original goal.

It is very important to understand which methods return job IDs (numbers) and which return batch IDs (GUIDs).
One suggestion from our team: model this difference explicitly (i.e. as JobId and BatchId types). In our opinion this would make the API a lot more intuitive to use.
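As a sketch of that suggestion (these wrapper types are hypothetical and not part of the Hangfire Pro API), thin readonly structs would let the compiler catch a batch ID being passed where a job ID is expected:

```csharp
// Hypothetical strongly-typed identifiers; not part of Hangfire Pro.
public readonly struct JobId
{
    public string Value { get; }
    public JobId(string value) => Value = value;
    public override string ToString() => Value;
}

public readonly struct BatchId
{
    public string Value { get; }
    public BatchId(string value) => Value = value;
    public override string ToString() => Value;
}

// Continuation methods could then be declared along these lines, making the
// job/batch distinction visible in every signature (illustrative only):
//   JobId   ContinueWith(JobId parentJobId, Expression<Action> action);
//   BatchId AwaitJob(JobId jobId, Action<IBatchAction> action);
//   JobId   AwaitBatch(BatchId batchId, Expression<Action> action);
```

With such types, mixing up `AwaitJob` and `AwaitBatch` arguments would be a compile-time error rather than a runtime surprise.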

Best regards,

Sure, and the ContinueWith method also adds some confusion. Returning special objects instead of strings would definitely make things better, and would even allow a fluent API for batch and job continuations. But this is a breaking change, and we can't make it in 1.X without breaking existing applications.