OK. I have figured out a hack to get this to work.
As I stated earlier, I am using a strategy pattern to run the different parts of each job. I have added a JobStatus called Handoff and now do this:
```csharp
public class Processing : BaseJobExecutor<PayloadModel>, IJobExecutor<PayloadModel>
{
    public Processing(JobPingPong job) : base(job, JobStatus.Processing) { }

    public void Handle()
    {
        JobInfo.JobStatus = JobStatus.ExtProcessing;
        JobInfo.HangfireParentJobId = JobInfo.HangfireJobId;
        Payload.PostToQueueText(@"http://localhost:8080/aws/clone");

        // Pause the current (parent) job so the outside web service has a chance to complete...
        var enqueuedIn = new TimeSpan(0, 6, 0, 0); // 6 hours out...
        JobPutOnHold(JobInfo.HangfireJobId, enqueuedIn);

        // The next status to be executed upon rehydration...
        JobInfo.JobStatus = JobStatus.Complete;
        Job.CachePut();

        // Signal the job executor that this job is "done" because an outside process needs to run...
        JobInfo.JobStatus = JobStatus.Handoff;
    }

    public void JobPutOnHold(string jobId, TimeSpan enqueuedIn)
    {
        var jobClient = new BackgroundJobClient();
        jobClient.ChangeState(jobId, new ScheduledState(enqueuedIn));
    }
}
```
Now, in the strategy executor, I can do this:

```csharp
public string Execute(IServerFilter jobContext, IJobCancellationToken cancellationToken)
{
    while (Payload.JobInfo.JobStatus != JobStatus.Done)
    {
        cancellationToken?.ThrowIfCancellationRequested();

        var jobStrategy = new JobExecutorStrategy<TPayload>(Executors);
        Payload = jobStrategy.Execute(Payload);

        // A Handoff status means an outside service now owns the job; stop executing.
        if (Payload.JobInfo.JobStatus == JobStatus.Handoff)
            break;
    }
    return PayloadAsString;
}
```
The second part of the job fires the same way as the first, but comes in from the outside service with an ExtComplete status, which lets the job run its post-processing based on the results from the outside world (stored in the DB). Like this:
```csharp
public class ExtComplete : BaseJobExecutor<PayloadModel>, IJobExecutor<PayloadModel>
{
    public ExtComplete(JobPingPong job) : base(job, JobStatus.ExtComplete) { }

    public void Handle()
    {
        // do post-processing here...
        Payload.Tokens = null;
        JobInfo.JobStatus = JobStatus.Complete;

        // If this handler is running under a new Hangfire job id, record the old id as the parent.
        if (JobInfo.HangfireJobId != JobContext.JobId || JobInfo.HangfireParentJobId == JobInfo.HangfireJobId)
        {
            JobInfo.HangfireParentJobId = JobInfo.HangfireJobId;
            JobInfo.HangfireJobId = JobContext.JobId;
        }

        // Re-enqueue the previous (parent) job so it can complete...
        JobExecuteNow(JobInfo.HangfireParentJobId); //, JobInfo.JobQueueName);
    }

    public void JobExecuteNow(string jobId)
    {
        var enqueuedIn = new TimeSpan(0, 0, 0, 15); // 15 seconds out...
        var jobClient = new BackgroundJobClient();
        jobClient.ChangeState(jobId, new ScheduledState(enqueuedIn));
    }
}
```
Eventually the timing will be config-driven, but for now I have the first job pick its execution back up in 15 seconds.
The only challenge I faced with this approach is that the payload the job comes back in with is the original payload, from before any processing happened. That is why you see the “caching” above. When the job restarts, I check whether a cache entry exists for that Hangfire JobId; if it does, I load the last known payload from cache and then let the executor go on its merry way.
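For anyone following along, the rehydration check looks roughly like this. It is only a sketch: `CacheGet` is a hypothetical counterpart to the `CachePut` call above, and the method name and signature are my own, not Hangfire APIs.

```csharp
// Sketch only -- CacheGet/CachePut are my own helpers, not part of Hangfire.
public TPayload HydratePayload(string hangfireJobId, TPayload incomingPayload)
{
    // The payload handed to a rehydrated job is the ORIGINAL payload, so if a
    // cached copy exists for this Hangfire job id, prefer the cached one.
    var cached = Job.CacheGet(hangfireJobId);
    return cached ?? incomingPayload;
}
```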
Works very well so far.
NOTE: I am still trying to learn how to alter/inject the chain of command and state objects in Hangfire so I can make this more internal to Hangfire. We have one job that makes a dozen or more outside calls; currently it takes about 12 hours to run.
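My current understanding is that a custom state implements Hangfire's `IState` interface and can then be applied with `BackgroundJobClient.ChangeState`. Here is a rough sketch of what I am imagining for the handoff; the state name and serialized data keys are placeholders of my own, not anything Hangfire defines:

```csharp
using System;
using System.Collections.Generic;
using Hangfire.States;

// Hypothetical "handoff" state; name and data keys are my own invention.
public class AwaitingExternalState : IState
{
    public static readonly string StateName = "AwaitingExternal";

    public string Name => StateName;
    public string Reason => "Waiting on an outside web service to call back";
    public bool IsFinal => false;              // the job is parked, not done
    public bool IgnoreJobLoadException => false;

    public Dictionary<string, string> SerializeData()
    {
        return new Dictionary<string, string>
        {
            { "HandedOffAt", DateTime.UtcNow.ToString("o") }
        };
    }
}
```

Which, if I understand the pipeline correctly, would replace the `ScheduledState` trick above with something like `new BackgroundJobClient().ChangeState(jobId, new AwaitingExternalState())`, with an `IElectStateFilter` hooked in to control the transitions. But I have not gotten that working yet, hence the question below.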
@odinserj - do you have any sample code on how to add new States and alter the pipeline then inject it correctly? It would also be nice to be able to alter the call within hangfire to reflect the changes to params originally passed in. TIA!