Hello Hangfire team and everyone
We are using the following:
Storage is Redis.
We are having an issue when creating our batch jobs with Attach, i.e. when we modify an existing batch by adding jobs and nested batches to it, like so:
BatchJob.Attach(batchId, attachAction =>
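For context, here is a minimal sketch of the call pattern we use (the job bodies, `itemIds`, and method names are placeholders for illustration, not our actual code; `AttachBatch` is the nested-batch builder method as we understand the Hangfire Pro API):

```csharp
// Illustrative sketch only -- ProcessItem/SendNotification and itemIds
// are placeholders standing in for our real jobs and data.
BatchJob.Attach(batchId, batch =>
{
    // Attach many individual jobs to the already-created batch...
    foreach (var itemId in itemIds)
    {
        batch.Enqueue(() => ProcessItem(itemId));
    }

    // ...and also attach a nested batch.
    batch.AttachBatch(nested =>
    {
        nested.Enqueue(() => SendNotification(batchId));
    });
});
```

The failure seems tied to how many jobs end up inside a single Attach call like this.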
The error is attached; in text form, the stack trace is:
Hangfire.CreateBatchFailedException: Failed to attach new work to the batch ''. Please see inner exception for details. ---> StackExchange.Redis.RedisServerException: ERR Error running script (call to f_4d979841c834854f6164c64c5a5aeeadb52392e1): @user_script:15: user_script:15: too many results to unpack
   at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
   at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
   at StackExchange.Redis.RedisDatabase.ScriptEvaluate(String script, RedisKey[] keys, RedisValue[] values, CommandFlags flags)
   at Hangfire.Batches.States.BatchStateChanger.Attach(BatchAttachContext context)
   at Hangfire.BatchJobClient.Attach(String batchId, Action`1 attachAction)
   --- End of inner exception stack trace ---
   at Hangfire.BatchJobClient.Attach(String batchId, Action`1 attachAction)
This happens once the number of jobs reaches a certain size: we have a few cases where it occurs, but it mostly does not happen when the number of jobs is smaller.
This is especially strange because we never encountered this error before upgrading our Hangfire Pro packages. Before the upgrade, we only saw client-side timeout exceptions, and those attaches would eventually succeed after retries. After the upgrade, jobs that hit the error above never succeed, no matter how many times we retry.
Please help us resolve this issue. Any help is greatly appreciated.