Hangfire.CreateBatchFailedException on Redis backend

We got the following error when creating a batch with the Redis backend:

 ---> StackExchange.Redis.RedisServerException: ERR Protocol error: invalid multibulk length
   at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
   at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
   at StackExchange.Redis.RedisDatabase.ScriptEvaluate(String script, RedisKey[] keys, RedisValue[] values, CommandFlags flags)
   at Hangfire.Pro.Redis.RedisTransaction.Commit()
   at Hangfire.BatchJobClient.Create(Action`1 createAction, IBatchState state, String description)
   --- End of inner exception stack trace ---
   at Hangfire.BatchJobClient.Create(Action`1 createAction, IBatchState state, String description)
   at Hangfire.BatchJobClientExtensions.StartNew(IBatchJobClient client, Action`1 createAction, String description)
   at Hangfire.BatchJob.StartNew(Action`1 createAction, String description)

The batch contained 22,065 jobs in total: 7,355 to enqueue immediately, plus 14,710 jobs awaiting those. It also had 3 jobs awaiting the completion of the batch itself.
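For context, the batch was built with the Hangfire Pro Batches API roughly along these lines. This is an illustrative sketch, not our actual code: `ProcessItem`, `OnItemDone`, `OnBatchDone`, and the per-item continuation count are made-up names, and the job-level `ContinueJobWith` call inside the batch builder is our assumption about how the awaiting jobs were registered.

```csharp
// Hypothetical sketch of the failing call (Hangfire Pro Batches API).
// Everything is created in a single BatchJob.StartNew call, so all
// ~22,000 jobs end up in one Redis transaction on commit.
var batchId = BatchJob.StartNew(batch =>
{
    foreach (var item in items) // ~7,355 items in our case
    {
        // Enqueue the job immediately...
        var jobId = batch.Enqueue(() => ProcessItem(item.Id));

        // ...and add jobs awaiting its completion (two per item here).
        batch.ContinueJobWith(jobId, () => OnItemDone(item.Id));
        batch.ContinueJobWith(jobId, () => OnItemDone(item.Id));
    }
});

// The 3 jobs awaiting the completion of the whole batch.
BatchJob.ContinueBatchWith(batchId, batch =>
{
    batch.Enqueue(() => OnBatchDone());
});
```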

We are using:
Hangfire 1.7.18
Hangfire.Pro 2.2.3.0
Hangfire.Pro.Redis 2.8.4.0

We are using Redis 6.0.5 on AWS ElastiCache.

Thank you so much for reporting this. The behavior is caused by an undocumented limitation in Redis, and I’m already implementing a fix that automatically splits the command stream into multiple commands and sends them in a single transaction. I will release this fix as soon as possible.

Please apply the following option as a temporary workaround:

GlobalConfiguration.Configuration.UseRedisStorage(
    "connection_string",
    new RedisStorageOptions { UseLegacyTransactions = true });

@James_Baird I’ve just released Hangfire.Pro.Redis 2.8.5, which fixes the issue — please see https://www.hangfire.io/blog/2020/12/30/hangfire.pro.redis-2.8.5.html for details. Large Lua commands are now split into several smaller ones to avoid hitting Redis limitations. Thank you for reporting this problem; it’s now possible to submit large batches again.

Great, thanks for the quick turnaround! We had worked around it by building the batch incrementally with BatchJob.Attach. We’ll revert that and upgrade.
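For anyone stuck on an older Hangfire.Pro.Redis version, the incremental workaround looked roughly like this. It is a sketch under assumptions: `ProcessItem` and the chunk size of 1,000 are illustrative, and `Enumerable.Chunk` requires .NET 6+ (on earlier runtimes, split the list manually).

```csharp
// Create a small initial batch, then attach the remaining jobs in chunks
// with BatchJob.Attach, so no single Redis transaction grows large enough
// to trip the "invalid multibulk length" protocol limit.
// ProcessItem and the chunk size are hypothetical.
var batchId = BatchJob.StartNew(batch =>
{
    batch.Enqueue(() => ProcessItem(items[0].Id));
});

foreach (var chunk in items.Skip(1).Chunk(1000))
{
    BatchJob.Attach(batchId, batch =>
    {
        foreach (var item in chunk)
        {
            batch.Enqueue(() => ProcessItem(item.Id));
        }
    });
}
```

Each Attach call commits its own (smaller) transaction, which is why this avoided the error at the cost of more round trips.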