Hangfire.Redis cleanup

When using Hangfire.Redis, how do I configure how often things get cleaned up?

I’ve just had a Redis server run out of memory, and when I checked, the Hangfire database had > 1.6 million keys in it.

I’m encountering the same issue. Which version of Hangfire.Redis are you using?
I made a port of the original Hangfire.Redis to use StackExchange.Redis instead of the ServiceStack one (a beta is available here), and I’ve seen that while the SucceededJobs and FailedJobs Redis lists get arbitrarily trimmed at 100 items, the related job hashes never get deleted.
I’m planning to delete all of the jobs that are not in one of those lists and trim the history of every recurring job to 100 entries, but I’m interested in hearing any thoughts.
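The cleanup strategy described above can be sketched as follows. This is just a rough illustration using plain Python dicts and lists as a stand-in for the Redis structures; the function name and key layout are my own assumptions, not Hangfire’s actual storage schema, and the real implementation would issue `DEL` and `LTRIM` commands against Redis instead.

```python
def cleanup(jobs, succeeded, failed, recurring_histories, keep=100):
    """Delete job hashes not referenced by the succeeded/failed lists,
    and trim each recurring-job history to `keep` entries.

    jobs: dict of job_id -> job hash (stand-in for hangfire:job:<id> keys)
    succeeded, failed: lists of job ids (stand-ins for the Redis lists)
    recurring_histories: dict of recurring-job name -> list of history entries
    """
    # Trim the succeeded/failed lists themselves (Hangfire keeps ~100).
    succeeded[:] = succeeded[:keep]
    failed[:] = failed[:keep]

    # Keep only jobs still referenced by one of the lists;
    # in Redis this would be: DEL hangfire:job:<id>
    referenced = set(succeeded) | set(failed)
    for job_id in list(jobs):
        if job_id not in referenced:
            del jobs[job_id]

    # Trim each recurring job's history;
    # in Redis this would be: LTRIM <history-key> 0 keep-1
    for name, history in recurring_histories.items():
        recurring_histories[name] = history[:keep]

    return jobs, recurring_histories
```

The main design question is atomicity: between reading the lists and deleting a job hash, a job could move lists, so a real version would want to run this inside a Lua script or a `MULTI`/`EXEC` transaction.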

I use the official Hangfire.Redis and have a paid Hangfire Pro subscription, which is why I hope to hear from @odinserj on this.

He’s usually very quick to answer on here.

How many failed jobs do you have? Or better, post a screenshot of your dashboard (the navigation menu with the job counts) if you can.

P.S. It is best to write to the “support at hangfire.io” email address to get support ASAP, because that is the only way a ticket gets registered in the help desk and marked as high priority.
P.P.S. Unfortunately, over the last two months I have rarely visited the forum (I’ve been implementing big changes to both the free and paid versions), but I hope that will change in April.

Hey guys and @odinserj

Did you find a solution to this problem?

We’re getting an out-of-memory error from Redis when Hangfire tries to enqueue jobs. Looking at the Redis database, I saw that we had about 1M jobs there, so I cleaned it up manually and we were back in business.
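For reference, one way to do that kind of manual cleanup from the command line is the pipeline below. It assumes Hangfire’s default `hangfire:` key prefix (check yours first), and it deletes *all* Hangfire state, so only run it when losing queued and historical jobs is acceptable. No test is included since it requires a live Redis server.

```shell
# DANGER: destructive. Deletes every key under the "hangfire:" prefix.
# --scan iterates the keyspace incrementally (non-blocking),
# unlike KEYS, which can stall a large production instance.
redis-cli --scan --pattern 'hangfire:*' | xargs -r -L 100 redis-cli DEL
```

Batching with `xargs -L 100` keeps each `DEL` command to a reasonable size instead of passing millions of keys at once.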

This issue is making us lose critical jobs, which aren’t scheduled because of the memory fault.

We would appreciate any help.