We are in the process of switching to Hangfire for production use. We manage communication schedules and other activities in a SaaS environment, and we are looking for feedback on the best way to handle thousands to tens of thousands of scheduled jobs.
To keep things simple, consider the following scenario: during a client's business hours we need to check every X (say 15) minutes for new data and send an email if a specific action occurred. Each client may have N (10-50+) jobs like this, each on its own schedule with a time window unique to that customer. So in this scenario, a single customer can have 10+ recurring scheduled jobs, each with a unique schedule.
When we scale this to thousands of offices, we are looking at tens of thousands, up to hundreds of thousands, of recurring scheduled jobs.
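To make the scale concrete, here is a rough back-of-envelope. The client counts are illustrative stand-ins for "thousands of offices"; the per-client job counts come from the ranges above:

```python
# Illustrative figures only, not real data: "thousands of offices"
# with 10-50+ recurring jobs each.
clients_low, clients_high = 1_000, 5_000
jobs_per_client_low, jobs_per_client_high = 10, 50

low_end = clients_low * jobs_per_client_low     # 10,000 recurring jobs
high_end = clients_high * jobs_per_client_high  # 250,000 recurring jobs
print(low_end, high_end)
```

That range is what drives the question below: whether each of those jobs should be its own Hangfire recurring job, or whether they should be collapsed into far fewer sweep jobs.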
Option 1: Set up every task as its own recurring job, registered in the client's time zone.
Option 2: Set up one job per task type per time zone. This caps the job count at around 1,850 (37 local times x a max of ~50 task types). Each of these jobs would then have to query the database on every run to see which clients are in that time zone and execute whatever is due right now.
Option 3: One job per client that runs every 5 minutes and checks whether there is anything to do.
Option 4: One job per "action" that, when it runs, checks which clients have that work to do.
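The core of options 2-4 is the same move: instead of one Hangfire recurring job per task, a small number of sweep jobs filter a task table for whatever is due at the current tick. A minimal sketch of that due-check, in Python rather than C# purely to illustrate the logic (the `ClientTask` shape, business-hours window, and interval fields are all assumptions, not our actual schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

@dataclass
class ClientTask:
    client_id: str
    tz: str            # IANA time zone name, e.g. "America/Chicago"
    open_hour: int     # start of the client's business-hours window (local)
    close_hour: int    # end of the window (local, exclusive)
    interval_min: int  # run every N minutes within the window

def due_tasks(tasks: list[ClientTask], now_utc: datetime) -> list[ClientTask]:
    """Return the tasks a sweep job should run at this tick.

    One recurring sweep (per time zone, per client, or global, i.e.
    options 2/3/4) calls this instead of Hangfire holding one
    recurring job per task (option 1).
    """
    due = []
    for t in tasks:
        local = now_utc.astimezone(ZoneInfo(t.tz))
        in_window = t.open_hour <= local.hour < t.close_hour
        on_tick = local.minute % t.interval_min == 0
        if in_window and on_tick:
            due.append(t)
    return due

# Example: 14:30 UTC in June is 09:30 in Chicago (inside an 8-17 window,
# on a 15-minute tick), so this task is due.
tasks = [ClientTask("client-a", "America/Chicago", 8, 17, 15)]
print(due_tasks(tasks, datetime(2024, 6, 3, 14, 30, tzinfo=timezone.utc)))
```

The trade-off this sketch makes visible: option 1 pushes the schedule into Hangfire's own storage (one cron entry per task), while options 2-4 keep the schedule in our database and pay a query per tick instead. In a real sweep job the loop would be a filtered SQL query rather than an in-memory scan.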
Are there best practices in place for this kind of load and scaling?
Currently we are on SQL Server; however, we will likely move to Hangfire Pro with Redis before the end of the year (if that matters for your best practices).