Concurrency for scalability

I’m trying to optimize Hangfire for scalability and do some capacity planning accordingly, but I’m wondering a few things:

  1. What’s the ideal maximum number of workers to set per server in Hangfire (via WorkerCount), considering that:
  • each job is both I/O- and CPU-bound (i.e. it fetches a remote data file over HTTP and then processes its contents)
  • the server is virtualized (VM/VPS) and dedicated to Hangfire
  • the server has 4–8 GB RAM
  • storage is Redis, running on a separate virtualized server (2–4 GB RAM and SSD)

Sidekiq, for example, recommends not setting the concurrency higher than 50.

  2. Based on the above scenario, would it make sense to host multiple instances of Hangfire (in separate processes) on the same server?

  3. Also, are there any additional configuration settings in Hangfire, Redis, or the Windows/Linux servers that could improve performance and throughput?
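
For reference, here’s a minimal sketch of where WorkerCount would be tuned, using Hangfire’s standard `BackgroundJobServerOptions` API. The Redis connection string and the multiplier are placeholders, and the `UseRedisStorage` call assumes one of the Redis storage packages is installed:

```csharp
using System;
using Hangfire;

class Program
{
    static void Main()
    {
        // Placeholder connection string; assumes a Hangfire Redis storage
        // package (e.g. Hangfire.Pro.Redis) provides UseRedisStorage.
        GlobalConfiguration.Configuration.UseRedisStorage("redis-server:6379");

        var options = new BackgroundJobServerOptions
        {
            // Hangfire's default is Environment.ProcessorCount * 5;
            // the multiplier here is just an example value to tune.
            WorkerCount = Environment.ProcessorCount * 5
        };

        using (var server = new BackgroundJobServer(options))
        {
            Console.WriteLine("Hangfire server started. Press ENTER to exit.");
            Console.ReadLine();
        }
    }
}
```

Since the jobs are partly I/O-bound, I assume the right WorkerCount is somewhere above the CPU count, but I don’t know how high is safe given the RAM and Redis constraints above.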



Anyone have any suggestions?