I need to run a main web application and a background task processor on the same machine.
I have to allocate the machine's resources like this: up to 90% of the resources to the web application, and up to 10% to the background processor.
If the machine is not under heavy load, these numbers should scale up and down as required. If the machine is under heavy load, there should be a guarantee that the web app gets most of the juice.
IIS 8 has this exact feature built in
I'm thinking of running my main web application in one app pool and hosting Hangfire in another web application dedicated just to background processing and the Hangfire dashboard.
Now to my question. The documentation says:
“Hangfire uses its own fixed worker thread pool to consume queued jobs. Default worker count is set to Environment.ProcessorCount * 5. This number is optimized both for CPU-intensive and I/O intensive tasks. If you experience excessive waits or context switches, you can configure amount of workers manually”
Is the thread pool that Hangfire uses somehow sourced from IIS? I'm thinking that, since it's hosted in the app pool process, it can't have more threads than the app pool process will allocate (which in this scenario would be limited depending on the throttling settings I've set up)?
If that's not the case, can you suggest any other strategy to achieve what I'm after?
I think using IIS’s built-in throttling is a very reasonable solution.
You have to keep in mind what happens when IIS throttles the CPU usage of a web application. The number of threads is actually irrelevant. To the best of my knowledge, IIS tells the Windows scheduler to limit the resources of a particular process or group of processes. You can have as many threads as you want, but the scheduler will limit the share of CPU time they receive.
This is shown in the following image from your links:
In it, there are 5 separate processes, but IIS is limiting the CPU usage of the entire group to 10%.
The “ThrottleUnderLoad” option should be perfect for your requirements.
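For reference, a sketch of how this could be set up with appcmd (the app pool names are hypothetical placeholders for yours; note that IIS stores the CPU limit in 1/1000ths of a percent, so 90% = 90000, and the ThrottleUnderLoad action requires IIS 8.5 or later):

```shell
rem Sketch: per-app-pool CPU throttling via appcmd.
rem Pool names are hypothetical; cpu.limit is in 1/1000 of a percent.
rem ThrottleUnderLoad only enforces the limit when the machine is busy.

%windir%\system32\inetsrv\appcmd.exe set apppool "MainWebAppPool" /cpu.limit:90000 /cpu.action:ThrottleUnderLoad

%windir%\system32\inetsrv\appcmd.exe set apppool "HangfirePool" /cpu.limit:10000 /cpu.action:ThrottleUnderLoad
```

The same settings can also be edited under the `<cpu>` element of each pool in applicationHost.config, or through the "Advanced Settings" dialog in IIS Manager.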
Thanks for replying so quickly.
Yeah that distinction between CPU time and threads is important.
Now I’m wondering if I should override the default parallelization setting?
My app servers will have 8 core CPU
Hypothetically, if there's high load and my main application is allocated 90% of the CPU time,
and the secondary application (hosting the Hangfire server) is getting 10%,
then with default Hangfire settings that 10% will be shared among
8 * 5 = 40 worker threads.
I suppose it's difficult to say whether 40 is OK or not,
since that would depend on how many jobs there are to process and how long each job takes?
Also, the settings won't stay at 90%/10% all the time - just when there's high load.
So I should just leave the default then…?
I don't think there's much harm in having 40 extra threads. Setting the worker count so that it allows maximum throughput during quiet times sounds like the best bet, but the best way to be sure would be to benchmark it with a representative set of jobs and load.
To be honest, it doesn't sound like extracting every last bit of performance from the server is that important, otherwise I'd expect you to have separate job processing and web application servers. It all depends on your scenario and requirements though.
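If you do decide to override the default later, it's a one-line change on the server options. A minimal sketch, assuming Hangfire 1.x hosted via OWIN with SQL Server storage (the connection string name and the worker count chosen here are illustrative, not recommendations):

```csharp
// Sketch: overriding Hangfire's default worker count at startup.
// BackgroundJobServerOptions.WorkerCount is the documented knob;
// "HangfireDb" and the count of ProcessorCount * 2 are illustrative.
using System;
using Hangfire;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("HangfireDb"); // assumed connection string name

        app.UseHangfireServer(new BackgroundJobServerOptions
        {
            // Default is Environment.ProcessorCount * 5 (40 on your 8-core box).
            // Lower it only if benchmarking shows excessive context switching.
            WorkerCount = Environment.ProcessorCount * 2
        });

        app.UseHangfireDashboard(); // the dashboard you mentioned hosting here
    }
}
```

Since this is the dedicated Hangfire web application, the dashboard and server can live in the same Startup; the throttled app pool then caps whatever worker count you pick.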