After reading the above-mentioned link regarding degree of parallelism:
If I understand correctly, a worker thread executes on the next available CPU core, and the number of worker threads can be configured using BackgroundJobServerOptions. Another question arises in my mind: the worker threads are created on the server start event.
Is it possible to create worker threads dynamically (i.e. create a worker thread only when I need to perform a job, and dispose of it on completion of the task)?
I am not facing any problem right now; I am just doing a feasibility study of Hangfire and other job schedulers for use in our web/desktop applications. So far, I have found Hangfire and the TPL to be the most appropriate candidates. Below is my checklist:
Should be scalable.
Should be configurable (i.e. responsibilities and jobs can be added using a config file; the number of workers should be x = (Environment.ProcessorCount * 5) or x = (number of jobs configured), whichever is less).
Robustness (i.e. ESB (Enterprise Service Bus) compatible; other systems in the enterprise can use it for background job processing seamlessly).
Testable framework (i.e. use of mocking and IoC for jobs/responsibilities).
I hope this gives you more appropriate detail regarding what I am looking for.
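The worker-count rule from the checklist could be sketched as follows (`configuredJobCount` is a hypothetical value that would come from a config file in practice):

```csharp
// Checklist rule: workers = min(ProcessorCount * 5, number of configured jobs).
int configuredJobCount = 8; // assumption: read from a config file in a real setup
int workerCount = Math.Min(Environment.ProcessorCount * 5, configuredJobCount);
```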
I would advise that you come up with a series of typical scenarios that you think you will need in your project as I think you will then find it easier to compare different frameworks and libraries.
The reason why I ask is because Hangfire and the TPL are both aimed at solving different problems. Hangfire is designed to handle persisted job queuing and distributed processing while the TPL is more typically aimed at performing ad-hoc parallel processing on the same machine.
I can go through your checklist and give some indication on each point and make some assumptions, but like I say the two are not entirely comparable.
Both are scalable. The TPL parallelises tasks by either executing them sequentially or on a threadpool that is proportional in size to the number of cores - In this case you are ultimately limited by the number of cores that you have. Hangfire processes tasks both across cores within a single machine and also across multiple servers - In this case you are limited by both the number of cores / machines and the performance of the job storage engine / database / Redis. Whether or not one or the other is suitable depends upon what you intend to do.
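To make the contrast concrete, here is a minimal sketch of the two models (assuming Hangfire storage is already configured; `SendEmail` and `customers` are hypothetical placeholders):

```csharp
// TPL: ad-hoc parallelism on the local machine, bounded by the core count.
Parallel.ForEach(customers, customer => SendEmail(customer.Id));

// Hangfire: each job is persisted to storage and picked up by any
// available worker, possibly on another server.
foreach (var customer in customers)
    BackgroundJob.Enqueue(() => SendEmail(customer.Id));
```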
Number of workers is adjustable for both, but I don’t quite understand why you would want the number of workers to be related to the number of jobs - surely that is a variable number? If you have more workers than jobs, they will just sit idle, periodically checking for new jobs.
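On the Hangfire side, the worker count is fixed when the server starts, via the options mentioned in the question; a small sketch:

```csharp
// Worker count is set once, at server start, through BackgroundJobServerOptions.
var options = new BackgroundJobServerOptions
{
    WorkerCount = Environment.ProcessorCount * 5 // e.g. the checklist's formula
};
using (var server = new BackgroundJobServer(options))
{
    // Workers now poll storage for jobs until the server is disposed.
}
```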
This is somewhat covered under point #1; to answer it further you would need to know your exact requirements. I think the only way to truly know is to benchmark it with a representative workload. On a single machine, the TPL is likely to be more performant, but Hangfire can scale out across a server farm if need be.
Hangfire is designed around robustness and guarantees that your jobs will be run at least once. The TPL does not offer persistence or robustness of jobs unless you write this part yourself. You are likely to end up writing something similar to what Hangfire does in order to make TPL job processing robust.
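Hangfire's at-least-once behaviour comes from persisting jobs and retrying them when they fail; the retry count can be tuned per job with the built-in filter. A sketch (`ProcessOrder` is a hypothetical job method):

```csharp
// A job that throws is returned to the queue and retried automatically;
// the AutomaticRetry filter controls how many attempts are made.
[AutomaticRetry(Attempts = 5)]
public void ProcessOrder(int orderId)
{
    // A failure here does not lose the job - it is retried,
    // which is what gives the at-least-once guarantee.
}
```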
Ultimately both just run the methods you ask, so there’s no reason why your jobs can’t be fully testable using either system. Hangfire supports dependency injection containers out of the box but the TPL would require you to handle this yourself.
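For the testability point, Hangfire lets you plug in an IoC container by overriding JobActivator, so job dependencies can be mocked in tests. A sketch, assuming a container exposed as IServiceProvider:

```csharp
// Sketch: resolve job instances from an IoC container instead of
// Activator.CreateInstance, so constructor-injected dependencies
// (and their mocks, in tests) are honoured.
public class ContainerJobActivator : JobActivator
{
    private readonly IServiceProvider _provider;

    public ContainerJobActivator(IServiceProvider provider)
    {
        _provider = provider;
    }

    public override object ActivateJob(Type jobType)
    {
        return _provider.GetService(jobType);
    }
}

// Registration (assumes 'provider' is your configured container):
// GlobalConfiguration.Configuration.UseActivator(new ContainerJobActivator(provider));
```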