Running up to N tasks on each of 1000 queues simultaneously

I have a slightly unusual requirement for a task scheduler, and I’d like some advice on whether Hangfire is an appropriate option, and on what approach might be used to implement the architecture.
I have a set of 1000 cloud pools, each containing 500 items, and each pool is only capable of running N jobs simultaneously.
I need to ensure each pool is kept stocked up with N jobs as long as there are any awaiting processing for that pool.
Obviously a single-queue approach won’t work: a job at the head of the queue destined for a full pool would block the other 999 pools from being offered jobs.
One option is to have 1000 queues, with a dedicated listener on each one, each configured with N worker instances. In theory, every time a worker finishes a job it will look in its own pool’s queue for another, and hence self-throttle. Would that require 1000 BackgroundJobServer instances?
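For reference, a minimal sketch of what that per-pool layout might look like in Hangfire. The queue names (`pool-0` … `pool-999`), the value of N, and the loop structure are all my assumptions for illustration; storage is assumed to be configured elsewhere via `GlobalConfiguration`:

```csharp
using System.Collections.Generic;
using Hangfire;

const int poolCount = 1000;
const int n = 4; // N: max simultaneous jobs per pool (placeholder value)

var servers = new List<BackgroundJobServer>();
for (var i = 0; i < poolCount; i++)
{
    var options = new BackgroundJobServerOptions
    {
        ServerName = $"pool-server-{i}",
        Queues = new[] { $"pool-{i}" }, // listen to this pool's queue only
        WorkerCount = n                 // at most N concurrent jobs for this pool
    };
    servers.Add(new BackgroundJobServer(options));
}
```

Jobs would then be enqueued to a specific pool’s queue, e.g. `new BackgroundJobClient().Create(() => Process(item), new EnqueuedState($"pool-{i}"))` — again assuming a hypothetical `Process` method.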

Alternatively, could I spin up 100 BackgroundJobServer instances, each handling ten pools, with (N x 10) workers apiece, each worker being offered jobs for a specific pool?
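That consolidated variant might be sketched like this (again with placeholder names and counts). One caveat worth verifying: as far as I know, Hangfire’s `WorkerCount` applies per server, not per queue, so the (N x 10) workers would be shared across the group’s ten queues rather than each queue being capped at N:

```csharp
using System.Collections.Generic;
using System.Linq;
using Hangfire;

const int serverCount = 100;
const int poolsPerServer = 10;
const int n = 4; // N (placeholder value)

var servers = new List<BackgroundJobServer>();
for (var s = 0; s < serverCount; s++)
{
    // Queues "pool-0".."pool-9" on server 0, "pool-10".."pool-19" on server 1, etc.
    var queues = Enumerable.Range(s * poolsPerServer, poolsPerServer)
                           .Select(i => $"pool-{i}")
                           .ToArray();

    servers.Add(new BackgroundJobServer(new BackgroundJobServerOptions
    {
        ServerName = $"group-server-{s}",
        Queues = queues,
        WorkerCount = n * poolsPerServer // N x 10 workers shared across the group
    }));
}
```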

I’m very open to suggestions here, and I’m happy to look at Business/Enterprise versions if necessary.