
Very granular control over throttling and rate limits?

I’ve been using Hangfire for a lot of throttling/scheduling tasks… but I’m starting to find that I need more granular control.

One obstacle I’m running into is that I have thousands of different phone lines (a collection that changes at a whim), where each phone line has its own limit of X messages per minute and X calls per minute.

Not all phone lines have the same rate limit. One line (555-555-5555) may be allowed to send 10 messages per minute, while another (444-444-4444) may be allowed a thousand per minute. The “queue” has to be specific to the phone line; it can’t spill its jobs over to a different line.

Then I also face account-wide limits. Each line may be able to send X per minute, but there is also an account-wide limit regardless of which line is sending.
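To make the layered check concrete, here is a minimal in-memory sketch: a send goes through only if both the per-line counter and the account-wide counter have headroom in the current window. The line numbers and limits below are made up, and the plain dict stands in for whatever shared store (e.g. Redis keys) a real deployment would use:

```python
import time

# Hypothetical limits -- a real system would load these per
# line/account from configuration or the provider's API.
LINE_LIMITS = {"555-555-5555": 10, "444-444-4444": 1000}
ACCOUNT_LIMIT = 1200  # messages per minute across every line

def try_send(line, counters, now=None, window=60.0):
    """Allow a send only if BOTH the per-line and the account-wide
    counter have room in the current window. `counters` is a plain
    dict here standing in for shared state such as Redis keys."""
    now = time.monotonic() if now is None else now
    checks = [("line:" + line, LINE_LIMITS[line]),
              ("account", ACCOUNT_LIMIT)]
    updated = {}
    for key, limit in checks:            # verify every layer first...
        start, count = counters.get(key, (now, 0))
        if now - start >= window:        # window elapsed: reset counter
            start, count = now, 0
        if count >= limit:
            return False                 # one layer is exhausted
        updated[key] = (start, count + 1)
    counters.update(updated)             # ...then consume from all layers
    return True
```

Checking every layer before consuming from any of them avoids burning account-wide budget on a send that the per-line limit would have rejected anyway.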

It gets even more complicated when I factor in phone calls to/from these lines. A given line may only be allowed X concurrent calls, and it may only be allowed to place X calls per minute. It’s the same concept as messaging, but these rate limits are independent.
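The concurrency cap is a different shape from the per-minute limits: it’s a slot count, not a counter that resets on a window. A sketch of the idea, again in-memory (in a distributed setup this would map to an atomic counter, or a semaphore with expiring entries so a crashed worker’s slot is eventually reclaimed):

```python
class CallSlots:
    """Per-line concurrent-call cap: acquire a slot before placing a
    call, release it when the call ends. Unlike a per-minute counter,
    slots are returned explicitly rather than expiring on a window."""

    def __init__(self, max_concurrent):
        self.max_concurrent = max_concurrent
        self.active = 0

    def acquire(self):
        if self.active >= self.max_concurrent:
            return False        # line is saturated; retry later
        self.active += 1
        return True

    def release(self):
        self.active = max(0, self.active - 1)
```

A call would then have to pass both checks independently: acquire a concurrency slot and have headroom in the calls-per-minute counter.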

What I want to avoid is endlessly spamming these third parties with requests to send messages or place calls, only to be met with “rate limited” responses… and then having to turn around and reschedule/resend them later.

Hangfire is nice in that I can define separate “queues” in Startup, but it seems like I’d need thousands of separate queues (one for each number) and the ability to add and remove them on the fly… so I’m not sure whether that’s possible, or whether Hangfire is the right tool.

I wouldn’t use Hangfire to track this at all. You can track the various levels of rate limiting yourself, and Redis would be a good candidate for it.
Basic Rate Limiting Pattern | Redis.

You can take this concept and put it in pretty much any database; it’s just a matter of responsiveness.
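For reference, the pattern that Redis page describes boils down to per-minute counter buckets. Here it is simulated on a dict so the logic is visible; against real Redis the two dict lines become an INCR on the bucket key plus an EXPIRE of 59 seconds, which keeps the increment atomic across workers (and the same idea ports to any database with atomic counters):

```python
import time

def request_allowed(store, key, limit, now=None):
    """Sketch of Redis's basic rate-limiting pattern: bucket requests
    into per-minute counters and reject once a bucket passes the limit.
    With Redis, the dict operations below map to INCR <bucket> and
    EXPIRE <bucket> 59, so old buckets clean themselves up."""
    now = time.time() if now is None else now
    bucket = f"{key}:{int(now // 60)}"      # one counter per minute
    store[bucket] = store.get(bucket, 0) + 1
    return store[bucket] <= limit
```

Keying the bucket as something like `sms:555-555-5555` or `account:calls` gives you exactly the per-line, per-account, and per-channel separation described above, with a different `limit` looked up per key.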