Enqueuing a large number of jobs

My application sometimes needs to enqueue a large number (~100s) of jobs.

Doing so seems to put SQL Server under a lot of stress, which then causes other queries from my app to fail with timeout exceptions.

Is there a better way to enqueue a lot of jobs, or some way to throttle Hangfire so it doesn't hammer SQL Server so hard?

Unfortunately you are probably running into SQL Server performance limitations. We switched to the Pro version and use Redis instead of SQL Server. We can queue up thousands of jobs, no problem.

If your database is on a different server, you might also be running into network bandwidth limits, since your incoming requests compete with your SQL traffic.

I was able to use a LocalDB SQL database for Hangfire, with just the LocalDB installation and no full SQL Server service, and it worked better than I expected. I was able to push through 30,000 requests in an hour, which enqueued 60,000 jobs. I had a hard time submitting real requests with data any faster than that, though.

My API just enqueues the job and moves on, to keep response time as low as possible. It can still time out occasionally due to local network conditions or problems with the virtualization environment, so I actually keep a queue in the calling application for the API calls. If an API submit times out, the request is resubmitted on a schedule. So even if the API service goes offline, nothing is lost.
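The resubmission queue described above can be sketched roughly like this (a minimal, language-agnostic illustration in Python; `submit_to_api`, the queue, and the flush schedule are hypothetical names for this sketch, not part of Hangfire):

```python
import queue

def make_resilient_submitter(submit_to_api, pending=None):
    """Wrap an API call so that timed-out requests are kept and retried later.

    `submit_to_api` is a hypothetical callable that raises TimeoutError on failure.
    """
    pending = pending if pending is not None else queue.Queue()

    def submit(payload):
        # Fast path: try the API once; on timeout, park the payload instead of losing it.
        try:
            submit_to_api(payload)
        except TimeoutError:
            pending.put(payload)

    def flush():
        # Run on a schedule (e.g. a timer): retry everything parked by earlier timeouts.
        for _ in range(pending.qsize()):
            payload = pending.get()
            try:
                submit_to_api(payload)
            except TimeoutError:
                pending.put(payload)  # still failing; try again on the next flush

    return submit, flush, pending
```

The point of the design is that a timeout never drops a request: it only moves it to the local queue, and the scheduled flush drains that queue once the API (or the network) recovers.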