
Memory increases after adding a job with large arguments


We are using Hangfire for background processing, among other things to sync documents. The documents are not passed by reference; they are attached to the job as base64-encoded arguments, which can cause huge memory problems. I recognize that uploading entire documents this way is not very neat, but when I have a job with 10 documents attached and one document is 22 MB, I expect Hangfire to handle it.
The strange thing is that memory usage on my system grows to as much as 30 GB (on production systems it has gone up to 70 GB with only a few large documents).
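Part of the reason the numbers blow up so fast is the encoding itself: base64 maps every 3 raw bytes to 4 output characters, so each argument is about a third larger than the document it carries, and every (de)serialization of the job materializes further copies in memory. A quick illustration (Python; the 22 MB figure is from our case, the content is a stand-in):

```python
import base64

# Stand-in for one 22 MB document (size from our case, content dummy).
document = b"\x00" * (22 * 1024 * 1024)
encoded = base64.b64encode(document)

print(len(document))                            # 23068672 bytes (22 MB raw)
print(len(encoded))                             # 30758232 bytes (~33% larger)
print(round(len(encoded) / len(document), 3))   # 1.333
```

Ten such arguments therefore put roughly 300 MB of base64 text onto a single Job row.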

What stands out on the SQL Server is this query:

with cte as (
  select j.Id, row_number() over (order by j.Id desc) as row_num
  from [HangFire].Job j with (nolock, forceseek)
  where j.StateName = @stateName
)
select j.*, s.Reason as StateReason, s.Data as StateData, s.CreatedAt as StateChanged
from [HangFire].Job j with (nolock, forceseek)
inner join cte on cte.Id = j.Id
left join [HangFire].State s with (nolock, forceseek) on j.StateId = s.Id and j.Id = s.JobId
where cte.row_num between @start and @end

This query takes many seconds to execute, and on top of that it seems to be issued again and again while the previous execution has not yet returned, even when the worker count is set to 0.


  • Create a job with 10 arguments of 20 MB each that takes at least a few minutes to complete.
  • Put that job in progress.
  • Set the queue worker count to 0.
  • Restart the application.
  • Memory usage will increase.
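For reference, the steps above look roughly like this in C# (a sketch, not a standalone repro: it assumes the Hangfire and Hangfire.SqlServer packages are installed, and SyncDocuments is a name of our own, not part of Hangfire):

```csharp
using System;
using System.Linq;
using Hangfire;

public static class Repro
{
    // Hypothetical long-running job; the real one takes a few minutes.
    public static void SyncDocuments(string[] documents)
    {
        // ... sync each document ...
    }

    public static void Main()
    {
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("<connection string>");

        // One job carrying ten ~20 MB base64 arguments.
        var documents = Enumerable.Range(0, 10)
            .Select(_ => Convert.ToBase64String(new byte[20 * 1024 * 1024]))
            .ToArray();
        BackgroundJob.Enqueue(() => SyncDocuments(documents));

        // Worker count 0, so the job is never picked up; after an
        // application restart, memory starts to climb.
        using var server = new BackgroundJobServer(
            new BackgroundJobServerOptions { WorkerCount = 0 });
        Console.ReadLine();
    }
}
```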

Can someone look into it?