.NET Core Hangfire high memory usage in Docker (AWS ECS Fargate)

We are using AWS ECS Fargate to host our web, API, and Hangfire processes. They run under Docker. Hangfire handles both ad-hoc and scheduled jobs, and for the most part it works fine day to day. But one scheduled job that runs at night blows up memory usage (> 13GB on a 16GB instance) and kills the Hangfire container.

The crazy thing is that when I run this setup locally, memory usage (running the same scheduled job on the same files) never gets above 2GB.

It feels like an environment difference to me (how Linux and Windows differ in memory usage or garbage collection in the runtime).
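One quick way to check for a runtime-configuration difference is to log which GC mode and host characteristics each environment is actually running with. This is just a diagnostic sketch I'd suggest, not something from the original post; it only uses standard .NET APIs (GCSettings and Environment):

using System;
using System.Runtime;

public static class RuntimeDiagnostics
{
    // Logs whether the process is using server or workstation GC, plus basic
    // host information, so the local run and the ECS run can be compared.
    public static void LogGcAndHostInfo(Action<string> log)
    {
        log($"Server GC enabled: {GCSettings.IsServerGC}");
        log($"GC latency mode:   {GCSettings.LatencyMode}");
        log($"OS:                {Environment.OSVersion}");
        log($"Processor count:   {Environment.ProcessorCount}");
        log($"64-bit process:    {Environment.Is64BitProcess}");
    }
}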

@treygourley have you managed to resolve the issue? We are experiencing the same thing and think moving from .NET Core 2.1 to 3.1 or 5 might solve it.

Sorry for the delay in getting back to you. While we never definitively figured out why the two environments behaved so differently with the same app, we did discover that the AWS ECS images didn’t have swap enabled. So while our app appeared to use 2GB of memory locally, it may have had several more GB swapped out to disk; in AWS ECS all of that stays in memory.

We found this out by logging the application’s memory usage at various points while a job was processing. Here are the methods we used to output that information.

using System.Diagnostics;

public static class MemoryDiagnostics
{
    // Physical memory currently allocated to the process (resident set).
    public static long ApplicationWorkingMemoryUsage()
    {
        return Process.GetCurrentProcess()?.WorkingSet64 ?? 0;
    }

    // Memory allocated to the process that cannot be shared with other processes.
    public static long ApplicationPrivateMemoryUsage()
    {
        return Process.GetCurrentProcess()?.PrivateMemorySize64 ?? 0;
    }

    // Total virtual address space in use by the process.
    public static long ApplicationVirtualMemoryUsage()
    {
        return Process.GetCurrentProcess()?.VirtualMemorySize64 ?? 0;
    }
}
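For context, here is a minimal example of how helpers like these might be called from inside a Hangfire job. The ProcessFilesJob class and its logging format are assumptions for illustration, not code from the original post:

using Microsoft.Extensions.Logging;

public class ProcessFilesJob
{
    private readonly ILogger<ProcessFilesJob> _logger;

    public ProcessFilesJob(ILogger<ProcessFilesJob> logger) => _logger = logger;

    // Hangfire invokes this method; memory is logged before and after the work
    // so growth can be correlated with specific steps of the job.
    public void Run()
    {
        LogMemory("before");
        // ... actual job work goes here ...
        LogMemory("after");
    }

    private void LogMemory(string stage) =>
        _logger.LogInformation(
            "Memory ({Stage}): working={WorkingMB}MB private={PrivateMB}MB virtual={VirtualMB}MB",
            stage,
            MemoryDiagnostics.ApplicationWorkingMemoryUsage() / (1024 * 1024),
            MemoryDiagnostics.ApplicationPrivateMemoryUsage() / (1024 * 1024),
            MemoryDiagnostics.ApplicationVirtualMemoryUsage() / (1024 * 1024));
}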

What we ultimately did was make our jobs slimmer and use as little memory as possible. We also reined in EF Core and didn’t let its change tracking and caching hold on to too much memory. We eventually got memory usage down to a manageable amount (no more killed instances), BUT… we still run on pretty good-sized instances of 8GB or 16GB.
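As an illustration of what reining in EF Core can look like, here is a rough sketch, not the original poster's code: the AppDbContext, Document entity, and batch size are made up. It reads rows with AsNoTracking so the change tracker doesn’t cache every entity, and processes them in bounded batches instead of materializing the whole set:

using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Document
{
    public int Id { get; set; }
    public string Path { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Document> Documents => Set<Document>();
}

public class NightlyJob
{
    private readonly AppDbContext _db;

    public NightlyJob(AppDbContext db) => _db = db;

    public void Run()
    {
        const int batchSize = 1000;
        var lastId = 0;

        while (true)
        {
            // AsNoTracking keeps EF Core from holding every entity in the change
            // tracker, and keyset paging keeps each batch small and bounded.
            var batch = _db.Documents
                .AsNoTracking()
                .Where(d => d.Id > lastId)
                .OrderBy(d => d.Id)
                .Take(batchSize)
                .ToList();

            if (batch.Count == 0) break;

            foreach (var doc in batch)
            {
                // Process one document at a time instead of loading everything up front.
            }

            lastId = batch[^1].Id;
        }
    }
}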

Secondary note… if you haven’t upgraded to at least .NET Core 3.1, do so, since Microsoft has dropped support for all previous versions at this point (i.e. .NET Core 1.1 and 2.1). There isn’t much point upgrading to .NET 5 either, since its support will end shortly after .NET 6 comes out later this year.

Happy programming!

Hello. Have you managed to solve this issue? I have a similar one. My setup is .NET 6, PostgreSQL (accessed through IAM authentication), and K8s. My pod’s memory usage grows continuously until K8s restarts it for exceeding its memory limit. As a workaround I added caching of the generated auth token (a rough sketch of that kind of caching follows the link below), which stretched the time between restarts from about 2 hours to 2-3 days. AWS says that RDSAuthTokenGenerator does not have any logic that could lead to memory leaks. Does anyone have any suggestions on how to fix this particular issue?
Here is the link to RDS issue: Experiencing memory leaks when using RDS Token assembly · Issue #1973 · aws/aws-sdk-net · GitHub
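For what it’s worth, here is a minimal sketch of a token-caching workaround along the lines described above, assuming Microsoft.Extensions.Caching.Memory and the AWS SDK’s RDSAuthTokenGenerator; the cache key and lifetime are assumptions rather than the poster’s actual code (RDS IAM auth tokens are valid for 15 minutes, so refreshing a bit earlier is a common choice):

using System;
using Amazon;
using Amazon.RDS.Util;
using Microsoft.Extensions.Caching.Memory;

public class CachedRdsTokenProvider
{
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

    // Returns a cached IAM auth token, generating a new one only when the cached
    // entry has expired, instead of calling the generator on every connection open.
    public string GetToken(RegionEndpoint region, string host, int port, string dbUser)
    {
        var key = $"rds-token:{host}:{port}:{dbUser}";

        return _cache.GetOrCreate(key, entry =>
        {
            // Tokens are valid for 15 minutes; refresh a little earlier to be safe.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10);
            return RDSAuthTokenGenerator.GenerateAuthToken(region, host, port, dbUser);
        })!;
    }
}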