Viewing Processing Jobs throws NullReferenceException

We have processed over 130 million jobs, and suddenly we are unable to open the Processing Jobs screen. We are using Hangfire Pro 2.1.2. Here is the error:

```
[NullReferenceException: Object reference not set to an instance of an object.]
   Hangfire.Pro.Redis.<>c.<ProcessingJobs>b__10_1(KeyValuePair`2 x) +7
   System.Linq.EnumerableSorter`2.ComputeKeys(TElement[] elements, Int32 count) +136
   System.Linq.EnumerableSorter`1.Sort(TElement[] elements, Int32 count) +33
   System.Linq.d__1.MoveNext() +185
   System.Collections.Generic.List`1..ctor(IEnumerable`1 collection) +481
   System.Linq.Enumerable.ToList(IEnumerable`1 source) +68
   Hangfire.Pro.Redis.RedisMonitoringApi.ProcessingJobs(Int32 from, Int32 count) +635
   Hangfire.Dashboard.Pages.ProcessingJobsPage.Execute() +391
   Hangfire.Dashboard.RazorPage.TransformText(String body) +30
   Hangfire.Dashboard.RazorPageDispatcher.Dispatch(DashboardContext context) +88
   Hangfire.Dashboard.<>c__DisplayClass1_1.<UseHangfireDashboard>b__1(IDictionary`2 env) +602
   Microsoft.Owin.Mapping.d__0.MoveNext() +461
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
   System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +68
   Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.d__7.MoveNext() +197
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
   System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +68
   Microsoft.Owin.Security.Infrastructure.d__5.MoveNext() +735
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
   System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +68
   Microsoft.Owin.Security.Infrastructure.d__5.MoveNext() +735
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
   System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +68
   Microsoft.Owin.Security.Infrastructure.d__5.MoveNext() +735
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
   System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +68
   Microsoft.Owin.Security.Infrastructure.d__5.MoveNext() +735
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
   System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +68
   Microsoft.Owin.Security.Infrastructure.d__5.MoveNext() +735
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
   System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +68
   Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.d__7.MoveNext() +197
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
   System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +68
   Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.d__12.MoveNext() +192
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
   Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.StageAsyncResult.End(IAsyncResult ar) +117
   System.Web.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +510
   System.Web.HttpApplication.ExecuteStepImpl(IExecutionStep step) +213
   System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +172
```

But could you tell me what version of the Hangfire.Pro.Redis package you have? Since 2.0.0 there are two independent packages: Hangfire.Pro with batches and Hangfire.Pro.Redis with Redis support.
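For reference, the two packages are configured independently, roughly like this (a sketch only; the connection string below is a placeholder for your own Redis endpoint):

```csharp
using Hangfire;

public static class HangfireConfig
{
    public static void Configure()
    {
        // Placeholder connection string; point this at your own Redis cluster.
        GlobalConfiguration.Configuration
            .UseRedisStorage("localhost:6379") // provided by Hangfire.Pro.Redis
            .UseBatches();                     // provided by Hangfire.Pro
    }
}
```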

We are using Hangfire.Pro.Redis 2.5.1.

Could you also tell me whether this problem goes away after refreshing the page once or multiple times? If it persists, could you show me the output of the “INFO” command issued to your Redis instance? I’d like to check the maxmemory-policy setting you are using.
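You can capture it by running the INFO command with redis-cli, or from code. A rough sketch using the StackExchange.Redis client (placeholder connection string, adjust to your cluster) would be:

```csharp
using System;
using System.Linq;
using StackExchange.Redis;

public static class RedisInfoCheck
{
    public static void Main()
    {
        // allowAdmin permits server-level commands; replace host/port with your own.
        using (var connection = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true"))
        {
            var server = connection.GetServer(connection.GetEndPoints().First());

            // The "memory" section of INFO contains the maxmemory_policy field (Redis 4.0+).
            foreach (var entry in server.Info("memory").SelectMany(section => section))
            {
                if (entry.Key == "maxmemory_policy")
                {
                    Console.WriteLine($"maxmemory-policy = {entry.Value}");
                }
            }
        }
    }
}
```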

It never goes away, even after multiple hard refreshes.
Here is the Redis INFO output:

"# Server
redis_version:4.0.11
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:4bc3aa215ca9cf78
redis_mode:cluster
os:Linux 4.15.0-34-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:7.3.0
process_id:109558
run_id:0c802287e940aabfececbaccca1ccfdda4a602a6
tcp_port:6379
uptime_in_seconds:39830776
uptime_in_days:461
hz:10
lru_clock:16643828
executable:/usr/bin/redis-server
config_file:/etc/redis/redis.conf

# Clients
connected_clients:39
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:537404888
used_memory_human:512.51M
used_memory_rss:954118144
used_memory_rss_human:909.92M
used_memory_peak:17207944960
used_memory_peak_human:16.03G
used_memory_peak_perc:3.12%
used_memory_overhead:46122008
used_memory_startup:1458328
used_memory_dataset:491282880
used_memory_dataset_perc:91.67%
total_system_memory:16873684992
total_system_memory_human:15.71G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:17179869184
maxmemory_human:16.00G
maxmemory_policy:noeviction
mem_fragmentation_ratio:1.78
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:41135
rdb_bgsave_in_progress:1
rdb_last_save_time:1593702069
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:5
rdb_current_bgsave_time_sec:2
rdb_last_cow_size:14700544
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:5
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:18206720
aof_current_size:446271371
aof_base_size:393940654
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:7412

# Stats
total_connections_received:26413
total_commands_processed:9241716538
instantaneous_ops_per_sec:2744
total_net_input_bytes:1678675381123
total_net_output_bytes:1039329741342
instantaneous_input_kbps:427.05
instantaneous_output_kbps:356.43
rejected_connections:0
sync_full:8
sync_partial_ok:0
sync_partial_err:8
expired_keys:78953337
expired_stale_perc:3.44
expired_time_cap_reached_count:35
evicted_keys:0
keyspace_hits:1484951777
keyspace_misses:216859831
pubsub_channels:7
pubsub_patterns:0
latest_fork_usec:95231
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:1
slave0:ip=192.168.10.160,port=6379,state=online,offset=213246616134,lag=1
master_replid:6b1f57eda2f71a99699125e7410b512585436601
master_replid2:2b2a2b0fc71ab89b57711350a1135015cd56448d
master_repl_offset:213246837390
second_repl_offset:1052017
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:213245788815
repl_backlog_histlen:1048576

# CPU
used_cpu_sys:325377.00
used_cpu_user:340954.03
used_cpu_sys_children:407288.41
used_cpu_user_children:2994118.00

# Cluster
cluster_enabled:1

# Keyspace
db0:keys=422689,expires=372012,avg_ttl=8895982
"

Thank you for the additional information; since your maxmemory-policy is noeviction, I’m now sure nothing was evicted. I was able to catch two causes of the NullReferenceException: one was an unnecessary sort that accessed possibly non-existent properties, and the other was related to a null “ServerId” value. I will release the fix as version 2.7.2 next week.
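To give a rough idea of the kind of fix (an illustrative sketch with made-up field names and types, not the actual Hangfire.Pro.Redis code), the sorting and projection need to tolerate a missing field and a null server identifier:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: job state lives in Redis hashes, so a "StartedAt" field
// may be absent and "ServerId" may be null; both must be handled without throwing.
public static class ProcessingJobsSketch
{
    public static void Main()
    {
        var jobs = new Dictionary<string, Dictionary<string, string>>
        {
            ["job-1"] = new Dictionary<string, string>
            {
                ["StartedAt"] = "2020-07-02T10:00:00Z",
                ["ServerId"] = "server-a:12345"
            },
            ["job-2"] = new Dictionary<string, string>
            {
                ["ServerId"] = null // no StartedAt field, null ServerId
            }
        };

        var processing = jobs
            .Where(x => x.Value != null)                                      // skip records with no hash at all
            .OrderBy(x => GetValueOrNull(x.Value, "StartedAt"), StringComparer.Ordinal)
            .Select(x => new
            {
                JobId = x.Key,
                ServerId = GetValueOrNull(x.Value, "ServerId")                // tolerate null/absent ServerId
            })
            .ToList();

        foreach (var job in processing)
        {
            Console.WriteLine($"{job.JobId} on {job.ServerId ?? "(unknown server)"}");
        }
    }

    // Returns null instead of throwing when the hash field is absent.
    private static string GetValueOrNull(IDictionary<string, string> hash, string key)
    {
        return hash.TryGetValue(key, out var value) ? value : null;
    }
}
```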

Not sure exactly what happened, but it is now working fine without any Hangfire modifications. Maybe a message that was scheduled for removal was removed after a certain time frame?

Hm, maybe some record that caused the issue expired, yes. Nevertheless, I’ve fixed two possible causes of the NullReferenceException on the Processing Jobs page and released Hangfire.Pro.Redis 2.7.2 yesterday. I don’t think it will solve everything, but it will at least show the actual reason for this problem.