I want the ability to pass custom parameters to a scheduled job beyond the parameters that are already part of the method signature. A lot of our code uses context info (user/tenant/ip); much like HttpContext, each job has a context it runs in. I am not sure how to implement such a thing in Hangfire. We could change the signature of every method to accept the context-specific info, but that is too much change just to pass around a simple job parameter that is common across all jobs.
For example, a RunReports(reportId) job needs to know which tenant it is running for so it can pull the appropriate records (the tenant determines the connection string, which in turn determines which database the data is pulled from).
There are also scenarios where a custom parameter only changes how a job is executed and does not drive any change in the method logic. E.g. I want to wrap a job in a SQL transaction based on a parameter, and it does not change the code inside the job (which can run with or without a transaction).
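To make it concrete, this kind of flag could be honored entirely in infrastructure without touching the job method, e.g. a server filter that opens a TransactionScope when a job parameter is set. A sketch only; the "UseTransaction" parameter name and the filter class are illustrative:

```csharp
using System.Transactions;
using Hangfire.Server;

// Wraps job execution in a TransactionScope when the (illustrative)
// "UseTransaction" job parameter is set. The job code itself is untouched.
public class TransactionJobFilter : IServerFilter
{
    public void OnPerforming(PerformingContext filterContext)
    {
        if (filterContext.GetJobParameter<bool>("UseTransaction"))
        {
            // Stash the scope so OnPerformed can complete/dispose it.
            filterContext.Items["tx"] = new TransactionScope();
        }
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        object value;
        if (filterContext.Items.TryGetValue("tx", out value))
        {
            var scope = (TransactionScope)value;
            if (filterContext.Exception == null)
            {
                scope.Complete(); // commit only if the job succeeded
            }
            scope.Dispose();
        }
    }
}
```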
If you take a traditional command-line interface, there are a lot of flags, like logging level, that have nothing to do with the job itself; they are infrastructure-level parameters that need to be passed through the scheduled-job interface to change how jobs are run.
I understand the idea of keeping it simple, but extensibility should also be possible.
If the source of the flag or custom parameter is the point where you invoke a job, I am failing to understand how a DI container solves the issue. JobActivator/JobActivatorScope is responsible for creating the job, so adding something to the container has to happen there, and in that context the only thing you have is the Job with its parameters, which is the same catch-22 again. I am not sure if you have actually done something like this or not. If you have, please forward me an example and I am more than happy to look at how you achieved it.
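To make the question concrete, the only shape I can imagine is something like the sketch below, and even that assumes the activator can see the job parameters (newer Hangfire versions pass a JobActivatorContext into BeginScope, which exposes GetJobParameter<T>). The "TenantId" parameter and the container wiring here are hypothetical:

```csharp
using System;
using Hangfire;

// Sketch: a custom activator that reads a job parameter (stamped
// earlier by a client filter) and builds a tenant-aware scope.
public class TenantJobActivator : JobActivator
{
    public override JobActivatorScope BeginScope(JobActivatorContext context)
    {
        // Read the parameter that was set at enqueue time.
        var tenantId = context.GetJobParameter<string>("TenantId");
        return new TenantScope(tenantId);
    }

    private class TenantScope : JobActivatorScope
    {
        private readonly string _tenantId;

        public TenantScope(string tenantId)
        {
            _tenantId = tenantId;
        }

        public override object Resolve(Type type)
        {
            // Placeholder: resolve from a tenant-specific container here
            // (e.g. pick the connection string keyed by _tenantId).
            Console.WriteLine("Resolving " + type.Name + " for tenant " + _tenantId);
            return Activator.CreateInstance(type);
        }
    }
}
```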
If it were a matter of setting one flag that applies universally, it would be easy. But when a job executes for a specific tenant, that is like header information: different on each request, and it has to be determined or passed at runtime.
Yes, I have a few jobs which need to run under the authenticated user, and I have registered a dependency which injects a UserContext class containing a few flags (determined in code or from the appsettings file).
For me, a job class which gets UserContext injected when doing BackgroundJob.Enqueue<MyJob>(job => job.Execute()); appears to work reasonably well. In Autofac you can use something like https://docs.autofac.org/en/latest/advanced/multitenant.html#tenant-identification or InstancePerRequest and dig the user id out of the claims principal, etc.
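Roughly what mine looks like (a sketch; it assumes the Hangfire.Autofac package, and the contents of UserContext and the "system" placeholder are specific to my setup):

```csharp
using System;
using Autofac;
using Hangfire;

public class UserContext
{
    public string UserId { get; set; }
    public bool DryRun { get; set; }
}

public class MyJob
{
    private readonly UserContext _userContext;

    public MyJob(UserContext userContext)
    {
        _userContext = userContext;
    }

    public void Execute()
    {
        // The context arrives via the constructor, so the job
        // method itself needs no extra parameters.
        Console.WriteLine("Running as " + _userContext.UserId);
    }
}

public static class JobsComposition
{
    public static void Configure()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<MyJob>().InstancePerLifetimeScope();

        // Resolved once per job scope; populate it from claims,
        // appsettings, or wherever your flags live.
        builder.Register(c => new UserContext { UserId = "system", DryRun = false })
               .InstancePerLifetimeScope();

        GlobalConfiguration.Configuration.UseAutofacActivator(builder.Build());

        BackgroundJob.Enqueue<MyJob>(job => job.Execute());
    }
}
```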
Yes, we do something similar with a context resolver, but we are currently planning to host the job engine outside our ASP.NET Web API (OWIN) and hence would not have access to the request context while resolving the job. I had to succumb to this restriction and created a new method on the base class that calls the method in the implementing class. Again, I am not sure how this restriction of not allowing parameters to be set except via the method signature achieves a better design.
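For reference, the base-class workaround looks roughly like this (a sketch; TenantContext and the names are mine, and the thread-static trick only holds for synchronous jobs, async would need AsyncLocal):

```csharp
using System;

// Our own ambient context holder, not a Hangfire type.
public class TenantContext
{
    [ThreadStatic]
    private static TenantContext _current;

    public static TenantContext Current
    {
        get { return _current; }
        set { _current = value; }
    }

    public TenantContext(string tenantId)
    {
        TenantId = tenantId;
    }

    public string TenantId { get; private set; }
}

public abstract class TenantJobBase
{
    // This is the method we actually enqueue, so the extra context
    // parameter lives in one place instead of on every job method.
    public void Execute(string tenantId)
    {
        TenantContext.Current = new TenantContext(tenantId);
        try
        {
            ExecuteCore(); // implementing class does the real work
        }
        finally
        {
            TenantContext.Current = null;
        }
    }

    protected abstract void ExecuteCore();
}
```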
In my opinion, having the library read the method signature and derive the parameters is cool and intuitive, but restricting it to just that does not make sense to me.
Ironically, the library has the ability to add job parameters, but they can only be accessed in a filter, not when you are enqueuing the job.
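Spelled out, this is what I mean: a client filter can stamp a parameter at creation time and a server filter can read it back before execution, but there is no first-class way to supply it at the Enqueue call site (a sketch; "TenantId" and TenantProvider are illustrative names):

```csharp
using Hangfire.Client;
using Hangfire.Server;

// Stand-in for however the enqueuing code knows the current tenant.
public static class TenantProvider
{
    public static string CurrentTenantId = "acme";
}

public class TenantParameterFilter : IClientFilter, IServerFilter
{
    public void OnCreating(CreatingContext filterContext)
    {
        // Client side, at enqueue time.
        filterContext.SetJobParameter("TenantId", TenantProvider.CurrentTenantId);
    }

    public void OnCreated(CreatedContext filterContext) { }

    public void OnPerforming(PerformingContext filterContext)
    {
        // Server side, just before the job method runs.
        var tenantId = filterContext.GetJobParameter<string>("TenantId");
        TenantProvider.CurrentTenantId = tenantId; // crude ambient hand-off for the sketch
    }

    public void OnPerformed(PerformedContext filterContext) { }
}
```

It would be registered with GlobalJobFilters.Filters.Add(new TenantParameterFilter());, but the tenant still has to come from ambient state rather than the Enqueue call itself.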
I found a hack: use the queue name to pass my tenant parameter, then in a job filter's OnStateElection event read the queue name back and set the tenant code before the job is activated. But then the dashboard misbehaved: it re-enqueued jobs into the default queue instead of the queue they were supposed to go to. Sigh!!!
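For completeness, the hack looks like this (a sketch; the "tenant_" queue-naming convention is mine):

```csharp
using Hangfire.States;

// The tenant code rides in the queue name and is copied into a job
// parameter during state election, before the job is activated.
public class QueueTenantFilter : IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        var enqueued = context.CandidateState as EnqueuedState;
        if (enqueued == null || !enqueued.Queue.StartsWith("tenant_")) return;

        // e.g. queue "tenant_acme" -> tenant code "acme"
        context.SetJobParameter("TenantCode", enqueued.Queue.Substring("tenant_".Length));
    }
}
```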
Hi,
seems like I have a similar problem. We need to support a tenant id for recurring jobs in Hangfire. I want this functionality available for both the Dashboard and our Web API (so that the Dashboard would see the same data, filtered by the current tenant).

According to our specification, a recurring job is created from a client-side page: the client data is sent to the Web API, where first an entry in our own table is created (the recurring job configuration) and then the Hangfire entry; they are bound 1:1 (another service is called where RecurringJobManager is used). I had to work around the tenant id by implementing the CRUD operations so that a tenant id suffix is appended to the recurring job ID and stripped when shown in the UI (the user does not have to know tenant IDs or any other info like that). The data is then filtered by tenant id after it is retrieved from the StorageConnection.

The problem is that the Dashboard does not know anything about the Web API: it calls the StorageConnection directly and gets ALL the data from the DB, for all tenants (sorry, I know you know that). So, to get the same picture in both the Dashboard and the Web API, at first I decided to override the StorageConnection classes (we use Oracle, via a third-party storage package for Hangfire), but I realized it is too much work (this class uses a lot of plain Dapper queries, and they would all require adding a tenant parameter to a LEFT JOIN...).

So finally I decided to store a TenantId job parameter in the job-creating event and deal with the filtering later, by filtering the data on the HF_JOB_PARAMETER "TenantId" field. I was surprised that filterContext.SetJobParameter does not actually save the parameter into the DB, as you wrote here (does it just store it somewhere? what if the connection gets dropped?). Seems like I have to use another way…
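For reference, the ID-suffix workaround is roughly this (a sketch; the ":" separator and ReportJob are illustrative names):

```csharp
using Hangfire;

public static class TenantRecurringJobs
{
    public static void AddOrUpdate(string recurringJobId, string tenantId, string cron)
    {
        // Stored in Hangfire as e.g. "daily-report:42".
        RecurringJob.AddOrUpdate(
            recurringJobId + ":" + tenantId,
            () => ReportJob.Run(tenantId),
            cron);
    }

    // Strip the suffix before showing the ID in our UI.
    public static string ToDisplayId(string storedId)
    {
        var i = storedId.LastIndexOf(':');
        return i < 0 ? storedId : storedId.Substring(0, i);
    }

    // Keep only the current tenant's rows when listing.
    public static bool BelongsTo(string storedId, string tenantId)
    {
        return storedId.EndsWith(":" + tenantId);
    }
}

public static class ReportJob
{
    public static void Run(string tenantId) { /* tenant-specific work */ }
}
```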