Ability to add custom parameters without them being part of the method

sql-server
queues
recurring

#1

I want the ability to pass custom parameters to a scheduled job outside of the parameters that are already part of the method. A lot of our code uses context info (user/tenant/IP); much like HttpContext, each job has a context it runs in. I am not sure how to implement such a thing in Hangfire. We could change the signature of every method to accept context-specific info, but that is too much change just to pass around a simple job parameter that is common across all jobs.

For example, a RunReports(reportId) job needs to know which tenant it is running for so it can pull the appropriate records (the tenant drives the connection string, which determines which database the data is pulled from).

There are also other scenarios where passing a custom parameter changes how a job is executed without really changing the method logic. For example, I want to wrap a job in a SQL transaction based on a parameter, and that does not change the code inside the job (which can run with or without a transaction).

If you look at a traditional command-line interface, there are a lot of flags, like logging level, that have nothing to do with the job itself; they are infrastructure-level parameters that need to be passed to the scheduled job interface to change how jobs are run.

I understand the idea of keeping it simple, but extensibility should also be possible.


#2

Personally, I just inject this contextual info via Autofac DI :slight_smile:


#3

If the source of the flag or custom parameter is the point where you invoke the job, I am failing to understand how a DI container solves the issue. JobActivator/JobActivatorScope is responsible for creating the job, so adding something to the container needs to happen there, and in that context the only thing you have is the Job with its parameters again: catch-22. I am not sure if you have actually done something like this or not. If you have, please forward me an example and I am more than happy to look at how you achieved it.
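For what it's worth, newer Hangfire versions (1.5+) do expose job parameters to the activator: `JobActivatorContext` has a `GetJobParameter<T>` method, so a custom activator can read a parameter and build a tenant-specific scope. A minimal sketch, where the `"TenantId"` parameter name and the tenant-specific resolution logic are assumptions, not Hangfire APIs:

```csharp
using System;
using Hangfire;

// Sketch: an activator whose scope reads a job parameter and could
// resolve dependencies (e.g. a connection string) for that tenant.
public class TenantJobActivator : JobActivator
{
    public override JobActivatorScope BeginScope(JobActivatorContext context)
    {
        // Job parameters are available on the context in Hangfire 1.5+.
        var tenantId = context.GetJobParameter<string>("TenantId");
        return new TenantJobActivatorScope(tenantId);
    }
}

public class TenantJobActivatorScope : JobActivatorScope
{
    private readonly string _tenantId;

    public TenantJobActivatorScope(string tenantId)
    {
        _tenantId = tenantId;
    }

    public override object Resolve(Type type)
    {
        // Hypothetical: construct the job with tenant-specific services
        // looked up from _tenantId instead of plain Activator.CreateInstance.
        return Activator.CreateInstance(type);
    }
}
```

Setting the parameter in the first place still has to happen elsewhere (e.g. in a client filter at enqueue time).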


#4

If it were a matter of setting one flag that applies universally to every job, that would be easy, but when you have a job executing for a specific tenant, that is like header information: different on each request, and it has to be determined or passed at runtime.


#5

Yes, I have a few jobs which need to run under the authenticated user, and I have registered a dependency which injects a UserContext class that contains a few flags (determined in code or in the appsettings file).

For me, a job class which gets UserContext injected when doing BackgroundJob.Enqueue<MyJob>(job => job.Execute()); appears to work reasonably well. In Autofac you can use something like https://docs.autofac.org/en/latest/advanced/multitenant.html#tenant-identification or InstancePerRequest and dig the user id out of the claims principal, etc.
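The pattern above can be sketched as follows, assuming the Hangfire.Autofac package and a UserContext class of your own design (both the class shape and the registration lambda are illustrative, not prescribed by either library):

```csharp
using Autofac;
using Hangfire;

// Assumed application-defined context; the flags are examples.
public class UserContext
{
    public string UserId { get; set; }
    public bool SomeFlag { get; set; }
}

// The job gets its context via constructor injection, so the
// Execute() signature stays free of infrastructure parameters.
public class MyJob
{
    private readonly UserContext _context;

    public MyJob(UserContext context)
    {
        _context = context;
    }

    public void Execute()
    {
        // _context.UserId, _context.SomeFlag, etc. are available here.
    }
}

public static class Startup
{
    public static void Configure()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<MyJob>();
        builder.Register(c => new UserContext { UserId = "..." })
               .InstancePerLifetimeScope();

        // From the Hangfire.Autofac integration package.
        GlobalConfiguration.Configuration.UseAutofacActivator(builder.Build());
    }
}
```

Enqueuing then needs no context arguments at all: `BackgroundJob.Enqueue<MyJob>(job => job.Execute());`. The limitation the next post raises still applies: the registration lambda runs in the worker process, so it cannot see per-enqueue request state on its own.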


#6

Yes, we do something similar with a context resolver, but we are currently planning to host the job engine outside our ASP.NET Web API (OWIN) application, and hence it would not have access to the request context while resolving a job. I had to succumb to this restriction and created a new method on a base class that calls the method in the implementing class. Again, I am not sure how restricting parameters to the method signature achieves a better design.

In my opinion, having the library read the method signature and derive parameters from it is cool and intuitive, but restricting it to just that does not make sense to me.

Ironically, the library does have the ability to add job parameters, but they can only be set from a filter, not when you are enqueuing the job.
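Concretely, the filter-based mechanism looks like this: a client filter's `OnCreating` runs in the enqueuing process (where ambient context is still available) and can call `SetJobParameter`, and a server filter's `OnPerforming` can read it back before the job runs. A sketch, where `"TenantId"`, the literal tenant value, and how the value reaches the job body are all assumptions:

```csharp
using Hangfire;
using Hangfire.Client;
using Hangfire.Common;
using Hangfire.Server;

// Sketch: set a job parameter at enqueue time, read it before execution.
public class TenantFilter : JobFilterAttribute, IClientFilter, IServerFilter
{
    public void OnCreating(CreatingContext context)
    {
        // Runs where the job is enqueued, so request/ambient context
        // (e.g. the current tenant) can still be consulted here.
        context.SetJobParameter("TenantId", "acme");
    }

    public void OnCreated(CreatedContext context) { }

    public void OnPerforming(PerformingContext context)
    {
        var tenantId = context.GetJobParameter<string>("TenantId");
        // Hypothetical: expose tenantId to the job, e.g. via an
        // ambient holder or by configuring the activator scope.
    }

    public void OnPerformed(PerformedContext context) { }
}

// Registered globally: GlobalJobFilters.Filters.Add(new TenantFilter());
```

This sidesteps the method signature entirely, though the value still has to be discoverable by the filter at enqueue time rather than passed explicitly.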

I found a hack: use the queue name to carry my tenant parameter, and in a job filter's OnStateElection event read the queue name back out and set the tenant code before the job is activated. But then the dashboard misbehaved: it re-enqueued jobs into the default queue instead of the queue they were supposed to go to. Sigh!!!
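For anyone curious, the hack described above can be sketched with a state-election filter (the `"TenantCode"` parameter name and the idea of encoding the tenant into the queue name are this workaround's conventions, not Hangfire's), with the caveat the poster notes that the dashboard's requeue button ignores the original queue:

```csharp
using Hangfire.Common;
using Hangfire.States;

// Sketch: when a job is elected to the Enqueued state, copy its queue
// name (which encodes the tenant, e.g. "tenant_acme") into a job
// parameter so it is available before the job is activated.
public class QueueTenantFilter : JobFilterAttribute, IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        var enqueued = context.CandidateState as EnqueuedState;
        if (enqueued != null)
        {
            context.SetJobParameter("TenantCode", enqueued.Queue);
        }
    }
}
```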