Later
Adding to the evolution of the Hangfire solution, I now have VMs lighting up around the network, processing transactions submitted from the client MVC5 SignalR application to Hangfire. This is very cool!!! SignalR is fast, efficient, flexible and powerful, facilitating an event-driven integration to arbitrary network applications. Hangfire's "fire and forget" design complements this, so that client applications can focus on what they need to do while consuming network services and behaving as respected citizens within the overall network application ecosystem. All this is great, but I was not able to use Hangfire's built-in statistics to create an optimal cost-based, constraint-driven Transaction Distribution algorithm, since I needed more granularity to calculate the next-best server to which to assign a background transaction job.
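For context, here is a minimal sketch of the submission side, assuming a Hangfire client choosing the target queue at submission time; the type and method names are illustrative placeholders, not the production code:

```csharp
using Hangfire;
using Hangfire.States;

public interface ITransactionProcessor
{
    void ProcessTransaction(string transactionId);
}

public class TransactionSubmitter
{
    private readonly IBackgroundJobClient _jobs = new BackgroundJobClient();

    // Submit a fire-and-forget transaction to the queue owned by the chosen server.
    // EnqueuedState lets us pick the queue at submission time instead of
    // hard-coding it with the [Queue] attribute. Hangfire expects lowercase queue names.
    public string Submit(string targetQueue, string transactionId)
    {
        return _jobs.Create<ITransactionProcessor>(
            p => p.ProcessTransaction(transactionId),
            new EnqueuedState(targetQueue));
    }
}
```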
Reading Redis: To achieve this, I needed to read and parse the Redis keys and values (hosted on OS X) from the Windows VM client application at transaction submission time. As more servers lit up around the network, new key combination behaviours surfaced, and with them code changes were required to incorporate a better understanding of how keys represent job processing state transitions for guaranteed transaction completion. I created a JSON model as the schema of this behavioural processing, which I will provide once I am sure I have caught all the combinations that derive the granular statistics I require for a cost-based Network Server Computability Algorithm.
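As a rough sketch of what that key reading looks like, assuming the default "hangfire:" key prefix used by the Redis storage (exact key names vary by Hangfire.Redis version, so treat these as assumptions) and StackExchange.Redis on the client:

```csharp
using StackExchange.Redis;

public class HangfireRedisReader
{
    private readonly IDatabase _db;

    public HangfireRedisReader(string redisHost)
    {
        // e.g. "osx-redis-box:6379" -- the host name here is illustrative.
        _db = ConnectionMultiplexer.Connect(redisHost).GetDatabase();
    }

    // Depth of a named queue (jobs committed but not yet picked up by a server).
    public long GetQueueDepth(string queue) =>
        _db.ListLength($"hangfire:queue:{queue}");

    // Hangfire servers currently registered (they appear and disappear as VMs come and go).
    public RedisValue[] GetServers() =>
        _db.SetMembers("hangfire:servers");

    // Raw state fields for a single job, from which state transitions can be parsed.
    public HashEntry[] GetJobFields(string jobId) =>
        _db.HashGetAll($"hangfire:job:{jobId}");
}
```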
Challenges Refining the Network Ecosystem Optimal Throughput Computability Algorithm: The algorithm, specific to the network application ecosystem, considers what's being processed at transaction submission time on each of the network VM servers, and what is committed (queued) to that server. Note that at the Enqueued state Hangfire has not yet bound the transaction to a processing server, even though the client application submitted it to a queue intended for an assigned server. It's not until the enqueued transaction has actually been picked up that Hangfire reveals, through its Redis keys, which server the transaction is mapped to. I need this earlier, since the algorithm is predicting the transaction completion time given all the dynamics happening on the network ecosystem. I want to avoid making any updates to Hangfire's core statistics code, so that I can refresh NuGet on builds to pick up the latest Hangfire updates without breaking the Transaction Processing Network (TPN) service.
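Purely to illustrate the shape of the cost calculation (the real algorithm weighs many more constraints), a next-best-server pick might look something like the following, where ServerSnapshot and its fields are hypothetical names for the per-server figures derived from the parsed Redis keys:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical per-server snapshot assembled from the parsed Redis keys.
public class ServerSnapshot
{
    public string Queue { get; set; }          // queue name appended to the server name
    public int ProcessingJobs { get; set; }    // jobs currently being worked on that VM
    public int EnqueuedJobs { get; set; }      // jobs committed (queued) to that VM
    public double JobsPerMinute { get; set; }  // measured throughput for that VM
}

public static class NextBestServer
{
    // Choose the queue whose predicted completion time for the new job is lowest:
    // (work in progress + work committed + the new job) divided by throughput.
    public static string Choose(IEnumerable<ServerSnapshot> servers) =>
        servers
            .OrderBy(s => (s.ProcessingJobs + s.EnqueuedJobs + 1) / s.JobsPerMinute)
            .First()
            .Queue;
}
```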
Preserving ‘Fire and Forget’: Recall that the client application submits the transaction to a named job queue which is assigned to a server that has appended the queue name to its server name, ensuring the job is processed by the intended server. Once assigned, Hangfire will requeue the job if the server fails before the job completes. Since the background application uses incremental indexing with checkpoint-restart recovery, it simply continues where it left off if a server fails. And since job results are stored in MongoDB on OS X, it's simply a matter of the background application jumping to the restart checkpoint to continue its incremental index processing on transaction documents. Thus, fire and forget is preserved.
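A minimal sketch of that checkpoint-restart pattern with the MongoDB C# driver follows; the database, collection and field names ("tpn", "checkpoints", "seq", "lastIndex") are assumptions for illustration only:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

public class IncrementalIndexer
{
    private readonly IMongoCollection<BsonDocument> _checkpoints;
    private readonly IMongoCollection<BsonDocument> _documents;

    public IncrementalIndexer(string connectionString)
    {
        var db = new MongoClient(connectionString).GetDatabase("tpn");
        _checkpoints = db.GetCollection<BsonDocument>("checkpoints");
        _documents = db.GetCollection<BsonDocument>("transactionDocuments");
    }

    // If Hangfire retries the job after a server failure, processing resumes
    // at the last committed checkpoint instead of starting over.
    public void Run(string jobId)
    {
        var byJob = Builders<BsonDocument>.Filter.Eq("_id", jobId);
        var checkpoint = _checkpoints.Find(byJob).FirstOrDefault();
        long lastIndex = checkpoint == null ? 0 : checkpoint["lastIndex"].ToInt64();

        var pending = Builders<BsonDocument>.Filter.Gt("seq", lastIndex);
        var bySeq = Builders<BsonDocument>.Sort.Ascending("seq");

        foreach (var doc in _documents.Find(pending).Sort(bySeq).ToEnumerable())
        {
            IndexDocument(doc);                 // application-specific indexing work
            lastIndex = doc["seq"].ToInt64();

            // Commit progress after each document so a restart loses no work.
            _checkpoints.ReplaceOne(
                byJob,
                new BsonDocument { { "_id", jobId }, { "lastIndex", lastIndex } },
                new ReplaceOptions { IsUpsert = true });
        }
    }

    private void IndexDocument(BsonDocument doc)
    {
        // Incremental index processing on the transaction document goes here.
    }
}
```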
The Devil Is in the Details: However, there is quite a bit that needs to be considered for the application to create an optimal cost-based Transaction Network Distribution algorithm, including where the source documents are located (network saturation minimization) and VM performance (number of processors, processor type, memory amount and speed, VM host disk performance, and so forth). There is also, thanks to incremental indexing, the opportunity to dynamically split large jobs into self-partitioned parallel jobs that take advantage of the network VMs available, servicing transaction partitions and recombining them when completed (it will be good when job dependency is added to Hangfire). Furthermore, since VMs auto-register in Hangfire when turned on and disappear when turned off, the network can self-expand dynamically, adapting to meet service demand on a temporal basis according to cost constraints (I'll also need to add the ability to optionally intercept Hangfire jobs if a VM is dead, moving them to the next-best server to respect global and local optima constraints within the parallel, distributed Transaction Processing Network (TPN) ecosystem). Thus, an application service layer intercepts and intermediates application transaction submissions on behalf of client applications, so that optimal transaction partitioning and distribution across the TPN is determined based on current and committed compute resource consumption.
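To give a feel for the self-partitioning idea (nothing more than a sketch; the type and parameter names are hypothetical), splitting a large job's document sequence range across the currently available queues could look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical description of one partition of a large transaction.
public class TransactionPartition
{
    public string Queue { get; set; }     // target server's queue
    public long StartSeq { get; set; }    // first document sequence number in this slice
    public long EndSeq { get; set; }      // last document sequence number in this slice
}

public static class JobPartitioner
{
    // Split a document sequence range evenly across the queues of the VMs
    // currently registered in Hangfire. The application recombines the results
    // once every partition reports complete (until Hangfire gains native
    // job-dependency support).
    public static IReadOnlyList<TransactionPartition> Split(
        long firstSeq, long lastSeq, IReadOnlyList<string> availableQueues)
    {
        long total = lastSeq - firstSeq + 1;
        long chunk = (long)Math.Ceiling(total / (double)availableQueues.Count);

        return availableQueues
            .Select((queue, i) => new TransactionPartition
            {
                Queue = queue,
                StartSeq = firstSeq + i * chunk,
                EndSeq = Math.Min(firstSeq + (i + 1) * chunk - 1, lastSeq)
            })
            .Where(p => p.StartSeq <= p.EndSeq)
            .ToList();
    }
}
```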
MRP Global and Local Optima: Eventually, the TPN optimal throughput distribution algorithm will also consider promises such as scheduled, calendar-repeatable jobs, and corporate priorities according to corporate business plans, when slotting transaction partitions across the TPN for optimal cost-sensitive throughput. In effect, the TPN Optimization Service takes on the posture of an MRP and Operational Planning service for client applications, facilitating both Global Optima (TPN corporate objectives are met and overall priorities respected) and Local Optima (transaction throughput needs are respected for best turnaround within global optima constraints).
Customized Ecosystem Throughput Optimization: Most of this is customized to the application mix, and is most probably noise to those who do not need to consider such a constraint complex, so I'll not bore readers with unnecessary detail. I will provide the JSON behavioural schema, since I think that probably provides value to others. If one already exists, please feel free to share it so I don't ramble on with unnecessary minutiae.