Just an FYI on what I had to do to work around this issue; it may be helpful for you and others.
According to odinserj's comment in this topic, the only real way to prevent duplicate processing is to implement logic that checks whether the job about to be enqueued has already been processed by a worker.
In my case, I added a "processed = true" attribute to the data stored for each processed job. This attribute is checked before each enqueue call for a given job, which prevents multiple workers from processing the same job more than once.
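The check-before-enqueue pattern above can be sketched as follows. Hangfire itself is a .NET library, so this is only a language-agnostic illustration in Python, not actual Hangfire code; all names here (`JobStore`, `enqueue_once`, `worker`) are hypothetical, and the single-process dictionary stands in for whatever persistent store holds the job data. A real implementation would also need an atomic check-and-set to be safe under concurrent workers.

```python
class JobStore:
    """Stands in for the persisted per-job data (hypothetical)."""
    def __init__(self):
        self._processed = {}  # job_id -> bool ("processed = true" attribute)

    def is_processed(self, job_id):
        return self._processed.get(job_id, False)

    def mark_processed(self, job_id):
        self._processed[job_id] = True

def enqueue_once(store, queue, job_id):
    """Enqueue the job only if no worker has already processed it."""
    if store.is_processed(job_id):
        return False  # already handled; skip the duplicate enqueue
    queue.append(job_id)
    return True

def worker(store, queue):
    """Process one job, then record the processed flag."""
    job_id = queue.pop(0)
    # ... do the actual work here ...
    store.mark_processed(job_id)

store, queue = JobStore(), []
enqueue_once(store, queue, "job-42")   # first enqueue goes through
worker(store, queue)                   # worker sets processed = true
enqueue_once(store, queue, "job-42")   # duplicate enqueue is rejected
```

The key point is that the flag is written by the worker after processing and read by every enqueue attempt, so a re-enqueue of an already-finished job becomes a no-op.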
In my opinion, this kind of safeguard should be built into the Hangfire engine itself, but it appears that this is either not feasible to implement or is a bug that isn't being addressed. Either way, the logic above has kept the issue from happening again for me.