Reweighting to solve large/infinite variance issues (spikes)
I came across a model that seems to suffer from rare-event spikes: configurations with almost vanishing weight but large contributions to the observables (e.g. the energy). There is a rather generic algorithm proposed in https://journals.aps.org/pre/pdf/10.1103/PhysRevE.93.033303 in which one essentially samples one additional time slice that is then ignored during the measurements. The idea is that tracing out the auxiliary fields on one time slice (the one you are measuring on) guarantees a finite weight p(X) = sum_x p(X,x), given that p(X,x) is non-negative, i.e. there is no sign problem. One can also view this procedure as a reweighting scheme with <exp(-dtau H)>.

I think this can be implemented fairly easily for equal-time observables, potentially even on the Hamiltonian side with a bit of knowledge about the core routines. The time-displaced routines look trickier at first sight and might require some small changes in the core routines (I'm not sure yet).
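To make the reweighting viewpoint concrete, here is a minimal toy sketch (not the actual QMC implementation): the model, observable, and regularization eps are all hypothetical stand-ins. p(X) vanishes at X = 0 while O(X) diverges there, so sampling p directly gives a finite mean but infinite variance. Sampling instead from a strictly positive weight q(X) = p(X) + eps (playing the role of the marginal sum_x p(X,x) over the extra time slice) and reweighting by p/q keeps the combined factor O(X) p(X)/q(X) bounded, which tames the spikes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: configurations X in [-1, 1] with weight p(X) = X^2,
# which vanishes at X = 0, and an observable O(X) = 1/|X| that blows up exactly
# where p(X) -> 0. E_p[O] is finite, but E_p[O^2] diverges -> infinite variance.
def p(X):
    return X**2

def O(X):
    return 1.0 / np.abs(X)

# Strictly positive sampling weight q(X) = p(X) + eps, standing in for the
# traced-out extra-slice marginal sum_x p(X, x). eps is an arbitrary choice.
eps = 0.05

def sample_q(n):
    # Simple rejection sampling from q on [-1, 1]; q is bounded by 1 + eps.
    out = np.empty(0)
    while out.size < n:
        X = rng.uniform(-1.0, 1.0, size=2 * n)
        keep = rng.uniform(0.0, 1.0 + eps, size=X.size) < (p(X) + eps)
        out = np.concatenate([out, X[keep]])
    return out[:n]

X = sample_q(200_000)
w = p(X) / (p(X) + eps)  # reweighting factor p(X)/q(X), cancels the eps bias

# Ratio estimator <O> = E_q[w O] / E_q[w]; w * O(X) = |X| / (X^2 + eps) is
# bounded by 1/(2 sqrt(eps)), so the variance is now finite.
estimate = np.sum(w * O(X)) / np.sum(w)

# Exact reference value: int |X| dX / int X^2 dX over [-1, 1] = 1 / (2/3) = 1.5
print(estimate)
```

Without the reweighting (i.e. sampling p directly and averaging O), the running mean of the same estimator shows the characteristic spikes whenever a sample lands close to X = 0; with it, the error bar scales normally with the sample size.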