Covariance matrix

To run the MaxEnt correctly, we need the covariance matrix. No problem, you would say: let's accumulate enough bins and then determine it during post-processing. This is in principle OK, but since you need at least 2*Ltrot bins it becomes very expensive disk-wise. Our analysis is also cache limited, since we read in all the bins. Hence the suggestion to define a new flag

Running_average ==  true/false 

in the observable type that would control whether we keep a running average and covariance, or accumulate bins. Accumulating bins would be the default and would be backward compatible. If Running_average is true, then for a scalar observable we would store

# of bins,   <sign * O>,   <sign>,   <(sign * O)**2>,   <sign**2>

Each time we get a new bin, we update the file. In this way the size of the file never grows. This is of course not ideal, since we do not know a priori what the autocorrelation time is. I would also have to think about how this method would combine with a jackknife approach.
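
To make the idea concrete, here is a minimal sketch (in Python for readability, not the actual observable type, and all names are made up) of a running accumulator that stores exactly the five numbers listed above and is updated once per new bin. The mean and a naive error follow from these sums, under the usual assumption that the bins are already long compared to the autocorrelation time.

```python
import math


class RunningScalarObs:
    """Running accumulator for a sign-weighted scalar observable.

    Only the number of bins, <sign*O>, <sign>, <(sign*O)**2> and <sign**2>
    are kept, so the file written to disk never grows with the bin count.
    """

    def __init__(self):
        self.n_bins = 0
        self.sum_sO = 0.0    # running sum of (sign * O) per bin
        self.sum_s = 0.0     # running sum of sign per bin
        self.sum_sO2 = 0.0   # running sum of (sign * O)**2 per bin
        self.sum_s2 = 0.0    # running sum of sign**2 per bin

    def update(self, bin_sO, bin_s):
        """Fold the averages of one new bin into the running sums."""
        self.n_bins += 1
        self.sum_sO += bin_sO
        self.sum_s += bin_s
        self.sum_sO2 += bin_sO ** 2
        self.sum_s2 += bin_s ** 2

    def mean(self):
        """Reweighted estimate <O> = <sign * O> / <sign>."""
        return self.sum_sO / self.sum_s

    def naive_error(self):
        """Naive error bar from the variance of sign*O over bins, ignoring
        the correlation between numerator and denominator (this is the part
        a jackknife over stored bins would handle properly)."""
        n = self.n_bins
        var_sO = self.sum_sO2 / n - (self.sum_sO / n) ** 2
        return math.sqrt(max(var_sO, 0.0) / n) / abs(self.sum_s / n)
```

Each call to update would correspond to rewriting the small file with the new sums, so the stored state stays at a fixed size regardless of how many bins have been produced.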
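For the time-displaced observables that actually feed the MaxEnt, the same idea extends to a running covariance: besides the sum of the bin vectors one also keeps the sum of their outer products, from which the covariance matrix is reconstructed at the end. Again only a sketch with hypothetical names; the sign reweighting is left out for brevity, and the usual caveat about bins shorter than the autocorrelation time applies here too.

```python
import numpy as np


class RunningVectorObs:
    """Running mean and covariance for a vector observable,
    e.g. a time-displaced correlation function of length Ltrot."""

    def __init__(self, length):
        self.n_bins = 0
        self.sum_x = np.zeros(length)             # running sum of bin vectors
        self.sum_xx = np.zeros((length, length))  # running sum of outer products

    def update(self, bin_vector):
        """Fold one new bin vector into the running sums."""
        x = np.asarray(bin_vector, dtype=float)
        self.n_bins += 1
        self.sum_x += x
        self.sum_xx += np.outer(x, x)

    def mean(self):
        return self.sum_x / self.n_bins

    def covariance(self):
        """Covariance of the mean over bins:
        C_ij = sum_k (x_ki - xbar_i)(x_kj - xbar_j) / (n * (n - 1))."""
        n = self.n_bins
        xbar = self.sum_x / n
        scatter = self.sum_xx - n * np.outer(xbar, xbar)
        return scatter / (n * (n - 1))
```

Note that the estimated covariance matrix has rank at most n_bins - 1, so with fewer bins than vector components it is necessarily singular; this is essentially why the figure of at least 2*Ltrot bins is quoted above.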