hopsy.ess
- hopsy.ess(data, series=0, method='bulk', relative=False, prob=None, dask_kwargs=None)
Calculate an estimate of the effective sample size (ess).
- Parameters:
  - data (numpy.ndarray) – MCMC samples with data.shape == (n_chains, n_draws, dim).
  - series (int) – Compute a series of effective sample sizes every series samples, i.e. ess will be computed for data[:, :n] for n in range(series, n_draws+1, series). For the default value series == 0, ess will be computed only once for the whole data.
  - method (str) – Select the ess method. Valid methods are:
    - "bulk"
    - "tail" (uses prob, optional)
    - "quantile" (uses prob)
    - "mean" (old ess)
    - "sd"
    - "median"
    - "mad" (mean absolute deviance)
    - "z_scale"
    - "folded"
    - "identity"
    - "local" (uses prob)
  - relative (bool) – Return the relative ess, ress = ess / n.
  - prob (float, or tuple of two floats, optional) – Probability value for the "tail", "quantile" or "local" ess functions.
  - n_procs (int) – In combination with series: compute the series of effective sample sizes in parallel using n_procs subprocesses. Default is 1.
  - dask_kwargs (dict, optional) – Dask-related kwargs passed to wrap_xarray_ufunc().
- Returns:
The effective sample size estimate.
- Return type:
numpy.ndarray
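The prefix semantics of the series parameter can be sketched in plain NumPy. This is only an illustration of which slices get evaluated, not a call into hopsy itself; the array layout follows the documented data.shape == (n_chains, n_draws, dim):

```python
import numpy as np

# Simulated MCMC output: 4 chains, 1000 draws, 2 dimensions,
# matching data.shape == (n_chains, n_draws, dim).
rng = np.random.default_rng(0)
data = rng.standard_normal((4, 1000, 2))

# With series=250, ess is evaluated on growing prefixes of the draws:
# data[:, :n] for n in range(series, n_draws + 1, series).
series = 250
n_draws = data.shape[1]
prefix_lengths = list(range(series, n_draws + 1, series))
prefixes = [data[:, :n] for n in prefix_lengths]

print(prefix_lengths)     # [250, 500, 750, 1000]
print(prefixes[0].shape)  # (4, 250, 2)
```

The last prefix always covers the full chain, so the final entry of the series equals the single value returned with series=0.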
Notes
The basic ess diagnostic is computed by

\hat{N}_{eff} = \frac{MN}{\hat{\tau}}, \qquad \hat{\tau} = 1 + 2 \sum_{t=1}^{K} \hat{\rho}_t,

where M is the number of chains, N the number of draws per chain, \hat{\rho}_t is the estimated autocorrelation at lag t, and K is the last integer for which \hat{\rho}_{K+1} + \hat{\rho}_{K+2} is still positive. The current implementation is similar to Stan's, which uses Geyer's initial monotone sequence criterion (Geyer, 1992; Geyer, 2011).
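The basic estimator can be sketched in a few lines of NumPy. This is a simplified illustration of the formula above, not hopsy's actual implementation; the helper name basic_ess is hypothetical, and the autocorrelation is simply averaged over chains:

```python
import numpy as np

def basic_ess(x):
    """Basic effective-sample-size estimate for one parameter.

    x has shape (n_chains, n_draws). Sketch only: the autocorrelation
    is averaged over chains and the sum is truncated with Geyer's
    initial positive-pair criterion.
    """
    m, n = x.shape
    centered = x - x.mean(axis=1, keepdims=True)
    # Autocovariance at each lag, averaged over chains.
    acov = np.array([
        [np.dot(c[:n - t], c[t:]) / n for t in range(n)]
        for c in centered
    ]).mean(axis=0)
    rho = acov / acov[0]
    # Accumulate tau = 1 + 2 * sum(rho_t) while consecutive lag pairs
    # rho[t] + rho[t + 1] remain positive (Geyer's criterion).
    tau = 1.0
    t = 1
    while t + 1 < n and rho[t] + rho[t + 1] > 0:
        tau += 2 * (rho[t] + rho[t + 1])
        t += 2
    return m * n / tau

rng = np.random.default_rng(42)
iid = rng.standard_normal((4, 500))
# Since tau >= 1, the estimate cannot exceed m * n = 2000 here,
# and for independent draws it should come out near that bound.
print(f"ess for independent draws: {basic_ess(iid):.0f}")
```

For strongly autocorrelated chains (e.g. an AR(1) process with coefficient 0.9), tau grows well above 1 and the same estimator returns a much smaller value.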
References
- Vehtari et al. (2019). Rank-normalization, folding, and localization: an improved R-hat for assessing convergence of MCMC. https://arxiv.org/abs/1903.08008
- ArviZ ess documentation: https://arviz-devs.github.io/arviz/api/generated/arviz.ess.html
- Stan Reference Manual, Section 15.4.2 (Effective Sample Size): https://mc-stan.org/docs/2_18/reference-manual/effective-sample-size-section.html
- Gelman et al. (2014). Bayesian Data Analysis, Formula 11.8.