Since samples in consecutive iterations are correlated, the sample mean may require many iterations to reach a low error; the slower the mixing, the more samples are needed. The mixing rate of MCMC methods is commonly measured by the effective sample size (ESS): the number of independent samples that would yield the same estimation error. Thus, the change in ESS can be read from Figure 5.12 by drawing horizontal lines. Note that the error continues to decrease even after 100,000 full iterations, a strong indication that the proposed MaxEnt solution is a very good (if not exact) approximation to the asymptotic mean of MCMC-UMS. It is surprising, however, that the random directions method mixes much more slowly.
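One common way to estimate ESS from a single chain is n / (1 + 2 Σ ρ_k), truncating the sum of autocorrelations at the first non-positive term. The sketch below illustrates this estimator on a synthetic AR(1) chain with coefficient 0.9 (for which theory gives ESS ≈ n(1 − ρ)/(1 + ρ)); the chain, truncation rule, and constants are our own illustrative choices, not the data behind Figure 5.12.

```python
import random

def autocorr(xs, lag):
    """Sample autocorrelation of the chain xs at the given lag."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    return sum((xs[i] - m) * (xs[i + lag] - m) for i in range(n - lag)) / (n * var)

def effective_sample_size(xs, max_lag=100):
    # ESS = n / (1 + 2 * sum of autocorrelations),
    # truncated at the first non-positive autocorrelation estimate.
    s = 0.0
    for lag in range(1, max_lag):
        r = autocorr(xs, lag)
        if r <= 0:
            break
        s += r
    return len(xs) / (1.0 + 2.0 * s)

# AR(1) test chain: x[t] = 0.9 * x[t-1] + noise.
# Theory: ESS ~= n * (1 - 0.9) / (1 + 0.9) ~= n / 19.
random.seed(1)
n, rho = 20000, 0.9
xs = [0.0]
for _ in range(n - 1):
    xs.append(rho * xs[-1] + random.gauss(0.0, 1.0))

ess = effective_sample_size(xs)
print(ess)  # roughly n / 19, i.e. on the order of 1000 for n = 20000
```

Drawing a horizontal line in an error-versus-iterations plot such as Figure 5.12 compares, at a fixed error level, how many correlated iterations each sampler needs to match a given number of independent samples.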
To investigate this, we tried the systematic approach after randomizing the orthogonal basis matrix, post-multiplying it by an orthogonalized random matrix of matching dimension. Surprisingly, the performance then approximately matched that of random directions. It appears, then, that the advantage of the systematic method comes not from the systematic order in which the basis vectors are used, but from the fact that the basis vectors, as computed by SVD or QR decomposition of the matrix, are optimally aligned. Computing the basis with either SVD or QR had the same effect. The randomized matrix spans the same orthogonal subspace, yet mixes much more slowly. To understand this effect, consider a very long, thin convex region. If we start at one end of the region and use a random walk, we are likely to remain near that end for a long time, since movement precisely along the main axis is rare. If instead we move systematically, first perpendicular to the main axis and then parallel to it, we can cover the region quickly. Why this optimal alignment occurs, and whether it is problem-specific, remains unclear.
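The long-thin-region intuition can be reproduced with a small hit-and-run experiment: sample uniformly from a 100-by-1 axis-aligned box, alternating two chord moves per sweep, and compare a basis aligned with the box axes against random directions. The box shape, sweep counts, and the lag-1 autocorrelation diagnostic are our own illustrative choices, not the sampler used in the experiments above.

```python
import math
import random

def chord_step(p, d, lo, hi):
    """Move to a uniformly chosen point on the chord through p in
    direction d, clipped to the box [lo[0], hi[0]] x [lo[1], hi[1]]."""
    t_min, t_max = -math.inf, math.inf
    for i in range(2):
        if abs(d[i]) > 1e-12:
            a = (lo[i] - p[i]) / d[i]
            b = (hi[i] - p[i]) / d[i]
            t_min = max(t_min, min(a, b))
            t_max = min(t_max, max(a, b))
    t = random.uniform(t_min, t_max)
    return [p[0] + t * d[0], p[1] + t * d[1]]

def run_sweeps(direction_pair, n_sweeps, lo=(0.0, 0.0), hi=(100.0, 1.0)):
    # One sweep = two chord moves; record the long-axis coordinate per sweep.
    p = [50.0, 0.5]
    xs = []
    for _ in range(n_sweeps):
        for d in direction_pair():
            p = chord_step(p, d, lo, hi)
        xs.append(p[0])
    return xs

def lag1_autocorr(xs):
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1)) / (n - 1)
    return cov / var

random.seed(0)

def aligned_pair():            # basis aligned with the box axes
    return [(1.0, 0.0), (0.0, 1.0)]

def random_pair():             # two independent random unit directions
    out = []
    for _ in range(2):
        th = random.uniform(0.0, 2.0 * math.pi)
        out.append((math.cos(th), math.sin(th)))
    return out

a_aligned = lag1_autocorr(run_sweeps(aligned_pair, 5000))
a_random = lag1_autocorr(run_sweeps(random_pair, 5000))
print(a_aligned, a_random)
```

With the aligned basis, the move along (1, 0) draws the long-axis coordinate uniformly over the full chord, so consecutive sweeps are nearly independent and the autocorrelation sits near zero. With random directions, the chord is almost always clipped by the short axis, movement along the long axis per sweep is small, and the autocorrelation stays close to one.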