Algorithm description

The basic idea of MCMC-UMS is to find one valid sample on the manifold, then choose a direction and move along a one-dimensional line in the linear subspace spanned by the columns of the matrix ${\bf B}$. If the manifold set is convex (it is in our case), the valid points on the line form a compact interval, whose endpoints can be solved for easily, and a new sample is drawn uniformly on that interval. The process then repeats with a newly chosen direction. We consider two ways of choosing directions:
  1. the random-directions approach, in which a new random direction is chosen at each iteration. This algorithm, also called hit-and-run, was suggested by Smith [35,36,37]. The random direction is chosen by drawing a point uniformly distributed on a hypersphere (by generating a vector of independent Gaussian random variables and normalizing it to unit length); a sketch follows this list. Smith claims in 2011 [35]: “This version of hit-and-run is considered to be the most efficient algorithm for generating an asymptotic uniform point if the set under consideration is convex”.

  2. the systematic approach of moving along the axes, simply changing each element of ${\bf u}$ one at a time. We view this systematic approach as a simplification of slice sampling in multiple dimensions [38], with a uniform target distribution; indeed, in slice sampling, the uniform samples are drawn within an “axis-aligned” hyper-rectangle.
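As a concrete illustration of the two choices, here is a minimal Python/NumPy sketch of generating a direction either way. It assumes ${\bf B}$ has orthonormal columns, so that a unit-norm ${\bf h}$ maps to a unit-norm ${\bf b}={\bf B}{\bf h}$; the names random_direction and systematic_direction are ours, introduced only for illustration.

    import numpy as np

    def random_direction(B, rng):
        # Hit-and-run: draw h uniformly on the unit hypersphere in R^(N-D)
        # by normalizing a vector of independent N(0,1) variates, then map
        # it into the column space of B.
        h = rng.standard_normal(B.shape[1])
        h /= np.linalg.norm(h)
        return B @ h  # unit norm when the columns of B are orthonormal

    def systematic_direction(B, k):
        # Systematic scan: take the columns of B in order, wrapping around
        # after N - D iterations.
        return B[:, k % B.shape[1]]

Normalizing a vector of independent Gaussians yields a point uniformly distributed on the hypersphere because the joint Gaussian density is rotationally invariant.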
Here is a summary of the MCMC-UMS algorithm.
  1. As a starting point, we need a vector ${\bf u}$ that is valid, i.e., one that produces an ${\bf x}$ lying in both ${\cal M}({\bf z}^*)$ and ${\cal P}^N$. Let ${\bf x}^0$ be such a starting point; then ${\bf u}^0 = {\bf B}^\prime {\bf x}^0$, which maps back to ${\bf x}^0$ using (5.10).
  2. We now seek a direction vector ${\bf b}\in {\cal R}^N$ of unit norm. Clearly ${\bf b}$ must lie in the column space of ${\bf B}$, which is the space in which we have freedom to move. In the systematic method, we let ${\bf b}$ be a column of the matrix ${\bf B}$; at each iteration we choose the next column, in order, returning to the first column after $N-D$ iterations. In the random-directions algorithm, we choose ${\bf b}$ to be a random direction within the column space of ${\bf B}$. To do this, we generate an $(N-D)$-dimensional vector ${\bf h}$ of independent standard normal variates, then normalize so that $\Vert{\bf h}\Vert=1$. We then let

    $\displaystyle {\bf b}={\bf B} {\bf h}.$

  3. We then create a new ${\bf x}$ as follows:

    $\displaystyle {\bf x}= {\bf x}^0 + \lambda {\bf b},$

    where $\lambda$ must lie in a compact interval of the real line, $\lambda \in (\lambda^L, \; \lambda^H)$, which includes zero (so that $\lambda=0$ reproduces the current value ${\bf x}^0$). We can easily find these limits as follows. Define the vector ${\bf c} = [c_1, c_2, \ldots, c_N]$ calculated from ${\bf b}$ as $c_i = b_i/x^0_i, \;\; 1\leq i \leq N$. Then $\lambda^L$ is $-1$ times the reciprocal of the largest positive value of ${\bf c}$, and $\lambda^H$ is $-1$ times the reciprocal of the most negative value of ${\bf c}$. These are the values of $\lambda$ at which some component of ${\bf x}$ first reaches zero, since each positivity constraint $x^0_i + \lambda b_i \geq 0$ is equivalent to $1 + \lambda c_i \geq 0$.

  4. We select $\lambda$ by drawing a uniform random variable in $(\lambda^L, \; \lambda^H)$. As long as $\lambda$ is sampled uniformly in this range, the resulting (new) value of ${\bf x}$ will be valid.
  5. We then set ${\bf x}^0={\bf x}$, and repeat (go to step 2).
We can stop after a number of iterations and let ${\bf x}$ be our generated sample of $G({\bf x}\vert{\bf z}^*)$. There is naturally a burn-in period required to “forget” the initial conditions. A code sketch of the full procedure follows.
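To make the steps concrete, the following is a hedged end-to-end sketch in Python/NumPy. It assumes that the inequality constraints defining ${\cal P}^N$ are the positivity constraints $x_i \geq 0$ (which is what the reciprocal rule of step 3 encodes), that ${\bf B}$ has orthonormal columns, and that the starting point x0 is valid and strictly positive; all function and variable names are ours, for illustration only.

    import numpy as np

    def lambda_interval(x, b):
        # Step 3: endpoints of the feasible interval for lambda under the
        # positivity constraints.  With c_i = b_i / x_i, the constraint
        # x_i + lambda * b_i >= 0 is equivalent to 1 + lambda * c_i >= 0.
        c = b / x
        pos, neg = c[c > 0], c[c < 0]
        lam_lo = -1.0 / pos.max() if pos.size else -np.inf  # guard: unbounded ray
        lam_hi = -1.0 / neg.min() if neg.size else np.inf
        return lam_lo, lam_hi

    def mcmc_ums(B, x0, n_iter, burn_in, systematic=False, seed=None):
        # Steps 2-5, iterated; samples are collected after the burn-in.
        rng = np.random.default_rng(seed)
        x = x0.astype(float)
        samples = []
        for t in range(n_iter):
            if systematic:
                b = B[:, t % B.shape[1]]             # next column, in order
            else:
                h = rng.standard_normal(B.shape[1])  # random direction in col(B)
                b = B @ (h / np.linalg.norm(h))
            lam_lo, lam_hi = lambda_interval(x, b)
            x = x + rng.uniform(lam_lo, lam_hi) * b  # step 4: uniform on interval
            if t >= burn_in:
                samples.append(x.copy())
        return np.array(samples)

In practice one might also periodically recompute ${\bf u} = {\bf B}^\prime {\bf x}$ and map back through (5.10) to control floating-point drift off the manifold, and thin the post-burn-in samples to reduce correlation between successive draws.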