To illustrate the difficulty of the inversion problem, we inverted the features using linear-programming (LP) and quadratic-programming (QP) solvers, which impose different regularization measures. With the LP solver, we solved for the positive-valued vector ${\bf x}$ that met (5.12) while minimizing $t_1={\bf x}^\prime {\bf 1}$ (the sum of the elements of ${\bf x}$). We imposed a lower bound of $4\times 10^{-5}$ to prevent zero-valued elements. Because $t_1$ is fixed by constraint (5.12), the minimization has no effect, but the result at least provides a valid starting point. With the QP solver, we solved for the positive-valued vector ${\bf x}$ that met (5.12) while minimizing $t_2={\bf x}^\prime {\bf x}$ (the sum of the squared elements of ${\bf x}$). Both solutions are shown in Figure 5.10, along with the original raw spectrum. The LP solution has only three spectral values that exceed the lower bound! Such a solution might be useful if sparseness is desired, but it is clearly a poor spectral estimate. The QP solution is more reasonable, but it sits at the lower bound for more than half of the spectrum. Neither is a good approximation to the original raw spectrum. It is interesting that, despite the wide differences in how these spectral estimates look, all of the spectra lie on the manifold ${\cal M}({\bf z}^*)$. This illustrates the difficulty of feature inversion: meeting constraint (5.12) is clearly not enough. See software/test_linexp.m.
Figure 5.10: Linear and quadratic programming results. Light jagged line: original raw spectrum; circles: LP solution; dots: QP solution.
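The two inversions above can be sketched with off-the-shelf solvers. This is a minimal illustration, not the chapter's software (software/test_linexp.m): the feature matrix $A$, the spectrum length, and the random data below are hypothetical stand-ins, assuming only that constraint (5.12) takes the linear form $A{\bf x} = {\bf z}^*$ with ${\bf x}$ bounded below by $4\times 10^{-5}$.

```python
import numpy as np
from scipy.optimize import linprog, minimize

rng = np.random.default_rng(0)

# Hypothetical stand-in for (5.12): D linear features z* = A x of an
# N-point spectrum x (N - D = 62, as in the text).
N, D = 65, 3
A = np.abs(rng.standard_normal((D, N)))        # assumed positive feature matrix
x_true = np.abs(rng.standard_normal(N)) + 0.1  # illustrative "raw spectrum"
z_star = A @ x_true                            # feature vector z*
lb = 4e-5                                      # lower bound from the text

# LP inversion: minimize t1 = 1'x  subject to  A x = z*,  x >= lb.
lp = linprog(c=np.ones(N), A_eq=A, b_eq=z_star, bounds=[(lb, None)] * N)

# QP inversion: minimize t2 = x'x  subject to  A x = z*,  x >= lb.
qp = minimize(lambda x: x @ x, x0=np.full(N, 0.5),
              jac=lambda x: 2.0 * x,
              constraints={'type': 'eq',
                           'fun': lambda x: A @ x - z_star,
                           'jac': lambda x: A},
              bounds=[(lb, None)] * N, method='SLSQP')

# An LP vertex solution has at most D elements off the bound, which is
# why the LP spectrum in Figure 5.10 is so sparse; the QP spectrum is
# smoother but still pinned to the bound over much of its length.
print('LP converged:', lp.success, ' QP converged:', qp.success)
print('LP values above bound:', int(np.sum(lp.x > 2 * lb)))
print('QP values above bound:', int(np.sum(qp.x > 2 * lb)))
```

Neither solution resembles `x_true`, mirroring the conclusion drawn from Figure 5.10: satisfying the constraint alone does not regularize the inversion usefully.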

Sample spectra were generated by MCMC-UMS using the systematic approach. We also computed the MaxEnt solution $\hat{\mbox{\boldmath $\lambda$}}({\bf z}^*)$ by solving (5.15). Results are shown in Figure 5.11. The graph shows five separate spectra:

  1. Input spectrum ${\bf x}$: light jagged line.
  2. Typical MCMC-UMS sample: dark jagged dashed line.
  3. Average of 100,000 full MCMC-UMS iterations (systematic approach): circles. Each full iteration consists of $N-D=62$ simple iterations, updating each of the free manifold dimensions.
  4. Autoregressive (AR) spectrum obtained by the Levinson algorithm (traditional AR spectral estimate): triangles.
  5. MaxEnt solution $\hat{\mbox{\boldmath $\lambda$}}({\bf z}^*)$, obtained by solving (5.15): smooth dark solid line running through the circles and triangles.
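The flavor of a "full iteration" in item 3 can be conveyed with a simple uniform-sampling sketch. The following is an assumption-laden illustration, not the MCMC-UMS algorithm as defined in the chapter: it assumes the manifold is the linear polytope $\{{\bf x}\ge 0 : A{\bf x}={\bf z}^*\}$ and performs a systematic Gibbs-style scan over a fixed null-space basis, drawing each step uniformly over its feasible interval.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)

# Hypothetical stand-in for the manifold M(z*) = { x >= 0 : A x = z* }
# (the actual constraint (5.12) and MCMC-UMS update rule are defined in
# the chapter; this is only an illustrative sketch).
N, D = 65, 3
A = np.abs(rng.standard_normal((D, N)))    # assumed positive feature matrix
x = np.abs(rng.standard_normal(N)) + 0.1   # feasible starting spectrum
z_star = A @ x

V = null_space(A)                          # orthonormal basis, N x (N - D)

def full_iteration(x, V, rng):
    """One full iteration = N - D simple iterations, one per free direction.
    Moving along a null-space direction v preserves A x = z*; positivity
    restricts the step t to an interval, from which t is drawn uniformly."""
    for k in range(V.shape[1]):
        v = V[:, k]
        pos, neg = v > 1e-12, v < -1e-12
        t_lo = np.max(-x[pos] / v[pos])    # binding limit from components with v_i > 0
        t_hi = np.min(-x[neg] / v[neg])    # binding limit from components with v_i < 0
        x = x + rng.uniform(t_lo, t_hi) * v
    return x

total = np.zeros(N)
n_iter = 200                               # the text averages 100,000
for _ in range(n_iter):
    x = full_iteration(x, V, rng)
    total += x
mean_spec = total / n_iter
print('sample mean on manifold:', np.allclose(A @ mean_spec, z_star))
```

Because every step stays in the null space of $A$, each sample, and hence the sample mean, remains on the constraint set, consistent with the observation that all spectra in Figure 5.11 lie on ${\cal M}({\bf z}^*)$.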
Once again, all of the spectra lie on the manifold ${\cal M}({\bf z}^*)$. The important take-away from Figure 5.11 is that $\hat{\mbox{\boldmath $\lambda$}}({\bf z}^*)$ runs directly through the MCMC-UMS sample mean values (circles), confirming the reasoning in Section 5.3.2 for approximating the asymptotic mean of MCMC-UMS. The fact that $\hat{\mbox{\boldmath $\lambda$}}({\bf z}^*)$ also coincides with the traditional AR spectrum (aside from a slight deviation at the end bins) is not unexpected, because it involves the same optimization as traditional MaxEnt spectral estimation, which is known to coincide with the autoregressive method [34,35]. Although $\hat{\mbox{\boldmath $\lambda$}}({\bf z}^*)$ corresponds to existing methods in this case, it is enlightening to see a new interpretation in terms of sampling uniformly (with maximum entropy) on the manifold. It also leaves open the possibility that the sampling viewpoint is more general, something we confirm later. See software/test_linexp.m.
Figure 5.11: Combined spectral graph with five spectral estimates (see text for key).
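For reference, the traditional AR spectral estimate of item 4 comes from the Levinson (Levinson-Durbin) recursion applied to the autocorrelation lags. The following is a self-contained textbook sketch of that recursion, not the chapter's software; the AR(1) sanity check at the end uses a synthetic autocorrelation $r[k]=\phi^k$.

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: AR coefficients a (with a[0] = 1) and
    prediction-error power, from autocorrelation lags r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + a[1:m] @ r[m-1:0:-1]   # forward-prediction residual correlation
        k = -acc / err                      # reflection coefficient
        a[1:m] = a[1:m] + k * a[m-1:0:-1]   # update interior coefficients
        a[m] = k
        err *= 1.0 - k * k                  # shrink prediction-error power
    return a, err

def ar_spectrum(a, err, n_freq=128):
    """AR power spectrum S(w) = err / |1 + a1 e^{-jw} + ... + ap e^{-jpw}|^2."""
    w = np.linspace(0.0, np.pi, n_freq)
    E = np.exp(-1j * np.outer(w, np.arange(len(a))))
    return err / np.abs(E @ a) ** 2

# Sanity check: AR(1) autocorrelation r[k] = phi^k should recover a = [1, -phi, 0].
phi = 0.8
r = phi ** np.arange(3)
a, err = levinson_durbin(r, 2)
print(a, err)   # roughly [1, -0.8, 0] and 0.36
```

In Figure 5.11 this classical estimate (triangles) and the MaxEnt solution of (5.15) trace essentially the same curve, which is the coincidence between MaxEnt and AR spectral estimation noted in the text [34,35].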

Baggenstoss 2017-05-19