ML approach

Under certain conditions, the J-function is independent of $H_0$ as long as $H_0$ remains within the "region of sufficiency" (ROS) for ${\bf z}$ (see Section 2.3.3). Then, $H_0$ can even "float" with the data as long as it remains in the ROS. The ROS can be spanned by a parametric model $p({\bf x}; {\bf h})$ as long as (a) $p({\bf x}; H_0) = p({\bf x}; {\bf h}_0)$ for some parameter ${\bf h}_0$, and (b) ${\bf z}$ is a sufficient statistic for ${\bf h}$.

We can easily meet these conditions using the multivariate TED distribution (7.5) with $\alpha = {\bf A} {\bf h}$. Condition (a) is met by ${\bf h}={\bf 0}$. Condition (b) is met since (7.5) can be written $p({\bf x}; {\bf h}) = f({\bf z},{\bf h})$ for some function $f$. It therefore follows that
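
To see why condition (b) holds, suppose (7.5) has the exponential-family form suggested by $\alpha={\bf A}{\bf h}$, i.e. a product of scalar TED densities (an assumption here, since (7.5) is not reproduced in this section). Then, with ${\bf z}={\bf A}^\prime {\bf x}$,

$\displaystyle p({\bf x}; {\bf h}) \propto \exp(\alpha^\prime {\bf x}) = \exp\left( ({\bf A}{\bf h})^\prime {\bf x}\right) = \exp\left( {\bf h}^\prime {\bf A}^\prime {\bf x}\right) = \exp({\bf h}^\prime {\bf z}),$

where the constant of proportionality depends on ${\bf h}$ alone. The data enter only through ${\bf z}$, so the Fisher-Neyman factorization theorem gives $p({\bf x}; {\bf h}) = f({\bf z},{\bf h})$ directly.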

$\displaystyle \frac{p({\bf x}; H_0)}{p({\bf z}; H_0)} = \frac{p({\bf x}; {\bf h})}{p({\bf z}; {\bf h})},$ (17.16)

for any ${\bf h}$, where $p({\bf z}; {\bf h})$ is defined as the distribution of ${\bf z}$ when ${\bf x}\sim p({\bf x}; {\bf h})$. Since the ratio (17.16) does not depend on ${\bf h}$, we are free to evaluate it at whichever ${\bf h}$ makes $p({\bf z}; {\bf h})$ easiest to evaluate, namely the point where both $p({\bf x}; {\bf h})$ and $p({\bf z}; {\bf h})$ attain their maximum: the maximum likelihood (ML) point

$\displaystyle \hat{{\bf h}} = \arg \max_{{\bf h}} p({\bf x}; {\bf h}).$
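
As a numerical illustration, here is a minimal sketch of the ML step, assuming (7.5) is the product of scalar TED densities $p(x_i; \alpha_i) = \alpha_i e^{\alpha_i x_i}/(e^{\alpha_i}-1)$ on $[0,1]$ with $\alpha={\bf A}{\bf h}$; the density form and the function names (`ted_loglik`, `ml_point`) are our assumptions, not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize

def ted_loglik(h, A, x):
    """Log-likelihood of the assumed multivariate TED model:
    p(x; h) = prod_i a_i e^{a_i x_i} / (e^{a_i} - 1),  a = A h,  0 <= x_i <= 1."""
    a = A @ h
    # log(a / (e^a - 1)), written so it stays finite as a -> 0 (the uniform limit)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_norm = np.log(np.abs(a)) - np.log(np.abs(np.expm1(a)))
    log_norm = np.where(np.abs(a) < 1e-8, -a / 2.0, log_norm)
    return np.sum(a * x + log_norm)

def ml_point(A, x):
    """h_hat = argmax_h p(x; h), found by numerical optimization."""
    h0 = np.zeros(A.shape[1])  # start at h = 0, the uniform (H0) density
    res = minimize(lambda h: -ted_loglik(h, A, x), h0, method="BFGS")
    return res.x
```

Starting the search at ${\bf h}={\bf 0}$ is natural: by condition (a) this is the $H_0$ point, and the exponential-family log-likelihood is concave in ${\bf h}$, so the numerical search converges reliably.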

At this point, we can apply the central limit theorem to approximate $p({\bf z}; \hat{{\bf h}})$ by a Gaussian. The mean is given by

$\displaystyle {\cal E}\{{\bf z}\} = {\bf A}^\prime {\cal E}\{{\bf x}\} = {\bf A}^\prime \lambda( {\bf A} \hat{{\bf h}}),$

where ${\cal E}\{ \; \}$ is the expected value and $\lambda( \; )$ is the TED mean (7.3). Note that under $p({\bf x}; {\bf h})$,

$\displaystyle {\cal E}\{x_i^2\} = \frac{2}{\alpha_i^2} \left[ \frac{1-\frac{1}{2} \left( \alpha_i^2 -2\alpha_i + 2\right) e^{\alpha_i}}{1-e^{\alpha_i}} \right],$

where $\alpha_i$ is the $i$-th element of $\alpha = {\bf A} {\bf h}$ [97,98]. From this, we can solve for the variance of $x_i$,

$\displaystyle {\rm var}(x_i) = {\cal E}\{x_i^2\}-({\cal E}\{x_i\})^2 = \frac{1}{\alpha_i^2} - \frac{e^{\alpha_i}}{(e^{\alpha_i} -1)^2}.$ (17.17)
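
A small sketch of (17.17), continuing the code above, with a guard for the $\alpha_i \to 0$ limit (where the TED reduces to the uniform density and the variance tends to $1/12$):

```python
def ted_var(alpha, eps=1e-4):
    """Per-component TED variance, eq. (17.17); tends to 1/12 (uniform) as a -> 0."""
    a = np.asarray(alpha, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        v = 1.0 / a**2 - np.exp(a) / np.expm1(a) ** 2
    # near a = 0 the two terms cancel catastrophically; use the limit instead
    return np.where(np.abs(a) < eps, 1.0 / 12.0, v)
```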

The components of ${\bf x}$ are independent under the TED model, so the covariance of ${\bf z}$ is

$\displaystyle {\rm cov}({\bf z}) = {\bf A}^\prime \Lambda {\bf A},$

where $\Lambda$ is the diagonal matrix with elements (17.17). Finally, we apply (17.16) at ${\bf h}=\hat{{\bf h}}$, together with $p({\bf x}; H_0)=1$ (at ${\bf h}_0={\bf 0}$ we have $\alpha={\bf 0}$ and the TED reduces to the uniform density on the unit hypercube), to get

$\displaystyle \frac{1}{p({\bf z}; H_0)} = \frac{p({\bf x}; \hat{{\bf h}})}{{\cal N}({\bf z}; {\bf A}^\prime \lambda( {\bf A} \hat{{\bf h}}), {\bf A}^\prime \Lambda {\bf A})},$ (17.18)

where ${\cal N}({\bf z}; \mu, {\bf R})$ is the Gaussian distribution with mean $\mu$ and covariance ${\bf R}$. This approach can then be compared numerically with the reciprocal of (2.12).
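
Putting the pieces together, here is a hedged end-to-end sketch of (17.18), continuing the code above; the mean function $\lambda(a)=e^{a}/(e^{a}-1)-1/a$ used in `ted_mean` is our reading of the TED mean (7.3), which is not reproduced in this section:

```python
from scipy.stats import multivariate_normal

def ted_mean(alpha, eps=1e-4):
    """Assumed TED mean lambda(.) of (7.3): e^a/(e^a - 1) - 1/a, -> 1/2 as a -> 0."""
    a = np.asarray(alpha, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        m = np.exp(a) / np.expm1(a) - 1.0 / a
    return np.where(np.abs(a) < eps, 0.5, m)

def recip_pz_H0(A, x):
    """Right-hand side of (17.18): p(x; h_hat) / N(z; A' lambda(A h_hat), A' Lambda A)."""
    h_hat = ml_point(A, x)                 # ML point, from the sketch above
    alpha = A @ h_hat
    z = A.T @ x                            # z = A' x
    mu = A.T @ ted_mean(alpha)             # CLT mean, A' lambda(A h_hat)
    R = A.T @ np.diag(ted_var(alpha)) @ A  # CLT covariance, A' Lambda A
    log_num = ted_loglik(h_hat, A, x)      # log p(x; h_hat)
    log_den = multivariate_normal(mean=mu, cov=R).logpdf(z)
    return np.exp(log_num - log_den)       # 1 / p(z; H0)
```

The returned value can then be compared numerically with the reciprocal of (2.12), as suggested above.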