Discriminative vs. Generative Methods in Inference

The bulk of current work in statistical inference takes the discriminative viewpoint, in which one seeks to estimate the a posteriori probabilities $ p(H\vert{\bf x})$ directly. These methods include neural networks (NN), including deep learning, and support vector machines (SVM). The generative viewpoint instead estimates $ p(H\vert{\bf x})$ indirectly: one first estimates the probability distribution of $ {\bf x}$ under each relevant hypothesis $ H$, denoted by $ p({\bf x}\vert H)$, and then applies Bayes' rule:

$\displaystyle p(H\vert{\bf x})\propto p({\bf x}\vert H)\; p(H).$

Estimating $ p({\bf x}\vert H)$ directly is widely regarded as hopeless for high-dimensional $ {\bf x}$ [1]. Yet while discriminative methods are having their moment in the sun, we have barely scratched the surface of the potential of generative methods.
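As a concrete, deliberately simple illustration, the following sketch implements the generative decision rule above, assuming Gaussian class-conditional densities $ p({\bf x}\vert H)$ with hypothetical parameters; it is a numerical illustration of Bayes' rule only, not the method developed in this work.

\begin{verbatim}
# Generative classification via Bayes' rule: a minimal sketch.
# The class-conditional densities p(x|H) are taken to be Gaussian
# purely for illustration; all parameters are hypothetical.
import numpy as np
from scipy.stats import multivariate_normal

def posterior(x, class_models, priors):
    """Return p(H|x) for each hypothesis H.

    class_models : list of (mean, cov) pairs defining p(x|H)
    priors       : array of prior probabilities p(H)
    """
    likelihoods = np.array([multivariate_normal.pdf(x, mean=m, cov=c)
                            for m, c in class_models])
    joint = likelihoods * priors   # p(x|H) p(H)
    return joint / joint.sum()     # normalize over hypotheses

# Two-class example: a point equidistant from both class means
# yields p(H|x) = [0.5, 0.5].
models = [(np.zeros(2), np.eye(2)), (2.0 * np.ones(2), np.eye(2))]
priors = np.array([0.5, 0.5])
print(posterior(np.array([1.0, 1.0]), models, priors))
\end{verbatim}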

As we will see, generative models can be constructed using multiple features, and we may even build a separate generative model for each class hypothesis using its own feature set. In effect, PDF projection allows the feature transformation to be incorporated into the classifier itself; a sketch of this idea follows below. Figure 1.1 illustrates the concept that the theoretical framework allows the decision process to be a function of multiple features.
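To preview how this works, one common statement of PDF projection converts a generative model estimated in feature space into one on the raw data. Given a feature transformation $ {\bf z}=T({\bf x})$ and a fixed reference hypothesis $ H_0$ under which both $ p({\bf x}\vert H_0)$ and $ p({\bf z}\vert H_0)$ are known, the projected PDF is

$\displaystyle \hat{p}({\bf x}\vert H) = \frac{p({\bf x}\vert H_0)}{p({\bf z}\vert H_0)}\; p({\bf z}\vert H),$

where the notation $ T$, $ H_0$, and $ \hat{p}$ is illustrative at this point. Because each $ \hat{p}({\bf x}\vert H)$ is defined on the common raw-data space, each hypothesis $ H$ may employ its own transformation $ T$, and the resulting likelihoods remain directly comparable.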

Figure 1.1: Illustration of how feature extraction is incorporated into a classifier design by PDF projection.
\includegraphics[width=5.0in]{gendis.eps}
