The bulk of current work in statistical inference takes the discriminative
viewpoint, wherein we seek to directly estimate the a posteriori
probabilities $p(H_m|\mathbf{x})$. Included in these methods are neural networks (NN),
which include deep learning, and support vector machines (SVM).
The generative viewpoint seeks to estimate $p(H_m|\mathbf{x})$
indirectly by first estimating the
probability distribution of the data $\mathbf{x}$ under the
relevant hypothesis $H_m$, denoted by
$p(\mathbf{x}|H_m)$,
then applying Bayes' rule:
$$p(H_m|\mathbf{x}) = \frac{p(\mathbf{x}|H_m)\,P(H_m)}{\sum_j p(\mathbf{x}|H_j)\,P(H_j)}.$$
This problem is held to be futile for high-dimensional $\mathbf{x}$
by widespread consensus in the field [1].
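As a minimal numeric illustration of Bayes' rule (the likelihood and prior values below are hypothetical), the class-conditional likelihoods combine with the priors to yield the posterior probabilities:

```python
import numpy as np

# Hypothetical values at one observed x:
likelihoods = np.array([0.02, 0.05, 0.01])  # p(x|H_j)
priors = np.array([0.5, 0.3, 0.2])          # P(H_j)

joint = likelihoods * priors                # p(x|H_j) P(H_j)
posterior = joint / joint.sum()             # p(H_j|x) by Bayes' rule
print(posterior)                            # sums to 1; H_2 is most probable here
```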
While discriminative methods are having their moment
in the sun, we have barely scratched
the surface in exploring the potential of generative methods.
As we will see, generative models can be constructed using multiple
features, and we may even build generative models
for each class assumption using its own feature.
In effect, PDF projection allows
incorporating the feature transformation into the classifier.
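To make this concrete, the following is a hedged sketch (the reference hypothesis, the two features, and the feature PDFs are all assumptions chosen for illustration, not the author's construction): PDF projection maps a feature PDF $g(\mathbf{z})$ back to the raw-data space via a reference hypothesis $H_0$ under which both $p_0(\mathbf{x})$ and the feature density $p_0(\mathbf{z})$ are known, giving $p(\mathbf{x}|H_m) = \frac{p_0(\mathbf{x})}{p_0(\mathbf{z})}\,g(\mathbf{z})$, so each class may use its own feature:

```python
import numpy as np
from math import lgamma, pi

N = 16  # dimension of the raw data x (assumed)

def norm_logpdf(z, mu=0.0, sigma=1.0):
    return -0.5 * np.log(2 * pi * sigma**2) - (z - mu) ** 2 / (2 * sigma**2)

def chi2_logpdf(z, k):
    # log density of a chi-squared variable with k degrees of freedom
    return (k / 2 - 1) * np.log(z) - z / 2 - (k / 2) * np.log(2) - lgamma(k / 2)

def log_p0_x(x):
    # reference hypothesis H0 (assumed): x ~ iid N(0,1), so p0(x) is known exactly
    return norm_logpdf(x).sum()

def projected_loglik(x, T, log_p0_z, log_g):
    # PDF projection: log p(x|Hm) = log p0(x) - log p0(z) + log g(z), with z = T(x)
    z = T(x)
    return log_p0_x(x) - log_p0_z(z) + log_g(z)

# Class H1 uses the sample mean; under H0, mean(x) ~ N(0, 1/N).
T1 = lambda x: x.mean()
log_p0_z1 = lambda z: norm_logpdf(z, 0.0, 1.0 / np.sqrt(N))
log_g1 = lambda z: norm_logpdf(z, 0.5, 1.0 / np.sqrt(N))  # assumed feature PDF for H1

# Class H2 uses the energy; under H0, sum(x^2) ~ chi-squared(N).
T2 = lambda x: (x ** 2).sum()
log_p0_z2 = lambda z: chi2_logpdf(z, N)
log_g2 = lambda z: chi2_logpdf(z / 2.0, N) - np.log(2.0)  # assumed: N(0,2) data

rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, N)  # a sample resembling the assumed H1
print(projected_loglik(x, T1, log_p0_z1, log_g1),
      projected_loglik(x, T2, log_p0_z2, log_g2))
```

A useful sanity check on the construction: if the feature PDF $g$ is set equal to the reference feature density $p_0(\mathbf{z})$, the correction factor cancels and the projected PDF reduces exactly to $p_0(\mathbf{x})$.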
In Figure 1.1, we illustrate the
concept that the theoretical framework allows the decision process
to be a function of multiple features.
Figure 1.1:
Illustration of how feature extraction
is incorporated into a classifier design by
PDF projection.

Baggenstoss
20170519