Class-Specific Feature Mixture (CSFM)

To move beyond the one-class, one-feature assumption, CSFM represents each class $ H_k$ as an additive mixture of $ L$ sub-classes $ H_{k,l},
\; 1\leq l \leq L$, with relative probabilities of occurrence $ \alpha_{k,l}$ and individual sub-class PDFs $ p({\bf x}\vert H_{k,l})$. The mixture PDF for $ H_k$ is given by:

$\displaystyle p({\bf x}\vert H_k) = \sum_{l=1}^L \; \alpha_{k,l} \; p({\bf x}\vert H_{k,l}),$ (12.5)
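As a concrete illustration, here is a minimal sketch of evaluating the mixture PDF (12.5). The Gaussian sub-class densities, weights, and dimension are hypothetical placeholders chosen only so the example runs; they are not part of the CSFM derivation.

\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def mixture_pdf(x, alphas, subclass_pdfs):
    """Evaluate p(x|H_k) = sum_l alpha_{k,l} * p(x|H_{k,l}), eq. (12.5)."""
    return sum(a * pdf(x) for a, pdf in zip(alphas, subclass_pdfs))

# Example: one class with L = 2 Gaussian sub-classes in R^3 (illustrative).
alphas = [0.3, 0.7]                      # alpha_{k,l}; must sum to 1
subclass_pdfs = [
    multivariate_normal(mean=np.zeros(3)).pdf,       # p(x|H_{k,1})
    multivariate_normal(mean=2.0 * np.ones(3)).pdf,  # p(x|H_{k,2})
]
x = np.array([1.0, 0.5, -0.2])
print(mixture_pdf(x, alphas, subclass_pdfs))
\end{verbatim}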

where $ \sum_{l=1}^L \; \alpha_{k,l}=1.$ If we assume that each sub-class has its own feature $ {\bf z}_l$ (an approximate sufficient statistic) that distinguishes it from a sub-class-dependent reference hypothesis $ H_{0,l}$, then applying the PDF projection formula (2.2) to each sub-class PDF, $ p({\bf x}\vert H_{k,l}) = \frac{p({\bf x}\vert H_{0,l})}{p({\bf z}_l\vert H_{0,l})} \, p({\bf z}_l\vert H_{k,l})$, and substituting into (12.5) gives

$\displaystyle p({\bf x}\vert H_{k}) = \sum_{l=1}^L \; \alpha_{k,l} \; \frac{p({\bf x}\vert H_{0,l})}{p({\bf z}_l\vert H_{0,l})} \, p({\bf z}_l\vert H_{k,l}).$ (12.6)
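In practice, the projected mixture (12.6) is best evaluated in the log domain to avoid underflow. The sketch below assumes the per-sub-class feature maps and log-densities are supplied as callables; all names (the feature maps \verb|features[l]|, i.e. a hypothetical $ {\bf z}_l = T_l({\bf x})$, and the three density lists) are illustrative stand-ins for whatever class-specific models are actually in use.

\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def log_projected_mixture(x, alphas, features,
                          log_p_x_H0, log_p_z_H0, log_p_z_Hk):
    """log p(x|H_k) per eq. (12.6):
    log sum_l alpha_{k,l} * p(x|H_{0,l})/p(z_l|H_{0,l}) * p(z_l|H_{k,l})."""
    terms = []
    for l in range(len(alphas)):
        z_l = features[l](x)                 # feature for sub-class l
        terms.append(np.log(alphas[l])
                     + log_p_x_H0[l](x)      # log p(x|H_{0,l})
                     - log_p_z_H0[l](z_l)    # - log p(z_l|H_{0,l})
                     + log_p_z_Hk[l](z_l))   # + log p(z_l|H_{k,l})
    return logsumexp(terms)                  # log of the sum over l
\end{verbatim}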

Note that every class PDF is represented using the same library of $ L$ features $ {\bf z}_l$ and reference hypotheses $ H_{0,l}$; only the weights $ \alpha_{k,l}$ and the feature PDFs $ p({\bf z}_l\vert H_{k,l})$ depend on the class. The CSFM classifier is

$\displaystyle \hat{k}=\arg \max_{k=1}^M \; \left\{ \sum_{l=1}^L \; \alpha_{k,l} \; \frac{p({\bf x}\vert H_{0,l})}{p({\bf z}_l\vert H_{0,l})} \, p({\bf z}_l\vert H_{k,l}) \right\} \; p(H_{k}),$ (12.7)

which may be interpreted as a data-specific feature classifier: for each data sample $ {\bf x}$, the class-independent factor $ p({\bf x}\vert H_{0,l})/p({\bf z}_l\vert H_{0,l})$ tends to dominate the sum, effectively selecting a single feature $ {\bf z}_l$ to classify the sample.
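Putting the pieces together, a sketch of the decision rule (12.7) might look as follows, reusing the \verb|log_projected_mixture| sketch above; the \verb|models| and \verb|log_priors| containers are hypothetical packaging, not a prescribed interface.

\begin{verbatim}
import numpy as np

def csfm_classify(x, models, log_priors):
    """Return k maximizing log p(x|H_k) + log p(H_k), eq. (12.7).
    models[k] is a tuple (alphas, features, log_p_x_H0, log_p_z_H0,
    log_p_z_Hk) as expected by log_projected_mixture."""
    scores = [log_projected_mixture(x, *models[k]) + log_priors[k]
              for k in range(len(models))]
    return int(np.argmax(scores))
\end{verbatim}

Working in log probabilities makes the per-class scores a sum of a log-likelihood and a log-prior, so the argmax in (12.7) is unchanged while remaining numerically stable.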
