The role of HMM's in class-specific classifiers

In classifying signals, the hidden Markov model (HMM) offers a major advantage but suffers one serious drawback. The advantage is that complex processes may be modeled using low-dimensional models, allowing the HMM to be trained with a realizable amount of data. The low dimension is achieved by dividing (segmenting) the data into small time steps, from which low-dimensional measurements are made. Although the total observation space is large (the number of steps times the dimension of the observations), the dimension of the observations at each step may be kept low.
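As a minimal sketch of this segmentation step, the fragment below splits a raw signal into overlapping frames and reduces each frame to a two-dimensional feature vector. The particular features (log energy and zero-crossing count) and the frame sizes are illustrative assumptions, not choices prescribed by the text; the point is only that the per-frame observation dimension stays small regardless of signal length.

```python
import numpy as np

def segment_features(x, frame_len=160, hop=80):
    """Split signal x into overlapping frames and reduce each frame to a
    low-dimensional feature vector (here: log energy and zero-crossing
    count -- hypothetical, illustrative feature choices)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    feats = np.empty((n_frames, 2))
    for i in range(n_frames):
        frame = x[i * hop : i * hop + frame_len]
        feats[i, 0] = np.log(np.sum(frame ** 2) + 1e-12)        # log energy
        feats[i, 1] = np.count_nonzero(np.diff(np.sign(frame)))  # zero crossings
    return feats

rng = np.random.default_rng(0)
x = rng.standard_normal(1600)   # a 1600-sample raw signal
F = segment_features(x)
# F has one 2-dimensional observation per frame: the total observation
# space is (number of frames) x 2, but each observation is only 2-D.
print(F.shape)
```

An HMM would then be trained on the frame-by-frame sequence `F`, so the per-state output densities need only model a 2-D space.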

The drawback is that an HMM must be carefully tailored to a specific type of random process. Not only is the segment size chosen specially, but so is the observation space (the feature set). An HMM designed for speech recognition is therefore unlikely to perform well on other types of processes. And if a separate HMM is designed for each class, the resulting likelihood values cannot be directly compared in a classifier, because each is computed on a different feature space. The class-specific method solves this problem by allowing two or more HMM's to be used as detectors for their respective model classes, while comparing their outputs optimally.
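The comparison problem, and the class-specific remedy, can be illustrated with a deliberately simple toy (scalar features instead of full HMM outputs; all distributions and feature choices here are my assumptions for illustration, not the author's). Each class model evaluates a likelihood only in its own feature space, so the raw values are incommensurable; normalizing each one by its feature's density under a single common reference hypothesis H0 puts them on a comparable footing, as in the class-specific literature.

```python
import numpy as np

def normal_logpdf(z, mu, sigma):
    # Log density of N(mu, sigma^2) at z.
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (z - mu) ** 2 / (2 * sigma ** 2)

rng = np.random.default_rng(1)
x = rng.standard_normal(100) + 1.0   # raw data (drawn here from "class 1")
n = len(x)

# Each class uses its own (hypothetical) scalar feature of the same raw data:
z1 = x.mean()   # class-1 feature
z2 = x[0]       # class-2 feature

# Each model's likelihood, evaluated only in its own feature space:
ll1 = normal_logpdf(z1, 1.0, 1 / np.sqrt(n))   # class-1 model for z1
ll2 = normal_logpdf(z2, 3.0, 1.0)              # class-2 model for z2
# Comparing ll1 with ll2 directly is invalid: different feature spaces.

# Class-specific correction: subtract each feature's log density under a
# common reference hypothesis H0 (here x ~ iid N(0,1) noise), under which
# both feature densities happen to be known in closed form in this toy:
j1 = ll1 - normal_logpdf(z1, 0.0, 1 / np.sqrt(n))
j2 = ll2 - normal_logpdf(z2, 0.0, 1.0)
# j1 and j2 approximate log p(x|Hk) - log p(x|H0), so the common H0 term
# cancels in the comparison and the decision below is meaningful.
decision = 1 if j1 > j2 else 2
```

Because the data were drawn from the class-1 distribution, the corrected statistic `j1` dominates and the toy classifier selects class 1; without the H0 correction the raw values `ll1` and `ll2` carry no such guarantee.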