Why are observation probabilities modelled as Gaussian distributions in HMM?
An HMM is a statistical model with unobserved (i.e. hidden) states, used in recognition systems (speech, handwriting, gesture, ...). What distinguishes a DHMM from a CHMM is how the observation probabilities are modelled. In a CHMM the state space of the hidden variable is still discrete, but the observation probabilities are modelled as continuous distributions, typically Gaussians.
- Why are observation probabilities modelled as Gaussian distributions in a CHMM?
- Why are they the best distributions for recognition systems based on HMMs?
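To make the question concrete, here is a minimal sketch (with assumed, hypothetical parameter values) of what "Gaussian observation probabilities" means in a CHMM: each hidden state `k` emits a continuous observation whose likelihood comes from a per-state normal density, instead of from a discrete emission table as in a DHMM.

```python
import numpy as np

# Hypothetical 2-state CHMM with scalar observations.
# State k emits observations drawn from N(mu[k], sigma[k]^2); the
# mean and standard deviation values below are made up for illustration.
mu = np.array([0.0, 3.0])      # per-state means (assumed)
sigma = np.array([1.0, 0.5])   # per-state standard deviations (assumed)

def emission_prob(x, k):
    """Gaussian emission density b_k(x) = N(x; mu[k], sigma[k]^2)."""
    z = (x - mu[k]) / sigma[k]
    return np.exp(-0.5 * z * z) / (sigma[k] * np.sqrt(2.0 * np.pi))

# An observation near mu[1] is far more likely under state 1 than state 0.
x = 2.8
probs = [emission_prob(x, k) for k in range(2)]
```

In a DHMM the same role is played by a lookup into a finite emission matrix, which is why the observations must first be quantised into a discrete symbol alphabet.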
Topic markov-hidden-model speech-to-text gaussian python machine-learning
Category Data Science