Why are observation probabilities modelled as Gaussian distributions in HMM?

An HMM is a statistical model with unobserved (i.e. hidden) states, used in recognition systems (speech, handwriting, gesture, ...). What distinguishes a DHMM from a CHMM is the emission model: in a DHMM the observations are discrete symbols drawn from a finite alphabet, while in a CHMM the state space of the hidden variable is still discrete but the observations are continuous, with observation probabilities modelled as Gaussian distributions.
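To make the distinction concrete, here is a minimal sketch (not from the question; the matrix values and Gaussian parameters are made up for illustration) of how the observation probability b_j(o) is obtained in each variant: a table lookup for a DHMM versus evaluating a per-state Gaussian density for a CHMM.

```python
import numpy as np

def gaussian_pdf(o, mean, var):
    # Density of a 1-D Gaussian N(mean, var) evaluated at observation o.
    return np.exp(-0.5 * (o - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# DHMM: emissions come from a row-stochastic matrix B[state, symbol].
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
b_dhmm = B[0, 2]          # P(symbol 2 | state 0) -> simple lookup

# CHMM: each state has its own Gaussian over the continuous observation.
means = np.array([0.0, 3.0])   # hypothetical per-state means
vars_ = np.array([1.0, 0.5])   # hypothetical per-state variances
b_chmm = gaussian_pdf(1.2, means[0], vars_[0])  # density p(o=1.2 | state 0)
```

In practice a CHMM usually uses a mixture of Gaussians per state rather than a single one, but the lookup-versus-density contrast is the same.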

  1. Why are observation probabilities modelled as Gaussian distributions in CHMM?
  2. Why are they the best distributions for recognition systems based on HMMs?

Topic markov-hidden-model speech-to-text gaussian python machine-learning

Category Data Science


Continuous Hidden Markov Models (CHMMs) assume the observation probabilities are Gaussian for the same reason many models make that assumption:

  • Observed variables that are sums of many other variables tend to be Gaussian-distributed, by the central limit theorem.
  • Gaussians have a simple, clearly defined functional form, fully specified by a mean and a (co)variance.
  • Gaussians are well studied and commonly used in other statistical analyses.
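The first bullet can be checked empirically. Below is a small sketch (parameters chosen arbitrarily for illustration) that sums many independent uniform variables and verifies that the standardised sums have the moments of a standard Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
# Each "observation" is a sum of 50 independent Uniform(0, 1) variables;
# by the central limit theorem these sums are approximately Gaussian.
sums = rng.uniform(0.0, 1.0, size=(100_000, 50)).sum(axis=1)

# Standardise and inspect the higher moments.
z = (sums - sums.mean()) / sums.std()
skew = (z ** 3).mean()        # ~0 for a Gaussian
kurt = (z ** 4).mean() - 3.0  # excess kurtosis, ~0 for a Gaussian
```

A real signal feature (e.g. a cepstral coefficient) is similarly the result of many small additive effects, which is why a Gaussian is a reasonable default emission density.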
