Prove that the GDA decision boundary is linear

My attempt:
(a) I found that $a=\ln{\frac{P(X|C_0)P(C_0)}{P(X|C_1)P(C_1)}}$.

(b) Here is where I'm running into trouble. I'm plugging the distributions into $\ln{\frac{P(X|C_0)P(C_0)}{P(X|C_1)P(C_1)}}$ and I get $a=\ln{\frac{P(C_0)}{P(C_1)}}+\frac{1}{2}(x-\mu_1)^T\Sigma^{-1}(x-\mu_1)-\frac{1}{2}(x-\mu_0)^T\Sigma^{-1}(x-\mu_0)$.
I can see that $b=\ln{\frac{P(C_0)}{P(C_1)}}$, so $w^Tx$ should be $\frac{1}{2}(x-\mu_1)^T\Sigma^{-1}(x-\mu_1)-\frac{1}{2}(x-\mu_0)^T\Sigma^{-1}(x-\mu_0)$.
I'm not sure how to simplify this expression so that I can solve for $w$. Or did I do something wrong?

Tags: gaussian, discriminant-analysis, machine-learning

Category: Data Science


You did nothing wrong. If you expand the quadratic forms, you will see that the terms quadratic in $x$ cancel, leaving an expression that is linear in $x$.

\begin{align} a &= \ln \frac{P(C_0)}{P(C_1)} + \frac12(x - \mu_1)^T\Sigma^{-1}(x - \mu_1) - \frac12(x-\mu_0)^T\Sigma^{-1}(x-\mu_0)\\ &= \ln \frac{P(C_0)}{P(C_1)} + \frac12\left[x^T\Sigma^{-1}x-2x^T\Sigma^{-1}\mu_1+\mu_1^T\Sigma^{-1}\mu_1\right] - \frac12\left[x^T\Sigma^{-1}x-2x^T\Sigma^{-1}\mu_0+\mu_0^T\Sigma^{-1}\mu_0\right]\\ &= (\mu_0-\mu_1)^T\Sigma^{-1}x+\ln \frac{P(C_0)}{P(C_1)} +\frac12\left[\mu_1^T\Sigma^{-1}\mu_1-\mu_0^T\Sigma^{-1}\mu_0\right] \end{align}

Here the last step uses $x^T\Sigma^{-1}\mu = \mu^T\Sigma^{-1}x$, which holds because $\Sigma^{-1}$ is symmetric. Reading off the coefficients, $w^T = (\mu_0-\mu_1)^T\Sigma^{-1}$, i.e. $w = \Sigma^{-1}(\mu_0-\mu_1)$, and $b = \ln \frac{P(C_0)}{P(C_1)} +\frac12\left[\mu_1^T\Sigma^{-1}\mu_1-\mu_0^T\Sigma^{-1}\mu_0\right]$, so $a = w^Tx + b$ is affine in $x$ and the decision boundary $a = 0$ is a hyperplane.
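As a sanity check, the cancellation can be verified numerically. This is a minimal sketch (the means, covariance, and variable names are made up for illustration): it draws random parameters with a shared covariance, as the derivation assumes, and confirms that the difference of the two quadratic forms agrees with the linear form $(\mu_0-\mu_1)^T\Sigma^{-1}x + \frac12(\mu_1^T\Sigma^{-1}\mu_1 - \mu_0^T\Sigma^{-1}\mu_0)$ at random points.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
mu0, mu1 = rng.normal(size=d), rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)   # symmetric positive definite shared covariance
Sinv = np.linalg.inv(Sigma)

def quad_diff(x):
    """Difference of the two quadratic forms from the derivation."""
    return (0.5 * (x - mu1) @ Sinv @ (x - mu1)
            - 0.5 * (x - mu0) @ Sinv @ (x - mu0))

# Coefficients read off from the expanded expression
w = Sinv @ (mu0 - mu1)
c = 0.5 * (mu1 @ Sinv @ mu1 - mu0 @ Sinv @ mu0)

# The quadratic terms in x cancel: quad_diff is linear (affine) in x
for _ in range(5):
    x = rng.normal(size=d)
    assert np.isclose(quad_diff(x), w @ x + c)
```

If the covariances of the two classes differed, the $x^T\Sigma^{-1}x$ terms would not cancel and the boundary would be quadratic (QDA rather than GDA with a shared covariance).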
