AdaBoost - Show that adjusting the weights brings the error of the current iteration to 0.5
I'm trying to solve the following problem but I've gotten sort of stuck.
So for AdaBoost, the weighted error is $err_t = \frac{\sum_{i=1}^{N} w_i\, \mathbb{I}\left(h_t(x^{(i)}) \neq t^{(i)}\right)}{\sum_{i=1}^{N} w_i}$
and $\alpha_t = \frac{1}{2}\ln\left(\frac{1-err_t}{err_t}\right)$.
The weights for the next iteration are $w_i' = w_i \exp\left(-\alpha_t\, t^{(i)} h_t(x^{(i)})\right)$, where the labels $t^{(i)}$ and predictions $h_t(x^{(i)})$ each take values in $\{-1, +1\}$.
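To make sure I'm reading these definitions correctly, here is a small numerical sketch (the toy labels, predictions, and weights below are made up just for illustration):

```python
import numpy as np

# Toy setup (made up for illustration): labels and predictions are in {-1, +1}
t = np.array([+1, +1, -1, -1, +1])       # true labels t^{(i)}
h = np.array([+1, -1, -1, +1, +1])       # weak learner outputs h_t(x^{(i)})
w = np.array([0.1, 0.3, 0.2, 0.1, 0.3])  # current weights w_i

miss = (h != t).astype(float)            # indicator of misclassification
err = np.sum(w * miss) / np.sum(w)       # weighted error err_t
alpha = 0.5 * np.log((1 - err) / err)    # alpha_t

w_new = w * np.exp(-alpha * t * h)       # updated weights w_i'
```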
I have to show that the error of $h_t$ with respect to the new weights $w_i'$ is $\frac{1}{2}$, i.e., $err_t' = \frac{\sum_{i=1}^{N} w_i'\, \mathbb{I}\left(h_t(x^{(i)}) \neq t^{(i)}\right)}{\sum_{i=1}^{N} w_i'} = \frac{1}{2}$.
That is, we take the weak learner from iteration $t$ and evaluate it under the new weights, which are the weights that will be used to learn the $(t+1)$-st weak learner.
I simplified the update to $w_i' = w_i \sqrt{\frac{err_t}{1-err_t}}$ if example $i$ was correctly classified and $w_i' = w_i \sqrt{\frac{1-err_t}{err_t}}$ if it was misclassified. I then plugged this into the equation $err_t' = \frac{1}{2}$ and rearranged to get $\frac{err_t}{1-err_t} \cdot \frac{\sum_{i=1}^{N} w_i\, \mathbb{I}\left(h_t(x^{(i)}) = t^{(i)}\right)}{\sum_{i=1}^{N} w_i\, \mathbb{I}\left(h_t(x^{(i)}) \neq t^{(i)}\right)} = 1$, but at this point I hit a dead end, so I'm wondering how one might show the original claim.
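For what it's worth, the claim does seem to hold numerically. Continuing the toy sketch above, the error of the same $h_t$ under the new weights comes out to exactly $\frac{1}{2}$:

```python
# Error of the SAME weak learner h_t, but weighted by the new weights w_i'
err_new = np.sum(w_new * miss) / np.sum(w_new)
print(err_new)  # 0.5 (up to floating point) for every toy setup I've tried
```

So I suspect I'm just missing an algebraic step in the general argument.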
Thanks for any help!