Converting a negative loss term to its inverse

I'm training a classifier using this loss function: $$ \mathcal{L} = \mathcal{L}_{CE} - \lambda_1 \mathcal{L}_{push} +\lambda_2 \mathcal{L}_{pull} $$

I need to maximize a certain value via $\mathcal{L}_{push}$, which is why it has a negative coefficient. The problem is that while training the model, the total loss becomes negative and I keep getting random accuracy results. I tried changing $- \lambda_1 \mathcal{L}_{push}$ to $\lambda_1 \frac{1}{\mathcal{L}_{push}}$ for numeric stability, and the results are no longer bad. The thing is, I'm not sure whether inverting a loss term like this is the right thing to do, and whether it will still maximize $\mathcal{L}_{push}$ in every possible scenario.
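For reference, a minimal sketch of the two formulations as plain scalar arithmetic (the function name, the default $\lambda$ values, and the `eps` guard are assumptions for illustration, not from the question):

```python
def combined_loss(ce, push, pull, lam1=1.0, lam2=1.0, eps=1e-8):
    """Compare the subtractive and inverse formulations of the loss.

    ce, push, pull: the three loss terms (scalars or tensors).
    """
    # Original formulation: unbounded below, so the total loss can
    # go arbitrarily negative when lam1 * push dominates.
    subtractive = ce - lam1 * push + lam2 * pull

    # Inverse formulation: for push > 0 it is bounded below by
    # ce + lam2 * pull, and minimizing 1/push still increases push.
    # Caveats: the gradient shrinks like 1/push**2 as push grows
    # (weakening the maximization pressure), and the term blows up
    # as push -> 0 -- hence the eps guard. If push can be negative,
    # minimizing 1/push no longer maximizes push at all.
    inverse = ce + lam1 / (push + eps) + lam2 * pull

    return subtractive, inverse
```

The comments summarize the trade-off: the inverse keeps the objective bounded but changes the optimization dynamics, and its monotonicity only holds when $\mathcal{L}_{push} > 0$.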

Topic: training, loss-function, classification

Category: Data Science
