How to combine the losses in a multitask neural network?

When we train a model to learn two tasks at the same time (multitask learning), the neural network produces a loss for each task, and we then combine them into a single loss.

I've seen several works that do this, but they don't seem to combine the losses in a consistent way.

If I have tasks A and B, I have seen the following ways of computing the total loss:

total_loss = A_loss + B_loss
# or
total_loss = (A_loss + B_loss)/2
# or
total_loss = A_loss*A_coef + B_loss*B_coef

What are the differences between these approaches? I think the last one is for when I want to give one of the tasks more importance than the other, but I am not sure.
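
For concreteness, here is a minimal PyTorch-style sketch of how I understand the weighted variant would look in a training step. The network, the loss functions, the coefficient values, and the dummy data are all placeholders I made up, not from any particular paper:

import torch
import torch.nn as nn

# Hypothetical two-headed network: a shared trunk with one head per task.
class MultitaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
        self.head_a = nn.Linear(32, 10)  # task A: 10-class classification
        self.head_b = nn.Linear(32, 1)   # task B: regression

    def forward(self, x):
        h = self.shared(x)
        return self.head_a(h), self.head_b(h)

model = MultitaskNet()
loss_fn_a = nn.CrossEntropyLoss()
loss_fn_b = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())

# Placeholder coefficients; presumably these would be tuned per problem.
A_coef, B_coef = 1.0, 0.5

# Dummy batch of 8 examples with labels for both tasks.
x = torch.randn(8, 16)
y_a = torch.randint(0, 10, (8,))
y_b = torch.randn(8, 1)

out_a, out_b = model(x)
A_loss = loss_fn_a(out_a, y_a)
B_loss = loss_fn_b(out_b, y_b)

# Weighted combination: one backward pass updates the shared trunk
# using gradients from both task heads at once.
total_loss = A_loss * A_coef + B_loss * B_coef
optimizer.zero_grad()
total_loss.backward()
optimizer.step()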

