Are cost functions typically normalized?
I'm new to writing cost functions for optimization, so this may be a basic question or a misinterpretation on my part.
I have multiple cost functions that I'd like to add up into one total cost function. Here is a simplified example:
Say I want to maximize the bounciness $b$ of a bouncy ball while minimizing its weight $w$. The weight feeds into some function that computes the bounciness, but we don't know what that function looks like (it's a black box).
If I define two different cost functions as follows:
$ C_1 = \frac{1}{b} $
$ C_2 = e^{w} $
And I define a total cost function combining both of them:
$ C_{tot} = C_1 + C_2 $
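In code, my setup looks roughly like this toy sketch (in my real problem the mapping from $w$ to $b$ is a black box, so the function names here are just placeholders):

```python
import numpy as np

def c1(b):
    # Cost term that rewards bounciness: minimizing 1/b maximizes b.
    return 1.0 / b

def c2(w):
    # Cost term that penalizes weight: e^w grows as w grows.
    return np.exp(w)

def c_total(b, w):
    # Naive total cost: a plain, unweighted sum of the two terms.
    return c1(b) + c2(w)
```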
Let's say $C_1$ varies wildly, taking values on the order of 10,000 to 1,000,000, while $C_2$ only varies on the order of 0.1 to 10 depending on the input $w$.
I want to minimize the total cost, of course. Is it typical to normalize the individual cost functions so that they carry comparable weight in the total cost function, for example by rescaling each so it only varies between 0 and 1?
Otherwise I can see a minimization algorithm glossing over variations in $C_2$ in favor of the much larger changes found in $C_1$.
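What I have in mind is something like this rough sketch, where each term is rescaled to roughly $[0, 1]$ before summing. The min/max bounds and the weights `w1`/`w2` are made up for illustration:

```python
import numpy as np

def c1(b):
    return 1.0 / b    # as defined above

def c2(w):
    return np.exp(w)  # as defined above

# Rough min/max bounds for each raw cost term, e.g. estimated from
# sample evaluations; the numbers here are made up for illustration.
C1_MIN, C1_MAX = 1e4, 1e6
C2_MIN, C2_MAX = 0.1, 10.0

def rescale(c, c_min, c_max):
    # Min-max rescaling so the term varies roughly within [0, 1].
    return (c - c_min) / (c_max - c_min)

def c_total_normalized(b, w, w1=1.0, w2=1.0):
    # Weighted sum of the rescaled terms; w1 and w2 are the weights
    # I'm asking about how to choose.
    n1 = rescale(c1(b), C1_MIN, C1_MAX)
    n2 = rescale(c2(w), C2_MIN, C2_MAX)
    return w1 * n1 + w2 * n2
```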
Are the weights in the cost function typically determined through trial and error, or is there a straightforward method for choosing them?
Topic cost-function
Category Data Science