Subtraction of two variances (scores)
I was wondering: would it be correct, when we treat the variances of two populations as random variables in their own right (i.e. as scores), to simply take the difference V_subtract = V_pop1 - V_pop2 (e.g. V_subtract = 1 - 0.5 = 0.5)? If so, what would the standard error of this subtracted variance score be, given that we know the sample sizes of population 1 and population 2 (which may differ) used to compute each variance? I do know there are standard-error formulas for a single sample variance, for example SE(s^2) = s^2 * sqrt(2/(n-1)).
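To make the question concrete, here is a minimal sketch of what I mean (variable names and numbers are made up for illustration). The SE(s^2) = s^2 * sqrt(2/(n-1)) formula assumes roughly normal data, and combining the standard errors in quadrature assumes the two samples are independent:

```python
import numpy as np

def se_of_variance(s2, n):
    """Approximate standard error of a sample variance (normal-data approximation)."""
    return s2 * np.sqrt(2.0 / (n - 1))

# Illustrative sample variances and sample sizes for populations 1 and 2
v1, n1 = 1.0, 50
v2, n2 = 0.5, 80

v_subtract = v1 - v2                        # difference of the two variances
se1 = se_of_variance(v1, n1)
se2 = se_of_variance(v2, n2)
se_diff = np.sqrt(se1**2 + se2**2)          # SE of the difference, assuming independence

print(v_subtract, se_diff, v_subtract / se_diff)  # last value would be the z-score
```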
Down the line it would be ideal to normalize by the standard error to get a z-score, if not immediately then after a further subtraction between umbrella groups: say group A contains populations 1 and 2 and group B contains populations 3 and 4; we first compute the subtracted variance for populations 1 and 2 and the subtracted variance for populations 3 and 4, and then take variance A - variance B.
I am just uncertain whether we would normalize by the standard error twice [and if so, how], once per subtraction (first for variance A = variance 1 - variance 2, then for variance B = variance 3 - variance 4), or only once at the very end, on the final subtraction variance A - variance B.
My current thinking is that if we normalize by the standard error only once, it would be based on sqrt(SE_A^2 + SE_B^2), where the standard error of each group is taken over that group's entire sample (the samples combined to produce variances 1 and 2 for group A, and variances 3 and 4 for group B, respectively).
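For the nested case, here is a sketch of the "normalize once at the end" version I have in mind, assuming all four samples are independent so the standard errors simply add in quadrature (again, names and numbers are hypothetical):

```python
import numpy as np

def se_of_variance(s2, n):
    """Approximate standard error of a sample variance (normal-data approximation)."""
    return s2 * np.sqrt(2.0 / (n - 1))

# Hypothetical sample variances and sample sizes for populations 1-4
samples = {1: (1.0, 50), 2: (0.5, 80), 3: (0.9, 60), 4: (0.7, 70)}

var_A = samples[1][0] - samples[2][0]   # subtracted variance for group A (pops 1 and 2)
var_B = samples[3][0] - samples[4][0]   # subtracted variance for group B (pops 3 and 4)

# Under independence, the variance of a sum or difference is the sum of the variances,
# so the SEs of all four estimates combine in quadrature.
se_A = np.sqrt(sum(se_of_variance(s2, n)**2 for s2, n in (samples[1], samples[2])))
se_B = np.sqrt(sum(se_of_variance(s2, n)**2 for s2, n in (samples[3], samples[4])))
se_total = np.sqrt(se_A**2 + se_B**2)

z = (var_A - var_B) / se_total          # single normalization at the very end
print(var_A - var_B, se_total, z)
```

Is this single end-of-pipeline normalization the right way to think about it, or does each intermediate subtraction need its own normalization?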
Any help would be greatly appreciated.
Tags: variance, estimators, statistics
Category: Data Science