Ideas to enforce uniformity of error in linear models

I am looking for ideas not only to solve the least-squares problem, but also to force the errors to be roughly similar in magnitude. One idea I had is to add the variance of the errors to the classical Ordinary Least Squares objective.

My criterion, with respect to the matrix $A$ (the $x_i$ and $y_i$ being vectors), would be as follows: $$ J(A) = \mu_e + \lambda\sigma_e $$ where the per-sample squared errors, their mean, and their variance are $$ e_i = \|Ax_i - y_i\|^2, \qquad \mu_e = \frac{1}{n}\sum_i e_i, \qquad \sigma_e = \frac{1}{n}\sum_i (e_i - \mu_e)^2. $$

A problem that arises here is that the variance term is of degree 4 in $A$ (each $e_i$ is already quadratic in $A$), which seems overly complicated. Are there techniques that tackle such problems? I am also open to other ideas for making the errors lie in a similar range.
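One thing worth noting is that although $J(A)$ is quartic in $A$, it is still smooth, so first-order methods apply directly. Below is a minimal numpy sketch of gradient descent on this criterion, assuming $\mu_e$ and $\sigma_e$ are the mean and variance of the per-sample squared errors; the data dimensions, $\lambda$, step size, and iteration count are all arbitrary illustrative choices, not part of the question. The gradient uses the fact that $\partial J/\partial e_i = \frac{1}{n}\left(1 + 2\lambda(e_i - \mu_e)\right)$ and $\partial e_i/\partial A = 2(Ax_i - y_i)x_i^\top$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: n samples x_i in R^d, targets y_i in R^m,
# linear model y ~ A x with A an (m, d) matrix.
n, d, m = 100, 3, 2
X = rng.normal(size=(n, d))
A_true = rng.normal(size=(m, d))
Y = X @ A_true.T + 0.1 * rng.normal(size=(n, m))

lam = 1.0   # weight lambda on the variance penalty (illustrative choice)
lr = 1e-3   # step size (illustrative choice)
A = np.zeros((m, d))

def criterion(A):
    """J(A) = mean of e_i + lam * variance of e_i."""
    e = np.sum((X @ A.T - Y) ** 2, axis=1)  # per-sample squared errors e_i
    mu = e.mean()
    return mu + lam * np.mean((e - mu) ** 2)

for _ in range(2000):
    R = X @ A.T - Y                          # residuals A x_i - y_i, shape (n, m)
    e = np.sum(R ** 2, axis=1)               # e_i = ||A x_i - y_i||^2
    w = 1.0 + 2.0 * lam * (e - e.mean())     # per-sample weights n * dJ/de_i
    grad = 2.0 * (w[:, None] * R).T @ X / n  # dJ/dA
    A -= lr * grad
```

Samples with above-average error get a weight greater than 1 and below-average ones less than 1, which is exactly the "push the errors together" effect the penalty is meant to produce; the downside is that $J$ is no longer convex in general, so this only finds a local minimum.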

Thanks

Topic: linear-models, variance, optimization, machine-learning

Category Data Science
