Encoding correlation
I have a rather theory-based question, as I'm not very experienced with encoders, embeddings, etc. Scientifically, I'm mostly oriented around novel evolutionary model-based methods.
Let's assume we have a data set with highly correlated attributes. Usually, encoders are trained to learn a representation in a smaller number of dimensions. What I'm wondering about is quite the opposite: would it be possible to learn an encoding into a higher number of dimensions that is less correlated (ideally uncorrelated)? The idea is to turn a low-dimensional but very tough problem into a high-dimensional but easier one, i.e., to unwrap those intricate correlations using a NN and decode solutions later.
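To make the idea concrete, here is a minimal sketch of what I have in mind: an overcomplete autoencoder whose loss penalizes the off-diagonal entries of the latent covariance, so the wide code is pushed toward decorrelated dimensions. All layer sizes, the penalty weight, and the toy data are just placeholders:

```python
import torch
import torch.nn as nn

class OvercompleteAE(nn.Module):
    """Encode a few correlated attributes into a wider latent code."""
    def __init__(self, n_in=4, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(), nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_in))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def decorrelation_penalty(z):
    # Penalize off-diagonal entries of the latent covariance matrix.
    zc = z - z.mean(dim=0, keepdim=True)
    cov = zc.T @ zc / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()

# One illustrative training step on synthetic correlated data.
model = OvercompleteAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 4)
x[:, 1] = 0.9 * x[:, 0] + 0.1 * x[:, 1]  # inject correlation for the demo
z, x_hat = model(x)
loss = nn.functional.mse_loss(x_hat, x) + 0.1 * decorrelation_penalty(z)
opt.zero_grad()
loss.backward()
opt.step()
```

The reconstruction term keeps the code decodable (so solutions found in the latent space can be mapped back), while the penalty pushes the latent dimensions apart.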
Edit 1: Of course, we assume we know the correlation mapping really well. How exactly could I use the correlation mapping to unwrap it? Is it fundamentally possible to unmap attribute dependencies?
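If the dependencies are purely linear and the known "mapping" is just a covariance matrix, I suppose classical ZCA whitening already does this unmapping; a minimal sketch under that assumption (the toy data and tolerance are illustrative):

```python
import numpy as np

def zca_whiten(x, eps=1e-8):
    """Decorrelate the columns of x using only its estimated covariance."""
    xc = x - x.mean(axis=0)
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)  # covariance is symmetric
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T  # ZCA whitening matrix
    return xc @ w, w  # whitened data plus the (invertible) linear map

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
x = np.column_stack([a, 0.9 * a + 0.1 * rng.normal(size=1000)])  # correlated pair
xw, w = zca_whiten(x)
print(np.round(np.cov(xw, rowvar=False), 3))  # ~ identity: attributes decorrelated
```

Whitening only removes linear correlation, though; nonlinear dependencies would presumably need a learned map like the autoencoder sketched above, which is what my question about fundamental possibility is really aimed at.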
Topic: encoder, mathematics, theory, statistics
Category: Data Science