Autoencoder for Extremely Sparse Data

I am attempting to train an autoencoder on extremely sparse data. Each datapoint consists only of zeros and ones, with roughly 3% ones. Because the data is mostly zeros, the autoencoder learns to predict zero everywhere. Is there a way to prevent this from happening? For context, the data is extremely sparse considering that there are over 865,000 features.
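For reference, here is a minimal PyTorch sketch of the setup described above, with a weighted reconstruction loss as one commonly suggested mitigation for the all-zeros collapse. The layer sizes, the toy feature count, and the `pos_weight` ratio are illustrative assumptions, not the asker's actual code; `BCEWithLogitsLoss` with `pos_weight` simply up-weights the loss on the rare 1 entries.

```python
import torch
import torch.nn as nn

# Toy stand-in sizes; the real data has >865,000 binary features with ~3% ones.
n_features = 10_000
latent_dim = 128
p_ones = 0.03

class Autoencoder(nn.Module):
    def __init__(self, n_features, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, latent_dim), nn.ReLU())
        # Decoder returns logits; BCEWithLogitsLoss applies the sigmoid internally.
        self.decoder = nn.Linear(latent_dim, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder(n_features, latent_dim)

# With ~3% ones, an unweighted reconstruction loss is dominated by the zeros,
# so predicting all zeros already gives a low loss. pos_weight up-weights the
# positive entries; (1 - p) / p ≈ 32 is one common starting ratio (assumption).
criterion = nn.BCEWithLogitsLoss(
    pos_weight=torch.full((n_features,), (1 - p_ones) / p_ones)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic sparse batch standing in for the real data.
x = (torch.rand(32, n_features) < p_ones).float()
loss = criterion(model(x), x)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```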

Topic: sparse, pytorch, autoencoder, machine-learning

Category: Data Science
