High dimensional data stream summarization and processing

Can anyone recommend a method for summarizing and processing high dimensional data streams efficiently and effectively for anomaly detection?

I have investigated the different methods for data stream summarization (sampling, histograms, sketches, wavelets, sliding windows) and am unsure which to choose. I noticed that sampling and sliding windows are general-purpose and keep the raw data, while the others are task-specific and apply transformations to the data.
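To make the "keeps the raw data" distinction concrete, here is a minimal sliding-window sketch: a `deque` with `maxlen` retains only the most recent raw readings and evicts the oldest automatically (the window size 4 and the sample values are arbitrary, chosen just for illustration):

```python
from collections import deque

# Fixed-size sliding window over a stream: deque(maxlen=N) evicts the
# oldest item on each append, so the window always holds the raw,
# most recent N values in O(N) memory.
window = deque(maxlen=4)
for reading in [3, 1, 4, 1, 5, 9, 2, 6]:
    window.append(reading)

print(list(window))  # the last 4 raw readings: [5, 9, 2, 6]
```

An anomaly detector can then score each new reading against statistics of the current window without ever storing the full stream.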

I am interested in the first case (general-purpose methods that keep the raw data), but it may be more resource-intensive.
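For the other general-purpose, raw-data option mentioned above, a standard technique is reservoir sampling, which maintains a uniform random sample of `k` raw items from a stream of unknown length in O(k) memory. The function name and the fixed seed below are my own choices for this sketch:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k raw items from a stream
    of unknown (possibly unbounded) length, using O(k) memory."""
    rng = random.Random(seed)  # fixed seed for reproducibility of the sketch
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace a random slot with probability k / (i + 1),
            # which keeps every item equally likely to survive.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10_000), k=5)
print(sample)  # 5 raw items drawn uniformly from the stream
```

Because the reservoir stores raw items rather than a transformed summary, any anomaly-detection model can later be applied to the sample directly, which is the appeal of this family of methods.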

Topic anomaly-detection dimensionality-reduction data-stream-mining data-mining machine-learning

Category Data Science
