Differences and similarities between nonnegative PCA and nonnegative matrix factorization

I have seen references in the literature to nonnegative principal component analysis (nPCA) and nonnegative matrix factorization (NMF). I have tried reading papers on both, but it is not clear to me what the differences and similarities between them are. By "similarity" I mean I am also interested in knowing when the nPCA and NMF methods will give the same solution. Can someone clarify this?

Topic matrix-factorisation pca

Category Data Science


I came across the paper by Asteris et al. entitled "Orthogonal NMF through Subspace Exploration". You can find it with a web search, since links posted here might break in the future.

The basic idea boils down to this: if we can construct non-negative principal components of the matrix of interest, we can use those components to perform an orthogonal NMF. Since the matrix of interest is non-negative (as NMF assumes) and the principal components are non-negative, the transformed matrix is non-negative by construction. The paper details exactly how this construction can be achieved and guarantees convergence.
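To make the "by construction" argument concrete, here is a minimal NumPy sketch. It is not the paper's algorithm: I use k-means cluster indicators over the rows of a non-negative matrix as a cheap stand-in for non-negative, orthonormal components (the names `X`, `W`, `H` and the use of scikit-learn's `KMeans` are my own choices). Because `W` and `X` are non-negative, the coefficients `H = W.T @ X` come out non-negative automatically, so `W` and `H` form an orthogonal NMF of `X`.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((20, 50))            # non-negative data matrix, m x n

k = 4
# Stand-in for the paper's algorithm: cluster the rows of X and use the
# normalised cluster-indicator vectors as non-negative, orthonormal components.
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
W = np.zeros((X.shape[0], k))
for c in range(k):
    idx = labels == c
    W[idx, c] = 1.0 / np.sqrt(idx.sum())

# Columns of W are non-negative with disjoint supports, hence orthonormal ...
assert np.allclose(W.T @ W, np.eye(k))
# ... so the coefficients H = W^T X are non-negative by construction,
# and (W, H) is an orthogonal NMF of X.
H = W.T @ X
assert (H >= 0).all()
X_hat = W @ H                       # rank-k non-negative approximation of X
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```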

To answer my own question above: there are many ways to do NMF. In the special case where the transforming vectors are constrained to be orthogonal, i.e. orthogonal NMF, the problem is equivalent to non-negative PCA. Details can be found in the paper cited above.
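A quick numerical sanity check of why the two objectives coincide when the components are orthonormal (the toy `W` below is just an illustrative non-negative orthonormal matrix, not output of either method): for any `W` with orthonormal columns, `||X - W W^T X||^2 = ||X||^2 - ||W^T X||^2`, so maximising the nPCA objective `||W^T X||` is the same as minimising the orthogonal-NMF reconstruction error with `H = W^T X`.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((20, 50))            # non-negative data

# A toy non-negative matrix with orthonormal columns (disjoint supports).
W = np.zeros((20, 4))
W[:5, 0] = W[5:10, 1] = W[10:15, 2] = W[15:, 3] = 1 / np.sqrt(5)
assert np.allclose(W.T @ W, np.eye(4))

# ||X - W W^T X||^2  ==  ||X||^2 - ||W^T X||^2  when W^T W = I,
# so minimising the left-hand side (orthogonal NMF with H = W^T X)
# is equivalent to maximising ||W^T X|| (the non-negative PCA objective).
lhs = np.linalg.norm(X - W @ (W.T @ X)) ** 2
rhs = np.linalg.norm(X) ** 2 - np.linalg.norm(W.T @ X) ** 2
print(lhs, rhs)                     # agree up to floating-point error
```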
