Why we can’t relate to eigenfaces
Traditional methods like Principal Component Analysis (PCA) decompose a dataset into some form of latent representation, e.g. eigenvectors, which can be meaningless when visualised — what actually is my first principal component? Non-Negative Matrix Factorisation (NNMF), a method introduced by Lee and Seung in 1999, showed that data (e.g. a set of facial portraits) can instead be decomposed into parts, extracting features like the nose, the eyes, and a smile.
The difference between PCA and NNMF arises from a constraint in their derivation: NNMF does not allow the weights of the features to be negative. PCA, on the other hand, approximates each image as a linear combination of the whole basis. Although eigenfaces are the directions of largest variance, they are hard to interpret: what actually is the first principal component showing me?
Eigenfaces are meaningless directions of largest variance
A linear combination of distributed components can be difficult to relate to when attempting to visualise the data. NNMF is interesting because it extracts parts from the data that, when visualised, we can actually relate to.
In their seminal paper, Lee and Seung take a series of facial portraits and, by running the NNMF algorithm, decompose the images to extract the following features:
We can see (moving from the top left to the bottom right) that each feature looks like a different facial part: the eyes, the jaw structure, the nose, the T-zone, and the eyebrows all come out. Compare this to the figure at the top of this article, and you'll see how differently the two models explain the structure of the same data.
Likewise, this method can also be applied to articles of text, where each article is decomposed into topics:
On the right we have our data matrix M, which is decomposed (in the same manner as the image example) into the topics matrix A and the weights matrix W. (Note: the stop-word list in the references will help you remove stop words.)
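As a sketch of the preprocessing this implies (plain Python; the tiny stop-word list and the `count_matrix` helper are illustrative, not from the paper), each article becomes a column of non-negative word counts:

```python
from collections import Counter

# Illustrative subset only — use the full stop-word list from the references
STOP_WORDS = {"the", "a", "of", "and", "is", "to", "in"}

def count_matrix(articles):
    """Return (vocabulary, M) where M[i][j] counts term i in article j."""
    tokenised = [[w for w in a.lower().split() if w not in STOP_WORDS]
                 for a in articles]
    vocab = sorted({w for doc in tokenised for w in doc})
    counts = [Counter(doc) for doc in tokenised]
    M = [[c[w] for c in counts] for w in vocab]   # terms x articles
    return vocab, M

vocab, M = count_matrix(["the court ruled on the case",
                         "the team won the match"])
```

Because counts can never be negative, this matrix is a natural input for NNMF.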
To appreciate how this simple procedure can give such interpretable results, let's first go over the mathematics:
Mathematics of Non-Negative Matrix Factorisation
The goal of NNMF is to decompose an image database (matrix V) into two smaller matrices W and H, with the added constraint that W ≥ 0 and H ≥ 0:
Let V be our matrix of data. In the case of images, you vectorise each image and concatenate the vectors sideways to get something that is (N x M), with N total pixels and M total images. The iterative procedure used to derive an approximation of V is as follows:
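To make the shape of V concrete, here is a minimal sketch (assuming NumPy; `build_data_matrix` is a hypothetical helper, not from the paper) of vectorising images and stacking them as columns:

```python
import numpy as np

def build_data_matrix(images):
    """Stack vectorised images as columns: V is (N pixels x M images)."""
    return np.column_stack([img.ravel() for img in images])

# e.g. ten random 19x19 "portraits" give a 361 x 10 data matrix
images = [np.random.rand(19, 19) for _ in range(10)]
V = build_data_matrix(images)
```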
We initialise W and H randomly and then iterate as follows:
- Normalise the H matrix
- Incorporate this normalised H matrix into W
- Normalise the W matrix
- Calculate the improvement in the following objective function (if the improvement is still above a threshold, go back to the first step):
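The loop above can be sketched in Python. This is a simplified version (assuming NumPy) using the multiplicative update rules for the Frobenius objective, with the normalisation of W folded into H so the product WH is unchanged; the Matlab routine at the end differs in detail:

```python
import numpy as np

def nnmf(V, r, niter=1000, tol=1e-6):
    """Simplified NNMF sketch: multiplicative updates, Frobenius objective."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((n, r)), rng.random((r, m))   # random initialisation
    prev = np.linalg.norm(V - W @ H)
    for _ in range(niter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)       # update H
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)       # update W
        s = W.sum(axis=0, keepdims=True) + 1e-9
        W /= s                                      # normalise columns of W...
        H *= s.T                                    # ...folding the scale into H
        err = np.linalg.norm(V - W @ H)
        if prev - err < tol:                        # marginal improvement: stop
            return W, H
        prev = err
    return W, H
```

Note that the updates only ever multiply by non-negative ratios, so W and H stay non-negative throughout.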
Unlike PCA, NNMF has no closed-form solution, so this iterative method (similar to gradient descent) runs until the improvements are marginal. Note — the code is at the end.
In reality, neither the exact objective function nor the methodology given is the important point here. The observant will notice that it is the non-negativity constraint that characterises the difference between PCA and NNMF, and that results in an approachable latent representation, i.e. learning by parts. In NNMF, the latent features are added together (with non-negative weights) to recreate each sample of the data, and many of these weights can be 0.
PCA, on the other hand, can be described in the same framework: it constrains the columns of W to be orthonormal and the rows of H to be orthogonal to each other. This allows a distributed representation but not an additive one, hence the difference in the latent representations the two models produce.
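A quick numerical illustration of this contrast (assuming NumPy, and taking the PCA basis from an SVD of the mean-centred data — an assumption for illustration): the PCA basis vectors are orthonormal and mix positive and negative entries, whereas NNMF's W is non-negative by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.random((50, 20))   # non-negative data: 50 pixels x 20 images

# PCA basis: columns of U from the SVD of the mean-centred data
U, _, _ = np.linalg.svd(V - V.mean(axis=1, keepdims=True),
                        full_matrices=False)

orthonormal = np.allclose(U.T @ U, np.eye(U.shape[1]))  # True by construction
mixed_signs = not (U >= 0).all()   # PCA components mix signs, unlike NNMF's W
```

Every eigenface therefore subtracts as well as adds pixel values, which is why a single component is hard to read as a "part"; the non-negativity of W rules this out.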
The model you use makes a huge difference to its interpretability, its usefulness, and the inferences you draw from it. I'm a proponent of using various methods as benchmarks, because without measuring performance against some form of null hypothesis, you can't really monitor effective performance.
Given that, each model also has its marginal benefits, so it makes sense to try to learn something from each. In the above case, simply adding the non-negativity constraint gives the user an entirely different result. I encourage the reader to take the code below and see what other features they can come out with!
- Lee, D. D. & Seung, H. S. (1999) "Learning the parts of objects by non-negative matrix factorization". Nature.
- Dempster, A. P., Laird, N. M. & Rubin, D. B. (1977) "Maximum likelihood from incomplete data via the EM algorithm".
- Reference for the list of common ("stop") words: http://xpo6.com/list-of-englishstop-words/
Code for NNMF
The code below is in Matlab (don't ask…) but it should be easy enough to translate into any language. The update equations follow the Nature paper; V is the (pixels x images) data matrix and r is the number of parts to learn.
stopconv = 40;   % stopping criterion (can be adjusted)
niter = 1000;    % maximum number of iterations (can be adjusted)
[n,m] = size(V); r = 49;  % r = number of factors (can be adjusted)
% initialize random w and h
W = rand(n,r); H = rand(r,m);
inc = 0; consold = zeros(m,m);
for i = 1:niter
    % This is the update equation for H as per the Nature paper, with
    % the normalisation of W incorporated:
    H = H .* (W'*(V./(W*H))) ./ repmat(sum(W,1)',1,m);
    % This is the update equation for W as per the Nature paper, with
    % the normalisation of H incorporated:
    W = W .* ((V./(W*H))*H') ./ repmat(sum(H,2)',n,1);
    % test convergence every 10 iterations
    if mod(i,10) == 0
        % adjust small values to avoid underflow
        H = max(H,eps); W = max(W,eps);
        % construct connectivity matrix
        [~,index] = max(H,[],1);    % find largest factor
        mat1 = repmat(index,m,1);   % spread index down
        mat2 = repmat(index',1,m);  % spread index right
        cons = (mat1 == mat2);
        changes = sum(sum(cons ~= consold));
        if changes == 0  % connectivity matrix has not changed
            inc = inc + 1;  % accumulate count
        else
            inc = 0;        % else restart count
        end
        fprintf('%d\t%d\n', i, changes);  % prints number of changing elements
        consold = cons;
        if inc > stopconv
            break  % assume convergence if connectivity stops changing
        end
    end
end