Algorithms for Non-negative Matrix Factorization
Lee, Daniel D. and Seung, H. Sebastian, 2000
#### Paper summary
NMF aims to find two matrices $W$ and $H$ such that $V = WH$. The paper considers two cost functions:
- Least-squares error: $||V - WH||^2$
- Divergence: $D(V || WH)$
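For reference, the divergence used in the paper (it reduces to the Kullback-Leibler divergence when $\sum_{i\mu} V_{i\mu} = \sum_{i\mu} (WH)_{i\mu} = 1$) is:

$$D(V || WH) = \sum_{i\mu} \left( V_{i\mu} \log\frac{V_{i\mu}}{(WH)_{i\mu}} - V_{i\mu} + (WH)_{i\mu} \right)$$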
However, the functions $||V - WH||^2$ and $D(V || WH)$ are convex in $W$ only or in $H$ only; they are not convex in both variables together, so finding a global minimum is hard.
To address this, the authors derive multiplicative update rules (contrasted in the paper with conventional additive, gradient-based rules) under which $||V - WH||^2$ and $D(V || WH)$ are non-increasing.
For the divergence cost, the additive (gradient descent) update for $H$ is: $H_{\alpha \mu}\leftarrow H_{\alpha \mu} + \eta_{\alpha \mu}\bigg[\sum\limits_{i} W_{i\alpha}\frac{V_{i\mu}}{(WH)_{i\mu}}-\sum\limits_{i} W_{i\alpha}\bigg]$
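As a sketch of how the paper's multiplicative rules behave in practice, here is a minimal NumPy implementation of the divergence-cost updates on the example matrix used below (the rank $r = 2$ and the random initialization are arbitrary choices for illustration):

```python
import numpy as np

def divergence(V, WH):
    # D(V || WH) = sum(V * log(V / WH) - V + WH); V has no zero entries here
    return np.sum(V * np.log(V / WH) - V + WH)

rng = np.random.default_rng(0)
V = np.array([[5., 4., 1.], [4., 5., 1.], [2., 1., 5.]])
r = 2
W = rng.random((3, r)) + 0.1
H = rng.random((r, 3)) + 0.1

d0 = divergence(V, W @ H)
prev = d0
for _ in range(200):
    # multiplicative updates for the divergence cost (Lee & Seung)
    H *= (W.T @ (V / (W @ H))) / W.sum(axis=0)[:, None]
    W *= ((V / (W @ H)) @ H.T) / H.sum(axis=1)[None, :]
    cur = divergence(V, W @ H)
    assert cur <= prev + 1e-9   # divergence never increases
    prev = cur
```

The assertion inside the loop checks the paper's central claim: the divergence is non-increasing under these updates, with no step size to tune.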
We want to find two matrices $W$ and $H$ such that $V = WH$. Often the goal is to uncover underlying patterns in the relationships between the concepts represented by the rows and columns. $V$ is an $m$ by $n$ matrix, and we want the inner dimension of the factorization to be $r$. So
$$\underbrace{V}_{m \times n} = \underbrace{W}_{m \times r} \underbrace{H}_{r \times n}$$
Let's consider an example matrix where three customers (the rows) rate three movies (the columns):
$$
V = \left[\begin{array}{c c c}
5 & 4 & 1 \\\\
4 & 5 & 1 \\\\
2 & 1 & 5
\end{array}\right]
$$
We can decompose this into two matrices with $r = 1$. First, let's do this without any non-negativity constraint, using a truncated SVD that keeps only the largest singular value:
$$
W = \left[\begin{array}{c c c}
-0.656 \\\\
-0.652 \\\\
-0.379
\end{array}\right],
H = \left[\begin{array}{c c c}
-6.65 & -6.26 & -3.21 \\\\
\end{array}\right]
$$
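These values can be reproduced with NumPy (up to a simultaneous sign flip of $W$ and $H$, since singular vectors are determined only up to sign); the choice of folding the singular value into $H$ is one convention among several:

```python
import numpy as np

V = np.array([[5., 4., 1.], [4., 5., 1.], [2., 1., 5.]])

# full SVD, then keep only the largest singular value for a rank-1 factorization
U, s, Vt = np.linalg.svd(V)
W = U[:, :1]            # 3x1, the leading left singular vector
H = s[0] * Vt[:1, :]    # 1x3, singular value folded into the right factor

print(W.ravel())        # magnitudes ~ [0.656, 0.652, 0.379]
```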
We can also decompose this into two matrices with $r = 1$ subject to the constraint that $w_{ij} \ge 0$ and $h_{ij} \ge 0$. (Note: this is only possible when $v_{ij} \ge 0$):
$$
W = \left[\begin{array}{c c c}
0.388 \\\\
0.386 \\\\
0.224
\end{array}\right],
H = \left[\begin{array}{c c c}
11.22 & 10.57 & 5.41 \\\\
\end{array}\right]
$$
Both of these $r=1$ factorizations reconstruct matrix $V$ with the same error.
$$
V \approx WH = \left[\begin{array}{c c c}
4.36 & 4.11 & 2.10 \\\\
4.33 & 4.08 & 2.09 \\\\
2.52 & 2.37 & 1.21
\end{array}\right]
$$
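The claim that both factorizations reconstruct $V$ with the same error can be checked numerically. A sketch using the paper's multiplicative updates for the least-squares cost (the iteration count and seed are arbitrary; 500 iterations is plenty for this 3×3 example):

```python
import numpy as np

rng = np.random.default_rng(42)
V = np.array([[5., 4., 1.], [4., 5., 1.], [2., 1., 5.]])

# rank-1 NMF via the least-squares multiplicative updates
W = rng.random((3, 1)) + 0.1
H = rng.random((1, 3)) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H)
    W *= (V @ H.T) / (W @ H @ H.T)

# rank-1 truncated SVD reconstruction for comparison
U, s, Vt = np.linalg.svd(V)
svd_rec = s[0] * np.outer(U[:, 0], Vt[0])

err_nmf = np.linalg.norm(V - W @ H)
err_svd = np.linalg.norm(V - svd_rec)
```

The two errors agree because the best rank-1 approximation of a non-negative matrix is itself non-negative (a consequence of Perron-Frobenius), so the non-negativity constraint costs nothing here.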
If both yield the same reconstruction error, why is the non-negativity constraint useful? In both factorizations above it is easy to observe patterns such as similar customers and similar movies. `TODO: motivate why NMF is better`
#### Paper Contribution
This paper discusses two approaches for iteratively constructing non-negative $W$ and $H$ from random initial matrices. The main contribution is a multiplicative update rule in which each element of $W$ and $H$ is rescaled at every iteration in a way that guarantees the error does not increase.
The multiplicative approach is contrasted with an additive, gradient-descent-based approach in which small corrections are applied at each step. The additive rule reduces to the multiplicative one for a particular choice of the learning rate $\eta_{\alpha\mu}$, tied to the current value of $H_{\alpha\mu}$.
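Concretely, for the divergence cost, substituting the step size $\eta_{\alpha\mu} = \frac{H_{\alpha\mu}}{\sum_i W_{i\alpha}}$ into the additive update collapses it to the paper's multiplicative rule:

$$H_{\alpha\mu} \leftarrow H_{\alpha\mu} \, \frac{\sum_i W_{i\alpha} \, V_{i\mu} / (WH)_{i\mu}}{\sum_k W_{k\alpha}}$$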
### Still a draft