This paper presents locally linear embedding (LLE) for nonlinear dimensionality reduction. LLE learns the structure of the low-dimensional manifold underlying data sampled in a high-dimensional space, so it preserves distances along the manifold much better than PCA. Unlike PCA, which projects the high-dimensional space to a low-dimensional one with a single global linear matrix, LLE seeks locally linear projections for the locally linear patches formed by neighboring data points. Using many locally linear projections instead of one global linear projection is the key to nonlinear dimensionality reduction.

Technical details

Fig. 1 illustrates the problem of nonlinear dimensionality reduction.

![](http://1.bp.blogspot.com/Vpkju74n9w8/VSX3RGAjKtI/AAAAAAAAA08/NqJA7v1MLuQ/s1600/dimreduct.png)

Fig. 2 summarizes the LLE algorithm. The neighbors of each data point can be found by K-nearest neighbors or by collecting all data points within a fixed radius. The weights in step 2 capture intrinsic geometric properties of the data that are invariant to locally linear projections. The third step finds the new data points produced by the locally linear projections in the low-dimensional space.

![](http://3.bp.blogspot.com/kVz0nYKKSQ0/VSX3_Aio4lI/AAAAAAAAA1E/hrltFFQ7njY/s1600/LLEgraph.png)

![](http://3.bp.blogspot.com/bfgIDOavP5s/VSX23Jko4tI/AAAAAAAAA0s/IsE4Th0SN1Y/s1600/LLE.png)

The reconstruction error minimized in step 2 and the embedding cost minimized in step 3 are:

$\varepsilon (W) = \displaystyle\sum\_i \left| \vec{X}\_i - \sum\_j W\_{ij} \vec{X}\_j \right|^2$

$\Phi (Y) = \displaystyle\sum\_i \left| \vec{Y}\_i - \sum\_j W\_{ij} \vec{Y}\_j \right|^2$

Results

Figure 4 shows the results of dimensionality reduction on images of lips using PCA and LLE.

![](http://3.bp.blogspot.com/STDK5O9TBX4/VSX4TyurcvI/AAAAAAAAA1M/XUoZCmnObQ/s1600/lips.png)
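The three steps above can be sketched in NumPy. This is an illustrative minimal implementation, not the authors' code: the function name `lle` and the regularization term `reg` (commonly added when the local covariance is singular, e.g. when K exceeds the input dimension) are my own choices.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal LLE sketch: X has one data point per row."""
    n = X.shape[0]

    # Step 1: find K nearest neighbors by Euclidean distance.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]

    # Step 2: weights minimizing eps(W) = sum_i |X_i - sum_j W_ij X_j|^2,
    # with W_ij = 0 for non-neighbors and sum_j W_ij = 1.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]             # neighbors centered on X_i
        C = Z @ Z.T                       # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # stabilize the solve
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()       # enforce the sum-to-one constraint

    # Step 3: embedding Y minimizing Phi(Y) = sum_i |Y_i - sum_j W_ij Y_j|^2
    # with the weights fixed: bottom eigenvectors of M = (I - W)^T (I - W),
    # discarding the constant eigenvector with eigenvalue ~0.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    _, vecs = np.linalg.eigh(M)           # eigenvalues in ascending order
    return vecs[:, 1:n_components + 1]
```

For example, unrolling a noisy 2-D spiral embedded in 3-D with `lle(X, n_neighbors=8)` yields an `(n, 2)` array of manifold coordinates. A production alternative is `sklearn.manifold.LocallyLinearEmbedding`.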