Nonlinear Dimensionality Reduction by Locally Linear Embedding
Paper summary

This paper presents locally linear embedding (LLE), an unsupervised method for nonlinear dimensionality reduction. LLE learns the structure of the low-dimensional manifold underlying data sampled in a high-dimensional space, and can therefore preserve distances along the manifold much better than PCA. Unlike PCA, which maps the high-dimensional space to a low-dimensional one with a single global linear transformation, LLE fits a locally linear model to each patch formed by neighboring data points. Combining many locally linear fits instead of one global linear projection is the key to nonlinear dimensionality reduction.

Technical details

Fig. 1 illustrates the problem of nonlinear dimensionality reduction. Fig. 2 summarizes the LLE algorithm. In step 1, the neighbors of each data point are found either by K-nearest neighbors or by collecting all points within a fixed radius. In step 2, each point is reconstructed from its neighbors; the weights minimize the reconstruction error

$\varepsilon (W) = \displaystyle\sum\_i \Big| \vec{X}\_i - \sum\_j W\_{ij} \vec{X}\_j \Big|^2$

subject to $\sum\_j W\_{ij} = 1$. These weights reflect intrinsic geometric properties of the data that are invariant to locally linear transformations. The third step fixes the weights and finds the low-dimensional coordinates $\vec{Y}\_i$ by minimizing the embedding cost

$\Phi (Y) = \displaystyle\sum\_i \Big| \vec{Y}\_i - \sum\_j W\_{ij} \vec{Y}\_j \Big|^2$

Results

Figure 4 shows the results of dimensionality reduction of images of lips using PCA and LLE.
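The three steps above can be sketched in NumPy as follows. This is a minimal illustrative implementation, not the authors' code; the function name `lle` and the regularization parameter `reg` (added to keep the local Gram matrix invertible when K exceeds the input dimension) are my own choices.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal LLE sketch. X: (n_samples, n_features) -> (n_samples, n_components)."""
    n = X.shape[0]

    # Step 1: K nearest neighbors of each point (brute-force distances).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                  # exclude the point itself
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]

    # Step 2: reconstruction weights minimizing sum_i |X_i - sum_j W_ij X_j|^2
    # subject to sum_j W_ij = 1 (solved in closed form per point).
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                     # neighbors centered on X_i
        C = Z @ Z.T                               # local Gram matrix (K x K)
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularization (assumption)
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()               # enforce the sum-to-one constraint

    # Step 3: embedding coordinates minimizing Phi(Y) = sum_i |Y_i - sum_j W_ij Y_j|^2,
    # given by the bottom eigenvectors of M = (I - W)^T (I - W).
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    eigvals, eigvecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    # Discard the smallest eigenvector (the constant vector); keep the next d.
    return eigvecs[:, 1:n_components + 1]
```

Note that step 3 reduces to a sparse eigenvalue problem in practice; the dense `eigh` call here is only for clarity.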
Roweis, Sam T. and Saul, Lawrence K.
Science - 2000 via Local Bibsonomy
Keywords: visualization, dimensionality_reduction, dipl_literatur, unsupervised, nldr, ml

Summary by Evan Su 5 years ago
