### Abstract

We propose a novel method for linear dimensionality reduction of manifold-modeled data. First, we show that with a small number M of random projections of sample points in ℝ^{N} belonging to an unknown K-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy. Second, we rigorously prove that using only this set of random projections, we can estimate the structure of the underlying manifold. In both cases, the number of random projections required is linear in K and logarithmic in N, so that K < M ≪ N. To handle practical situations, we develop a greedy algorithm to estimate the smallest size of the projection space required to perform manifold learning. Our method is particularly relevant in distributed sensing systems and leads to significant potential savings in data acquisition, storage, and transmission costs.

| Original language | English (US) |
| --- | --- |
| Title of host publication | Advances in Neural Information Processing Systems 20 - Proceedings of the 2007 Conference |
| State | Published - 2009 |
| Event | 21st Annual Conference on Neural Information Processing Systems, NIPS 2007 - Vancouver, BC, Canada. Duration: Dec 3 2007 → Dec 6 2007 |

### Publication series

| Name |
| --- |
| Advances in Neural Information Processing Systems 20 - Proceedings of the 2007 Conference |

### Other

| Other | 21st Annual Conference on Neural Information Processing Systems, NIPS 2007 |
| --- | --- |
| Country | Canada |
| City | Vancouver, BC |
| Period | 12/3/07 → 12/6/07 |

### ASJC Scopus subject areas

- Information Systems


## Cite this

*Advances in Neural Information Processing Systems 20 - Proceedings of the 2007 Conference*.