Abstract
The emergence of low-cost sensing architectures for diverse modalities has made it possible to deploy sensor networks that capture a single event from a large number of vantage points and in multiple modalities. In many scenarios, these networks acquire large amounts of very high-dimensional data. For example, even a relatively small network of cameras can generate massive amounts of high-dimensional image and video data. One way to cope with this data deluge is to exploit low-dimensional data models. Manifold models provide a particularly powerful theoretical and algorithmic framework for capturing the structure of data governed by a small number of parameters, as is often the case in a sensor network. However, these models typically do not account for dependencies among multiple sensors. We thus propose a new joint manifold framework for data ensembles that exploits such dependencies. We show that joint manifold structure can lead to improved performance for a variety of signal processing algorithms for applications including classification and manifold learning. Additionally, recent results concerning random projections of manifolds enable us to formulate a scalable and universal dimensionality reduction scheme that efficiently fuses the data from all sensors.
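To make the joint manifold idea concrete, below is a minimal NumPy sketch, not taken from the paper: each of J hypothetical sensors observes the same scalar parameter through its own synthetic mapping, the per-sensor observations are concatenated into a joint manifold sample, and a single random Gaussian projection compresses the ensemble while approximately preserving pairwise distances. All mappings, dimensions, and the distortion check are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

J, N, K = 4, 100, 200                          # sensors, per-sensor dimension, samples
theta = np.sort(rng.uniform(0, 2 * np.pi, K))  # shared 1-D parameter governing all sensors

# Each sensor sees a different smooth function of theta plus noise.
# (Placeholder mappings; real sensors would produce images, audio, etc.)
phases = rng.uniform(0, np.pi, J)
t = np.linspace(0, 1, N)
sensors = [
    np.sin(np.outer(theta + phases[j], t * (j + 1)))
    + 0.01 * rng.standard_normal((K, N))
    for j in range(J)
]

# A joint manifold sample concatenates all sensor observations of one event,
# giving points on a K-sample curve embedded in R^{J*N}.
X = np.hstack(sensors)                         # shape (K, J*N)

# Random projection: one dense Gaussian matrix maps the concatenated data
# down to M dimensions while roughly preserving pairwise distances
# (a Johnson-Lindenstrauss-style embedding).
M = 30
Phi = rng.standard_normal((J * N, M)) / np.sqrt(M)
Y = X @ Phi                                    # fused, compressed data, shape (K, M)

# Sanity check: relative distortion of pairwise distances after projection.
distortion = np.abs(pdist(Y) / pdist(X) - 1)
print(f"median distance distortion: {np.median(distortion):.3f}")
```

One design note on why this fuses cheaply: because the projection is linear, `X @ Phi` splits into per-sensor blocks, i.e. `Phi.T @ x` equals the sum of `Phi_j.T @ x_j` over sensors, so each sensor could project its own data locally and the network would only need to aggregate M-dimensional vectors. This is consistent with the scalable, universal fusion scheme the abstract describes.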
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 2580-2594 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 19 |
| Issue number | 10 |
| DOIs | |
| State | Published - Oct 2010 |
Keywords
- Camera networks
- Classification
- Data fusion
- Manifold learning
- Random projections
- Sensor networks
ASJC Scopus subject areas
- Software
- Computer Graphics and Computer-Aided Design