Abstract
In this paper, we build a framework for the analysis and classification of collective behavior using methods from generative modeling and nonlinear manifold learning. We represent an animal group with a set of finite-sized particles and vary known features of the group structure and motion via a class of generative models to position each particle on a two-dimensional plane. Particle positions are then mapped onto training images that are processed to emphasize the features of interest and match attainable far-field videos of real animal groups. The training images serve as templates of recognizable patterns of collective behavior and are compactly represented in a low-dimensional space called the embedding manifold. Two mappings from the manifold are derived: the manifold-to-image mapping reconstructs new, unseen images of the group, and the manifold-to-feature mapping allows frame-by-frame classification of raw video. We validate the combined framework on datasets of increasing complexity. Specifically, we classify artificial images from the generative model, an interacting self-propelled particle model, and raw overhead videos of schooling fish obtained from the literature.
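The generative step described above — positioning finite-sized particles on a plane and mapping them onto training images — can be sketched in a few lines. The following is a minimal toy illustration, not the authors' model: the `polarization` parameter, the Gaussian placement, and the single-pixel rasterization are all hypothetical simplifications chosen only to show the pipeline's shape.

```python
import math
import random

def generate_group(n, spread=1.0, polarization=0.9, seed=0):
    """Place n particles on the 2D plane with a shared heading.
    polarization in [0, 1] controls alignment noise (toy assumption)."""
    rng = random.Random(seed)
    heading = rng.uniform(0, 2 * math.pi)
    particles = []
    for _ in range(n):
        x = rng.gauss(0.0, spread)
        y = rng.gauss(0.0, spread)
        # Each particle's orientation deviates from the group heading
        # by an amount shrinking as polarization approaches 1.
        theta = heading + (1.0 - polarization) * rng.uniform(-math.pi, math.pi)
        particles.append((x, y, theta))
    return particles

def rasterize(particles, size=32, extent=3.0):
    """Map particle positions onto a binary training image, mimicking
    a far-field overhead view where each particle covers one pixel."""
    img = [[0] * size for _ in range(size)]
    for x, y, _theta in particles:
        col = int((x + extent) / (2 * extent) * (size - 1))
        row = int((y + extent) / (2 * extent) * (size - 1))
        if 0 <= row < size and 0 <= col < size:
            img[row][col] = 1
    return img

group = generate_group(40, polarization=0.95)
image = rasterize(group)
```

Images produced this way, one per setting of the generative parameters, would then be flattened into vectors and passed to a manifold-learning method such as Isomap to obtain the low-dimensional embedding.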
Original language | English (US)
---|---
Pages (from-to) | 185-199
Number of pages | 15
Journal | Journal of Theoretical Biology
Volume | 336
State | Published - Nov 7 2013
Keywords
- Classification
- Collective motion
- Fish schooling
- Generative modeling
- Isomap
ASJC Scopus subject areas
- Statistics and Probability
- Modeling and Simulation
- Biochemistry, Genetics and Molecular Biology (all)
- Immunology and Microbiology (all)
- Agricultural and Biological Sciences (all)
- Applied Mathematics