TY - JOUR

T1 - Classification and Geometry of General Perceptual Manifolds

AU - Chung, SueYeon

AU - Lee, Daniel D.

AU - Sompolinsky, Haim

N1 - Funding Information:
We would like to thank Uri Cohen, Ryan Adams, Leslie Valiant, David Cox, Jim DiCarlo, Doris Tsao, and Yoram Burak for helpful discussions. The work is partially supported by the Gatsby Charitable Foundation, the Swartz Foundation, the Simons Foundation (SCGB Grant No. 325207), the National Institutes of Health (Grant No. 1U19NS104653), the MAFAT Center for Deep Learning, and the Human Frontier Science Program (Project No. RGP0015/2013). D. D. Lee also acknowledges the support of the U.S. National Science Foundation, Army Research Laboratory, Office of Naval Research, Air Force Office of Scientific Research, and Department of Transportation.
Publisher Copyright:
© 2018 authors. Published by the American Physical Society.

PY - 2018/7/5

Y1 - 2018/7/5

N2 - Perceptual manifolds arise when a neural population responds to an ensemble of sensory signals associated with different physical features (e.g., orientation, pose, scale, location, and intensity) of the same perceptual object. Object recognition and discrimination require classifying the manifolds in a manner that is insensitive to variability within a manifold. How neuronal systems give rise to invariant object classification and recognition is a fundamental problem in brain theory as well as in machine learning. Here, we study the ability of a readout network to classify objects from their perceptual manifold representations. We develop a statistical mechanical theory for the linear classification of manifolds with arbitrary geometry, revealing a remarkable relation to the mathematics of conic decomposition. We show how special anchor points on the manifolds can be used to define novel geometrical measures of radius and dimension, which can explain the classification capacity for manifolds of various geometries. The general theory is demonstrated on a number of representative manifolds, including ℓ2 ellipsoids prototypical of strictly convex manifolds, ℓ1 balls representing polytopes with finite samples, and ring manifolds exhibiting nonconvex continuous structures that arise from modulating a continuous degree of freedom. The effects of label sparsity on the classification capacity of general manifolds are elucidated, displaying a universal scaling relation between label sparsity and the manifold radius. Theoretical predictions are corroborated by numerical simulations using recently developed algorithms to compute maximum margin solutions for manifold dichotomies. Our theory and its extensions provide a powerful and rich framework for applying statistical mechanics of linear classification to data arising from perceptual neuronal responses as well as to artificial deep networks trained for object recognition tasks.

UR - http://www.scopus.com/inward/record.url?scp=85050926076&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85050926076&partnerID=8YFLogxK

U2 - 10.1103/PhysRevX.8.031003

DO - 10.1103/PhysRevX.8.031003

M3 - Article

AN - SCOPUS:85050926076

VL - 8

JO - Physical Review X

JF - Physical Review X

SN - 2160-3308

IS - 3

M1 - 031003

ER -