On Single Index Models beyond Gaussian Data

Joan Bruna, Loucas Pillaud-Vivien, Aaron Zweig

Research output: Contribution to journal › Conference article › peer-review

Abstract

Sparse high-dimensional functions have arisen as a rich framework for studying the behavior of gradient-descent methods using shallow neural networks, showcasing their ability to perform feature learning beyond linear models. Amongst those functions, the simplest are single-index models f(x) = ϕ(x · θ), where the labels are generated by an arbitrary non-linear scalar link function ϕ applied to an unknown one-dimensional projection θ of the input data. By focusing on Gaussian data, several recent works have built a remarkable picture, where the so-called information exponent (related to the regularity of the link function) controls the required sample complexity. In essence, these tools exploit the stability and spherical symmetry of Gaussian distributions. In this work, building on the framework of Ben Arous et al. [2021], we explore extensions of this picture beyond the Gaussian setting, where either stability or symmetry might be violated. Focusing on the planted setting where ϕ is known, our main results establish that Stochastic Gradient Descent can efficiently recover the unknown direction θ in the high-dimensional regime, under assumptions that extend those of previous works (Yehudai and Shamir [2020], Wu [2022]).
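
To make the planted setting concrete, here is a minimal sketch of the kind of recovery procedure the abstract describes: spherical online SGD on a single-index model with a known link. It uses Gaussian inputs for simplicity (the paper's point is to relax this), a quadratic link of information exponent 2, and illustrative step-size and iteration counts; these choices are assumptions for the demo, not the paper's algorithm or constants.

```python
import numpy as np

# Planted single-index model: y = phi(x . theta*), with phi known.
# Everything below (link, dimension, step size, iteration count) is
# illustrative; it is not taken from the paper.
rng = np.random.default_rng(0)
d = 200                                # ambient dimension
phi = lambda z: z ** 2                 # example link, information exponent 2
dphi = lambda z: 2 * z                 # its derivative (phi is known)
step = 0.05 / d                        # step size scaling like 1/d

theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)   # hidden direction on the sphere

theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)             # uniform random initialization

for t in range(200_000):
    x = rng.standard_normal(d)         # Gaussian data here for simplicity
    y = phi(x @ theta_star)            # planted label
    z = x @ theta
    grad = (phi(z) - y) * dphi(z) * x  # gradient of 0.5 * (phi(x.theta) - y)^2
    theta -= step * grad
    theta /= np.linalg.norm(theta)     # project back onto the sphere

# For an even link, theta* is identifiable only up to sign.
print("overlap |<theta, theta*>| =", abs(theta @ theta_star))
```

An overlap close to 1 indicates the direction has been recovered; since the quadratic link is even, only ±θ is identifiable, hence the absolute value.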

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: Dec 10, 2023 - Dec 16, 2023

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
