Unsupervised Learning of Structured Representations via Closed-Loop Transcription

Shengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann LeCun, Yi Ma

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes. While most existing unsupervised learning approaches focus on a representation for only one of these two goals, we show that a unified representation can enjoy the mutual benefits of having both. Such a representation is attainable by generalizing the recently proposed closed-loop transcription framework, known as CTRL, to the unsupervised setting. This entails solving a constrained maximin game over a rate reduction objective that expands features of all samples while compressing features of augmentations of each sample. Through this process, we see discriminative low-dimensional structures emerge in the resulting representations. Under comparable experimental conditions and network complexities, we demonstrate that these structured representations enable classification performance close to state-of-the-art unsupervised discriminative representations, and conditionally generated image quality significantly higher than that of state-of-the-art unsupervised generative models.
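The maximin objective described above builds on the coding rate and rate reduction quantities from the maximal coding rate reduction (MCR²) line of work. The following is a minimal sketch of those standard definitions for context; the paper's exact constrained formulation over the closed-loop encoder and decoder may differ in detail. For features Z = [z_1, ..., z_n] in R^{d×n}, a distortion level ε, and diagonal membership matrices Π_j encoding a partition of the samples:

    R(Z, \epsilon) = \frac{1}{2} \log\det\!\left( I + \frac{d}{n\,\epsilon^2}\, Z Z^{\top} \right)

    \Delta R(Z, \Pi, \epsilon) = R(Z, \epsilon)
        - \sum_{j} \frac{\operatorname{tr}(\Pi_j)}{2n} \log\det\!\left( I + \frac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^2}\, Z \Pi_j Z^{\top} \right)

Intuitively, maximizing the first term expands the volume spanned by the features of all samples, while minimizing the subtracted term compresses the features within each group; in the unsupervised setting of this paper, each group consists of the augmentations of a single sample.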

Original language: English (US)
Pages (from-to): 440-457
Number of pages: 18
Journal: Proceedings of Machine Learning Research
Volume: 234
State: Published - 2024
Event: 1st Conference on Parsimony and Learning, CPAL 2024 - Hong Kong, China
Duration: Jan 3, 2024 - Jan 6, 2024

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
