Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors

Zeyu Yun, Yubei Chen, Bruno A. Olshausen, Yann LeCun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Transformer networks have revolutionized NLP representation learning since they were introduced. Although great effort has been made to explain the representations in transformers, it is widely recognized that our understanding is not sufficient. One important reason is the lack of visualization tools for detailed analysis. In this paper, we propose to use dictionary learning to open up these 'black boxes' as linear superpositions of transformer factors. Through visualization, we demonstrate the hierarchical semantic structures captured by the transformer factors, e.g., word-level polysemy disambiguation, sentence-level pattern formation, and long-range dependency. While some of these patterns confirm conventional prior linguistic knowledge, others are relatively unexpected and may provide new insights. We hope this visualization tool can bring further knowledge and a better understanding of how transformer networks work. The code is available at https://github.com/zeyuyun1/TransformerVis.
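The decomposition the abstract describes can be sketched with off-the-shelf sparse coding. The snippet below is a minimal illustration (not the authors' exact pipeline): it learns a dictionary of "transformer factor" atoms from a matrix of embedding vectors, so that each embedding is reconstructed as a sparse linear superposition of atoms. The random matrix `X` is a stand-in assumption for contextualized embeddings that would, in practice, be collected from a transformer layer.

```python
# Minimal sparse-coding sketch of "embedding = linear superposition of factors".
# NOTE: synthetic data; in the paper's setting X would hold hidden states
# gathered from a transformer model over a corpus.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))   # 200 token "embeddings" of dimension 64

# Learn 32 dictionary atoms; each row of dl.components_ is one factor phi_i.
dl = DictionaryLearning(
    n_components=32,
    transform_algorithm="lasso_lars",  # sparse inference of coefficients
    transform_alpha=1.0,               # sparsity penalty at transform time
    max_iter=20,
    random_state=0,
)
codes = dl.fit_transform(X)           # sparse coefficients a, shape (200, 32)
X_hat = codes @ dl.components_        # reconstruction: x ≈ sum_i a_i * phi_i

print(codes.shape, dl.components_.shape, float(np.mean(codes == 0)))
```

Visualization would then proceed by ranking, for each atom, the corpus tokens whose codes activate it most strongly, which is how word-level and sentence-level patterns become inspectable.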

Original language: English (US)
Title of host publication: Deep Learning Inside Out
Subtitle of host publication: 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO 2021 - Proceedings, co-located with the Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL-HLT 2021
Editors: Eneko Agirre, Marianna Apidianaki, Ivan Vulic
Publisher: Association for Computational Linguistics (ACL)
Pages: 1-10
Number of pages: 10
ISBN (Electronic): 9781954085305
State: Published - 2021
Event: 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures: Deep Learning Inside Out, DeeLIO 2021 - Virtual, Online
Duration: Jun 10 2021 → …

Publication series

Name: Deep Learning Inside Out: 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO 2021 - Proceedings, co-located with the Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL-HLT 2021

Conference

Conference: 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures: Deep Learning Inside Out, DeeLIO 2021
City: Virtual, Online
Period: 6/10/21 → …

ASJC Scopus subject areas

  • Hardware and Architecture
  • Information Systems
  • Software
  • Computer Networks and Communications
