TY - GEN
T1 - Transformer visualization via dictionary learning
T2 - 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures: Deep Learning Inside Out, DeeLIO 2021
AU - Yun, Zeyu
AU - Chen, Yubei
AU - Olshausen, Bruno A.
AU - LeCun, Yann
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021
Y1 - 2021
AB - Transformer networks have revolutionized NLP representation learning since they were introduced. Though a great effort has been made to explain the representations in transformers, it is widely recognized that our understanding is not sufficient. One important reason is the lack of visualization tools suited to detailed analysis. In this paper, we propose to use dictionary learning to open up these ‘black boxes’ as linear superpositions of transformer factors. Through visualization, we demonstrate the hierarchical semantic structures captured by the transformer factors, e.g., word-level polysemy disambiguation, sentence-level pattern formation, and long-range dependency. While some of these patterns confirm conventional prior linguistic knowledge, the rest are relatively unexpected, which may provide new insights. We hope this visualization tool can bring further knowledge and a better understanding of how transformer networks work. The code is available at https://github.com/zeyuyun1/TransformerVis.
UR - http://www.scopus.com/inward/record.url?scp=85123686360&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123686360&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85123686360
T3 - Deep Learning Inside Out: 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO 2021 - Proceedings, co-located with the Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL-HLT 2021
SP - 1
EP - 10
BT - Deep Learning Inside Out
A2 - Agirre, Eneko
A2 - Apidianaki, Marianna
A2 - Vulic, Ivan
PB - Association for Computational Linguistics (ACL)
Y2 - 10 June 2021
ER -