Transformers, Parallel Computation, and Logarithmic Depth

Clayton Sanford, Daniel Hsu, Matus Telgarsky

Research output: Contribution to journal › Conference article › peer-review

Abstract

We show that a constant number of self-attention layers can efficiently simulate—and be simulated by—a constant number of communication rounds of Massively Parallel Computation, a popular model of distributed computing with wide-ranging algorithmic results. As a consequence, we show that logarithmic depth is sufficient for transformers to solve basic computational tasks that cannot be efficiently solved by several other neural sequence models and sub-quadratic transformer approximations. We thus establish parallelism as a key distinguishing property of transformers.
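A minimal illustrative sketch (not the paper's construction): the standard "pointer doubling" trick conveys why O(log k) parallel rounds suffice for k-hop function composition, the flavor of task the abstract says logarithmic-depth transformers can handle. The function name and data layout below are hypothetical, chosen only for this example.

```python
def k_hop_parallel(successor: list[int], k: int) -> list[int]:
    """For every index i, return the node reached after k hops of
    `successor`, via repeated doubling: O(log k) rounds, each of which
    updates every position independently (a simple loop stands in for
    the parallel step)."""
    n = len(successor)
    result = list(range(n))      # identity map: 0 hops applied so far
    power = successor[:]         # successor composed with itself 2^j times
    while k > 0:
        if k & 1:
            result = [power[r] for r in result]   # apply current power
        power = [power[p] for p in power]         # square the hop map
        k >>= 1
    return result


if __name__ == "__main__":
    succ = [1, 2, 3, 4, 0]           # a 5-cycle
    print(k_hop_parallel(succ, 3))   # [3, 4, 0, 1, 2]
```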

Original language: English (US)
Pages (from-to): 43276-43327
Number of pages: 52
Journal: Proceedings of Machine Learning Research
Volume: 235
State: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: Jul 21 2024 - Jul 27 2024

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
