On The Computational Complexity of Self-Attention

Feyza Duman Keles, Pruthuvi Mahesakya Wijewardena, Chinmay Hegde

    Research output: Contribution to journal › Conference article › peer-review

    Abstract

    Transformer architectures have led to remarkable progress in many state-of-the-art applications. However, despite their successes, modern transformers rely on the self-attention mechanism, whose time- and space-complexity is quadratic in the length of the input. Several approaches have been proposed to speed up self-attention mechanisms to achieve sub-quadratic running time; however, the large majority of these works are not accompanied by rigorous error guarantees. In this work, we establish lower bounds on the computational complexity of self-attention in a number of scenarios. We prove that the time complexity of self-attention is necessarily quadratic in the input length, unless the Strong Exponential Time Hypothesis (SETH) is false. This argument holds even if the attention computation is performed only approximately, and for a variety of attention mechanisms. As a complement to our lower bounds, we show that it is indeed possible to approximate dot-product self-attention using a finite Taylor series in linear time, at the cost of an exponential dependence on the polynomial order.
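
    As a rough illustration of the Taylor-series approach mentioned above, the following is a minimal NumPy sketch (the names taylor_feature and taylor_attention are illustrative, not taken from the paper): the degree-p term (q.k)^p / p! is rewritten as an inner product of tensor-product features of dimension d^p, so the sums over keys can be precomputed once, giving a cost linear in the sequence length n but exponential in the truncation order.

        import numpy as np

        def taylor_feature(x, p):
            # Degree-p tensor-product feature map of dimension d**p, chosen so that
            # taylor_feature(q, p) @ taylor_feature(k, p) == (q @ k)**p.
            feat = np.ones(1)
            for _ in range(p):
                feat = np.outer(feat, x).ravel()
            return feat

        def taylor_attention(Q, K, V, order=2):
            # Approximate softmax attention by truncating exp(q.k) to its Taylor
            # series sum_{p <= order} (q.k)**p / p!. Each power is computed through
            # the feature map above, so the cost is O(n * d**order) rather than
            # O(n**2 * d) in the sequence length n.
            n, d = Q.shape
            num = np.zeros_like(V, dtype=float)   # numerator:   sum_j w_ij * v_j
            den = np.zeros(n)                     # denominator: sum_j w_ij
            fact = 1.0
            for p in range(order + 1):
                if p > 0:
                    fact *= p                     # running value of p!
                PhiK = np.stack([taylor_feature(k, p) for k in K])   # (n, d**p)
                PhiQ = np.stack([taylor_feature(q, p) for q in Q])   # (n, d**p)
                S = PhiK.T @ V                                       # (d**p, d_v)
                z = PhiK.sum(axis=0)                                 # (d**p,)
                num += (PhiQ @ S) / fact
                den += (PhiQ @ z) / fact
            return num / den[:, None]

    The d**order feature dimension reflects the exponential dependence on the polynomial order mentioned in the abstract; the truncated weights are also not guaranteed to be non-negative, so this is a sketch of the approximation idea rather than a drop-in replacement for softmax attention.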

    Original language: English (US)
    Pages (from-to): 597-619
    Number of pages: 23
    Journal: Proceedings of Machine Learning Research
    Volume: 201
    State: Published - 2023
    Event: 34th International Conference on Algorithmic Learning Theory, ALT 2023 - Singapore, Singapore
    Duration: Feb 20 2023 - Feb 23 2023

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Software
    • Control and Systems Engineering
    • Statistics and Probability
