Do Language Models’ Words Refer?

Tal Linzen, Matthew Mandelkern

    Research output: Contribution to journal › Article › peer-review

    Abstract

    What do language models (LMs) do with language? They can produce sequences of (mostly) coherent strings closely resembling English. But do those sentences mean something, or are LMs simply babbling in a convincing simulacrum of language use? We address one aspect of this broad question: whether LMs’ words can refer, that is, achieve “word-to-world” connections. There is prima facie reason to think they do not, since LMs do not interact with the world in the way that ordinary language users do. Drawing on the externalist tradition in philosophy of language, we argue that those appearances are misleading: Even if the inputs to LMs are simply strings of text, they are strings of text with natural histories, and that may suffice for LMs’ words to refer.

    Original language: English (US)
    Pages (from-to): 1191-1200
    Number of pages: 10
    Journal: Computational Linguistics
    Volume: 50
    Issue number: 3
    DOIs
    State: Published - Sep 2024

    ASJC Scopus subject areas

    • Language and Linguistics
    • Linguistics and Language
    • Computer Science Applications
    • Artificial Intelligence
