How Does Code Pretraining Affect Language Model Task Performance?

Jackson Petty, Sjoerd van Steenkiste, Tal Linzen

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Large language models are increasingly trained on corpora containing both natural language and non-linguistic data like source code. Aside from aiding programming-related tasks, anecdotal evidence suggests that including code in pretraining corpora may improve performance on other, unrelated tasks, yet to date no work has been able to establish a causal connection by controlling the mixture of language and code data. Here we do just this. We pretrain language models on datasets which interleave natural language and code in two different settings: competitive, in which the total volume of data seen during pretraining is held constant; and additive, in which the volume of language data is held constant. We study how the pretraining mixture affects performance on (a) compositionality, measured by generalization accuracy on semantic parsing and syntactic transformation tasks, and more broadly on (b) downstream non-code-related objectives, measured by performance on tasks from the BigBench benchmark. We find that pretraining on higher proportions of code improves performance on compositional tasks involving structured output (like semantic parsing) and on mathematics. Conversely, increased code mixture can harm performance on other tasks, including tasks that require sensitivity to linguistic structure such as syntax or morphology, and tasks measuring real-world knowledge.
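
    The two mixture settings described in the abstract can be made concrete with a small arithmetic sketch. The snippet below is not from the paper; the function name, token budget, and the assumption that the code fraction is defined relative to the total mixture are all illustrative.

    ```python
    # Hypothetical sketch (not from the paper): token budgets under the two
    # pretraining mixture settings. All names and numbers are invented for
    # illustration; the code fraction is assumed to be a share of the total mix.

    def mixture_budgets(base_tokens: int, code_fraction: float, setting: str) -> dict:
        """Return language/code token budgets for a given code fraction.

        - "competitive": the total volume of pretraining data is held constant,
          so more code means proportionally less natural language.
        - "additive": the volume of natural-language data is held constant,
          and code tokens are added on top of it.
        """
        if setting == "competitive":
            code = int(base_tokens * code_fraction)
            language = base_tokens - code
        elif setting == "additive":
            language = base_tokens  # language budget stays fixed
            code = int(base_tokens * code_fraction / (1 - code_fraction))
        else:
            raise ValueError(f"unknown setting: {setting}")
        return {"language_tokens": language, "code_tokens": code}

    # Example: with a 10B-token base budget and a 25% code share,
    # competitive -> 7.5B language + 2.5B code (10B total);
    # additive    -> 10B language + ~3.33B code (~13.33B total).
    print(mixture_budgets(10_000_000_000, 0.25, "competitive"))
    print(mixture_budgets(10_000_000_000, 0.25, "additive"))
    ```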

    Original language: English (US)
    Journal: Transactions on Machine Learning Research
    Volume: 2025
    State: Published - 2025

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Computer Vision and Pattern Recognition
