Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting

Miles Turpin, Julian Michael, Ethan Perez, Samuel R. Bowman

    Research output: Chapter in Book/Report/Conference proceedingConference contribution

    Abstract

    Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. This level of transparency into LLMs' predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs-e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always “(A)”-which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods.

    Original languageEnglish (US)
    Title of host publicationAdvances in Neural Information Processing Systems 36 - 37th Conference on Neural Information Processing Systems, NeurIPS 2023
    EditorsA. Oh, T. Neumann, A. Globerson, K. Saenko, M. Hardt, S. Levine
    PublisherNeural information processing systems foundation
    ISBN (Electronic)9781713899921
    StatePublished - 2023
    Event37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
    Duration: Dec 10 2023Dec 16 2023

    Publication series

    NameAdvances in Neural Information Processing Systems
    Volume36
    ISSN (Print)1049-5258

    Conference

    Conference37th Conference on Neural Information Processing Systems, NeurIPS 2023
    Country/TerritoryUnited States
    CityNew Orleans
    Period12/10/2312/16/23

    ASJC Scopus subject areas

    • Computer Networks and Communications
    • Information Systems
    • Signal Processing

    Fingerprint

    Dive into the research topics of 'Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting'. Together they form a unique fingerprint.

    Cite this