Abstract
Interventions to counter misinformation are often less effective for polarizing content on social media platforms. We sought to overcome this limitation by testing an identity-based intervention, which aims to promote accuracy by incorporating normative cues directly into the social media user interface. Across three pre-registered experiments in the US (N = 1709) and UK (N = 804), we found that crowdsourcing accuracy judgements by adding a Misleading count (next to the Like count) reduced participants' reported likelihood of sharing inaccurate information about partisan issues by 25% (compared with a control condition). The Misleading count was also more effective when it reflected in-group norms (from fellow Democrats/Republicans) than when it reflected the norms of general users, though this effect was absent in a less politically polarized context (the UK). Moreover, the normative intervention was roughly five times as effective as another popular misinformation intervention, the accuracy nudge, which reduced misinformation sharing by 5%. Extreme partisanship did not undermine the effectiveness of the intervention. Our results suggest that identity-based interventions grounded in the science of social norms can be more effective than identity-neutral alternatives at countering partisan misinformation in politically polarized contexts (e.g. the US).
| Original language | English (US) |
|---|---|
| Article number | 20230040 |
| Journal | Philosophical Transactions of the Royal Society B: Biological Sciences |
| Volume | 379 |
| Issue number | 1897 |
| DOIs | |
| State | Published - Mar 11 2024 |
Keywords
- intervention
- misinformation
- social identity
- social media
- social norms
ASJC Scopus subject areas
- General Biochemistry, Genetics and Molecular Biology
- General Agricultural and Biological Sciences