TY - JOUR
T1 - Short of Suspension
T2 - How Suspension Warnings Can Reduce Hate Speech on Twitter
AU - Yildirim, Mustafa Mikdat
AU - Nagler, Jonathan
AU - Bonneau, Richard
AU - Tucker, Joshua A.
N1 - Funding Information:
Yildirim and Tucker designed the research; Yildirim performed the research; Yildirim, Nagler, Bonneau, and Tucker planned the analyses; Yildirim analyzed the data and wrote the first draft of the paper, and all authors contributed to revisions. The authors are thankful to the New York University Center for Social Media and Politics (CSMaP) weekly meetings and the New York University Comparative Politics Workshop for their helpful feedback. The Center for Social Media and Politics at New York University is generously supported by funding from the National Science Foundation, the John S. and James L. Knight Foundation, the Charles Koch Foundation, the Hewlett Foundation, Craig Newmark Philanthropies, the Siegel Family Endowment, and New York University’s Office of the Provost.
Publisher Copyright:
© The Author(s), 2021. Published by Cambridge University Press on behalf of the American Political Science Association.
PY - 2021
Y1 - 2021
N2 - Debates around the effectiveness of high-profile Twitter account suspensions and similar bans on abusive users across social media platforms abound. Yet we know little about the effectiveness of warning a user about the possibility of suspending their account as opposed to outright suspensions in reducing hate speech. With a pre-registered experiment, we provide causal evidence that a warning message can reduce the use of hateful language on Twitter, at least in the short term. We design our messages based on the literature on deterrence, and test versions that emphasize the legitimacy of the sender, the credibility of the message, and the costliness of being suspended. We find that the act of warning a user of the potential consequences of their behavior can significantly reduce their hateful language for one week. We also find that warning messages that aim to appear legitimate in the eyes of the target user seem to be the most effective. In light of these findings, we consider the policy implications of platforms adopting a more aggressive approach to warning users that their accounts may be suspended as a tool for reducing hateful speech online.
UR - http://www.scopus.com/inward/record.url?scp=85120318291&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85120318291&partnerID=8YFLogxK
DO - 10.1017/S1537592721002589
M3 - Article
AN - SCOPUS:85120318291
SN - 1537-5927
JO - Perspectives on Politics
JF - Perspectives on Politics
ER -