TY - JOUR
T1 - Language statistical learning responds to reinforcement learning principles rooted in the striatum
AU - Orpella, Joan
AU - Mas-Herrero, Ernest
AU - Ripollés, Pablo
AU - Marco-Pallarés, Josep
AU - de Diego-Balaguer, Ruth
N1 - Publisher Copyright:
© 2021 Orpella et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
PY - 2021/9
Y1 - 2021/9
N2 - Statistical learning (SL) is the ability to extract regularities from the environment. In the domain of language, this ability is fundamental to the learning of words and structural rules. In the absence of reliable online measures, statistical word and rule learning have been primarily investigated using offline (post-familiarization) tests, which provide limited insight into the dynamics of SL and its neural basis. Here, we capitalize on a novel task that tracks the online SL of simple syntactic structures, combined with computational modeling, to show that online SL responds to reinforcement learning principles rooted in striatal function. Specifically, we demonstrate, in 2 different cohorts, that a temporal difference model, which relies on prediction errors, accounts for participants' online learning behavior. We then show that the trial-by-trial development of predictions through learning strongly correlates with activity in both the ventral and dorsal striatum. Our results thus provide a detailed mechanistic account of language-related SL and an explanation for the oft-cited implication of the striatum in SL tasks. This work, therefore, bridges the long-standing gap between language learning and reinforcement learning phenomena.
UR - http://www.scopus.com/inward/record.url?scp=85114917265&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85114917265&partnerID=8YFLogxK
U2 - 10.1371/journal.pbio.3001119
DO - 10.1371/journal.pbio.3001119
M3 - Article
C2 - 34491980
AN - SCOPUS:85114917265
SN - 1544-9173
VL - 19
JO - PLoS Biology
JF - PLoS Biology
IS - 9
M1 - e3001119
ER -