TY - GEN
T1 - Quantificational features in distributional word representations
AU - Linzen, Tal
AU - Dupoux, Emmanuel
AU - Spector, Benjamin
N1 - Funding Information:
We thank Marco Baroni, Emmanuel Chemla, Anne Christophe and Omer Levy for comments and technical assistance. This research was supported by the European Research Council (grant ERC-2011-AdG 295810 BOOTPHON) and the Agence Nationale pour la Recherche (grants ANR-10-IDEX-0001-02 PSL and ANR-10-LABX-0087 IEC).
PY - 2016
Y1 - 2016
N2 - Do distributional word representations encode the linguistic regularities that theories of meaning argue they should encode? We address this question in the case of the logical properties (monotonicity, force) of quantificational words such as everything (in the object domain) and always (in the time domain). Using the vector offset approach to solving word analogies, we find that the skip-gram model of distributional semantics behaves in a way that is remarkably consistent with encoding these features in some domains, with accuracy approaching 100%, especially with medium-sized context windows. Accuracy in other domains was less impressive. We compare the performance of the model to the behavior of human participants, and find that humans performed well even where the models struggled.
AB - Do distributional word representations encode the linguistic regularities that theories of meaning argue they should encode? We address this question in the case of the logical properties (monotonicity, force) of quantificational words such as everything (in the object domain) and always (in the time domain). Using the vector offset approach to solving word analogies, we find that the skip-gram model of distributional semantics behaves in a way that is remarkably consistent with encoding these features in some domains, with accuracy approaching 100%, especially with medium-sized context windows. Accuracy in other domains was less impressive. We compare the performance of the model to the behavior of human participants, and find that humans performed well even where the models struggled.
UR - http://www.scopus.com/inward/record.url?scp=85036473720&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85036473720&partnerID=8YFLogxK
U2 - 10.18653/v1/s16-2001
DO - 10.18653/v1/s16-2001
M3 - Conference contribution
AN - SCOPUS:85036473720
T3 - *SEM 2016 - 5th Joint Conference on Lexical and Computational Semantics, Proceedings
SP - 1
EP - 11
BT - *SEM 2016 - 5th Joint Conference on Lexical and Computational Semantics, Proceedings
PB - Association for Computational Linguistics (ACL)
T2 - 5th Joint Conference on Lexical and Computational Semantics, *SEM 2016
Y2 - 11 August 2016 through 12 August 2016
ER -