TY - GEN
T1 - Zero-shot relation extraction via reading comprehension
AU - Levy, Omer
AU - Seo, Minjoon
AU - Choi, Eunsol
AU - Zettlemoyer, Luke
N1 - Publisher Copyright:
© 2017 Association for Computational Linguistics.
PY - 2017
Y1 - 2017
N2 - We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.
UR - http://www.scopus.com/inward/record.url?scp=85048725520&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85048725520&partnerID=8YFLogxK
U2 - 10.18653/v1/k17-1034
DO - 10.18653/v1/k17-1034
M3 - Conference contribution
AN - SCOPUS:85048725520
T3 - CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings
SP - 333
EP - 342
BT - CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings
PB - Association for Computational Linguistics (ACL)
T2 - 21st Conference on Computational Natural Language Learning, CoNLL 2017
Y2 - 3 August 2017 through 4 August 2017
ER -