TY - GEN
T1 - RECIPE
T2 - ACL 2018 Workshop on Machine Reading for Question Answering, MRQA 2018
AU - Shvartzshnaider, Yan
AU - Balashankar, Ananth
AU - Wies, Thomas
AU - Subramanian, Lakshminarayanan
N1 - Funding Information:
We thank Schrasing Tong for his help in the initial stage of this work. This work is supported by the National Science Foundation under grant CCF-1350574.
Publisher Copyright:
© 2018 Association for Computational Linguistics
PY - 2018
Y1 - 2018
N2 - We describe our experiences in using an open-domain question answering model (Chen et al., 2017) to evaluate an out-of-domain QA task of assisting in analyzing privacy policies of companies. Specifically, the Relevant CI Parameters Extractor (RECIPE) seeks to answer questions posed by the theory of contextual integrity (CI) regarding the information flows described in privacy statements. These questions have a simple syntactic structure, and the answers are factoid or descriptive in nature. The model achieved an F1 score of 72.33, but we noticed that combining its results with a neural dependency parser-based approach yields a significantly higher F1 score of 92.35 compared to manual annotations. This indicates that future work that incorporates signals from parsing-like NLP tasks more explicitly can generalize better on out-of-domain tasks.
AB - We describe our experiences in using an open-domain question answering model (Chen et al., 2017) to evaluate an out-of-domain QA task of assisting in analyzing privacy policies of companies. Specifically, the Relevant CI Parameters Extractor (RECIPE) seeks to answer questions posed by the theory of contextual integrity (CI) regarding the information flows described in privacy statements. These questions have a simple syntactic structure, and the answers are factoid or descriptive in nature. The model achieved an F1 score of 72.33, but we noticed that combining its results with a neural dependency parser-based approach yields a significantly higher F1 score of 92.35 compared to manual annotations. This indicates that future work that incorporates signals from parsing-like NLP tasks more explicitly can generalize better on out-of-domain tasks.
UR - http://www.scopus.com/inward/record.url?scp=85063904089&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063904089&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85063904089
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 71
EP - 77
BT - ACL 2018 - Machine Reading for Question Answering, Proceedings of the Workshop
PB - Association for Computational Linguistics (ACL)
Y2 - 19 July 2018
ER -