RECIPE: Applying Open Domain Question Answering to Privacy Policies

Yan Shvartzshnaider, Ananth Balashankar, Thomas Wies, Lakshminarayanan Subramanian

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We describe our experiences in using an open-domain question answering model (Chen et al., 2017) on an out-of-domain QA task: assisting in the analysis of companies' privacy policies. Specifically, the Relevant CI Parameters Extractor (RECIPE) seeks to answer questions posed by the theory of contextual integrity (CI) about the information flows described in privacy statements. These questions have a simple syntactic structure, and the answers are factoid or descriptive in nature. The model achieved an F1 score of 72.33, but we observed that combining its results with a neural-dependency-parser-based approach yields a significantly higher F1 score of 92.35 when measured against manual annotations. This indicates that future work which incorporates signals from parsing-like NLP tasks more explicitly can generalize better on out-of-domain tasks.
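The F1 scores reported in the abstract are measured against manual annotations; the record does not give the evaluation procedure, but extractive-QA work of this kind typically scores predicted answer spans with SQuAD-style token-overlap F1. A minimal sketch of that metric (the function name and examples are illustrative, not from the paper):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted answer span and a gold annotation,
    in the style of SQuAD-like extractive-QA evaluation (an assumption here;
    the paper's exact scoring script is not given in this record)."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both the prediction and the gold answer.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical CI-parameter answers from a privacy-policy sentence:
# gold annotation "email address", model prediction "your email address".
print(token_f1("your email address", "email address"))
```

Here the prediction contains both gold tokens plus one extra, giving precision 2/3, recall 1, and F1 0.8; averaging such per-question scores over a test set yields aggregate numbers like the 72.33 and 92.35 reported above.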

Original language: English (US)
Title of host publication: ACL 2018 - Machine Reading for Question Answering, Proceedings of the Workshop
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 7
ISBN (Electronic): 9781948087391
State: Published - 2018
Event: ACL 2018 Workshop on Machine Reading for Question Answering, MRQA 2018 - Melbourne, Australia
Duration: Jul 19 2018 → …

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X


Conference: ACL 2018 Workshop on Machine Reading for Question Answering, MRQA 2018
Period: 7/19/18 → …

ASJC Scopus subject areas

  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

