TY - GEN
T1 - Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
AU - Shvartzshnaider, Yan
AU - Tong, Schrasing
AU - Wies, Thomas
AU - Kift, Paula
AU - Nissenbaum, Helen
AU - Subramanian, Lakshminarayanan
AU - Mittal, Prateek
N1 - Funding Information:
This work was supported in part by NSF award numbers CNS-1355398, CNS-1409415, CNS-1423139, CNS-1553437, and CNS-1617286. Copyright © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Publisher Copyright:
Copyright © 2016 Association for the Advancement of Artificial Intelligence.
PY - 2016/11/3
Y1 - 2016/11/3
N2 - Designing programmable privacy logic frameworks that correspond to social, ethical, and legal norms has been a fundamentally hard problem. Contextual integrity (CI) (Nissenbaum 2010) offers a model for conceptualizing privacy that is able to bridge technical design with ethical, legal, and policy approaches. While CI is capable of capturing the various components of contextual privacy in theory, it is challenging to discover and formally express these norms in operational terms. In the following, we propose a crowdsourcing method for the automated discovery of contextual norms. To evaluate the effectiveness and scalability of our approach, we conducted an extensive survey on Amazon's Mechanical Turk (AMT) with more than 450 participants and 1400 questions. The paper has three main takeaways: First, we demonstrate the ability to generate survey questions corresponding to privacy norms within any context. Second, we show that crowdsourcing enables the discovery of norms from these questions with strong majoritarian consensus among users. Finally, we demonstrate how the norms thus discovered can be encoded into a formal logic to automatically verify their consistency.
AB - Designing programmable privacy logic frameworks that correspond to social, ethical, and legal norms has been a fundamentally hard problem. Contextual integrity (CI) (Nissenbaum 2010) offers a model for conceptualizing privacy that is able to bridge technical design with ethical, legal, and policy approaches. While CI is capable of capturing the various components of contextual privacy in theory, it is challenging to discover and formally express these norms in operational terms. In the following, we propose a crowdsourcing method for the automated discovery of contextual norms. To evaluate the effectiveness and scalability of our approach, we conducted an extensive survey on Amazon's Mechanical Turk (AMT) with more than 450 participants and 1400 questions. The paper has three main takeaways: First, we demonstrate the ability to generate survey questions corresponding to privacy norms within any context. Second, we show that crowdsourcing enables the discovery of norms from these questions with strong majoritarian consensus among users. Finally, we demonstrate how the norms thus discovered can be encoded into a formal logic to automatically verify their consistency.
UR - http://www.scopus.com/inward/record.url?scp=85016462836&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85016462836&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85016462836
T3 - Proceedings of the 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016
SP - 209
EP - 218
BT - Proceedings of the 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016
A2 - Ghosh, Arpita
A2 - Lease, Matthew
PB - AAAI Press
T2 - 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016
Y2 - 30 October 2016 through 3 November 2016
ER -