Experimental analysis of privacy loss in DCOP algorithms

Rachel Greenstadt, Jonathan P. Pearce, Emma Bowring, Milind Tambe

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Distributed Constraint Optimization (DCOP) is rapidly emerging as a prominent technique for multiagent coordination. Unfortunately, rigorous quantitative evaluations of privacy loss in DCOP algorithms have been lacking, despite the fact that agent privacy is a key motivation for applying DCOPs in many applications. Recently, Maheswaran et al. [3,4] introduced a framework for quantitative evaluation of privacy in DCOP algorithms, showing that early DCOP algorithms lose more privacy than purely centralized approaches and questioning the motivation for applying DCOPs. Do state-of-the-art DCOP algorithms suffer from a similar shortcoming? This paper answers that question by investigating the most efficient DCOP algorithms, including both DPOP and ADOPT.

    Original language: English (US)
    Title of host publication: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems
    Pages: 1424-1426
    Number of pages: 3
    DOIs
    State: Published - 2006
    Event: Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS - Hakodate, Japan
    Duration: May 8, 2006 - May 12, 2006

    Publication series

    Name: Proceedings of the International Conference on Autonomous Agents
    Volume: 2006

    Conference

    Conference: Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
    Country/Territory: Japan
    City: Hakodate
    Period: 5/8/06 - 5/12/06

    Keywords

    • Constraint reasoning
    • DCOP
    • Privacy

    ASJC Scopus subject areas

    • General Engineering
