Group recommendations via multi-armed bandits

José Bento, Stratis Ioannidis, S. Muthukrishnan, Jinyun Yan

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    We study recommendations for persistent groups that repeatedly engage in a joint activity. We approach this as a multi-armed bandit problem. We design a recommendation policy and show that it achieves logarithmic regret. Our analysis also shows that regret depends linearly on d, the size of the underlying persistent group. We evaluate our policy on movie recommendations over the MovieLens and MoviePilot datasets.
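
    The paper's policy itself is not reproduced in this record. Purely to illustrate the setting the abstract describes, the sketch below applies the standard UCB1 index to group recommendation: each round, one item (arm) is recommended to the whole persistent group of d members, and the observed reward aggregates their responses. The Bernoulli reward model and all function and variable names here are hypothetical assumptions, not the authors' algorithm.

    import math
    import random

    def ucb1_group_recommend(n_items, d, true_means, n_rounds, seed=0):
        """Run UCB1 over n_items arms; each arm's reward is the average of
        d per-member Bernoulli responses (a stand-in for group satisfaction)."""
        rng = random.Random(seed)
        counts = [0] * n_items      # times each item has been recommended
        totals = [0.0] * n_items    # cumulative observed group reward per item
        best_mean = max(sum(m) / d for m in true_means)
        regret = 0.0                # expected (pseudo-)regret vs. the best item

        for t in range(1, n_rounds + 1):
            if t <= n_items:
                arm = t - 1         # play every arm once before using the index
            else:
                arm = max(
                    range(n_items),
                    key=lambda i: totals[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]),
                )
            # Group reward: fraction of the d members who liked the item.
            reward = sum(rng.random() < true_means[arm][j] for j in range(d)) / d
            counts[arm] += 1
            totals[arm] += reward
            regret += best_mean - sum(true_means[arm]) / d
        return regret

    if __name__ == "__main__":
        n_items, d = 5, 3
        rng = random.Random(42)
        # Hypothetical per-member appeal of each candidate item.
        means = [[rng.random() for _ in range(d)] for _ in range(n_items)]
        print("cumulative regret:", round(ucb1_group_recommend(n_items, d, means, 10000), 2))

    UCB1's cumulative regret grows logarithmically in the number of rounds, which matches the regret rate the abstract claims for the paper's (different) policy; the dependence on d in the paper's analysis is specific to their algorithm and is not captured by this sketch.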

    Original language: English (US)
    Title of host publication: WWW'12 - Proceedings of the 21st Annual Conference on World Wide Web Companion
    Pages: 463-464
    Number of pages: 2
    DOIs
    State: Published - 2012
    Event: 21st Annual Conference on World Wide Web, WWW'12 - Lyon, France
    Duration: Apr 16, 2012 – Apr 20, 2012

    Publication series

    Name: WWW'12 - Proceedings of the 21st Annual Conference on World Wide Web Companion

    Conference

    Conference: 21st Annual Conference on World Wide Web, WWW'12
    Country/Territory: France
    City: Lyon
    Period: 4/16/12 – 4/20/12

    Keywords

    • Group recommendation
    • Multi-armed bandits

    ASJC Scopus subject areas

    • Computer Networks and Communications
