Machine-crowd-expert model for increasing user engagement and annotation quality

Ana Elisa Méndez Méndez, Mark Cartwright, Juan Pablo Bello

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Crowdsourcing and active learning (AL) have been combined in the past with the goal of reducing annotation costs. However, two issues arise when using AL with crowdsourcing: the quality of the labels and user engagement. In this work, we propose a novel machine ⇔ crowd ⇔ expert loop model in which the forward connections of the loop aim to improve the quality of the labels and the backward connections aim to increase user engagement. In addition, we propose a research agenda for evaluating the model.

Original language: English (US)
Title of host publication: CHI EA 2019 - Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450359719
State: Published - May 2, 2019
Event: 2019 CHI Conference on Human Factors in Computing Systems, CHI EA 2019 - Glasgow, United Kingdom
Duration: May 4, 2019 – May 9, 2019

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2019 CHI Conference on Human Factors in Computing Systems, CHI EA 2019
Country/Territory: United Kingdom
City: Glasgow
Period: 5/4/19 – 5/9/19

Keywords

  • Active learning
  • Crowdsourcing
  • Sound classification

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design
