Abstract
Crowdsourcing is becoming a valuable method for companies and researchers to complete scores of micro-tasks through open calls on dedicated online platforms. Crowdsourcing results remain unreliable, however, as these platforms neither convey much information about workers' identities nor ensure the quality of the work done. Instead, it is the requester's responsibility to filter out bad workers and poorly accomplished tasks, and to aggregate worker results into a final outcome. In this paper, we first review techniques currently used to detect spammers and malicious workers, whether they are bots or humans completing tasks randomly or semi-randomly; we then expose the limitations of these techniques by proposing approaches that individuals, or groups of individuals, could use to attack a task on existing crowdsourcing platforms. We focus on crowdsourcing relevance judgements for search results as a concrete application of our techniques.
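By way of illustration, below is a minimal sketch of the kind of quality control the abstract alludes to: screening workers against gold (honeypot) questions with known answers, then aggregating the surviving relevance judgements by majority vote. The worker data, the gold answers, and the `MIN_GOLD_ACCURACY` threshold are all hypothetical; this is not the paper's method, only a common baseline that the attacks discussed in the paper target.

```python
from collections import Counter, defaultdict

# Hypothetical worker judgements: worker_id -> {task_id: label}.
# Labels are binary relevance judgements (1 = relevant, 0 = not relevant).
judgements = {
    "w1": {"t1": 1, "t2": 0, "g1": 1, "g2": 0},
    "w2": {"t1": 1, "t2": 0, "g1": 1, "g2": 0},
    "w3": {"t1": 0, "t2": 1, "g1": 0, "g2": 1},  # answers gold items wrongly
}

# Gold (honeypot) questions with known answers, used to screen workers.
gold = {"g1": 1, "g2": 0}
MIN_GOLD_ACCURACY = 0.75  # assumed threshold; tuned per task in practice

def gold_accuracy(answers):
    """Fraction of gold questions a worker answered correctly."""
    hits = sum(1 for q, truth in gold.items() if answers.get(q) == truth)
    return hits / len(gold)

# 1) Filter out workers who fail the gold questions (likely spammers/bots).
trusted = {w: a for w, a in judgements.items()
           if gold_accuracy(a) >= MIN_GOLD_ACCURACY}

# 2) Aggregate the remaining labels per non-gold task by majority vote.
votes = defaultdict(Counter)
for answers in trusted.values():
    for task, label in answers.items():
        if task not in gold:
            votes[task][label] += 1

result = {task: counter.most_common(1)[0][0] for task, counter in votes.items()}
print(result)  # {'t1': 1, 't2': 0} -- w3 was filtered out by the gold check
```

A coordinated group of malicious workers that learns or shares the gold answers can pass step 1 while still voting arbitrarily on real tasks, which is precisely the class of attack the paper examines.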
Original language | English (US)
---|---
Pages (from-to) | 20-25
Number of pages | 6
Journal | CEUR Workshop Proceedings
Volume | 842
State | Published - 2012
Event | 1st International Workshop on Crowdsourcing Web Search, CrowdSearch 2012 (workshop held in conjunction with the WWW 2012 conference), Lyon, France
Duration | Apr 17, 2012 → Apr 17, 2012
Keywords
- Adversarial IR
- Crowdsourcing
- Malicious workers
- Spam
ASJC Scopus subject areas
- General Computer Science