Crowdsourcing and human computation have slowly become a mainstay for many application areas that seek to leverage the crowd to develop high-quality datasets and annotations, and to solve problems beyond the reach of current AI solutions. One of the major challenges in the domain is ensuring high-quality and diligent work. In response, the literature has produced a large number of quality control mechanisms, each claiming (sometimes domain-specific) benefits and advantages when deployed in large-scale human computation projects. This creates a complex design space for practitioners: it is not always clear which mechanism(s) to use for maximal quality control. In this article, we argue that the importance of this decision is perhaps overinflated, and that the presence of "some kind" of quality control that is clearly visible to crowd workers is sufficient for "high-quality" solutions. To evidence this, and to provide a basis for discussion, we undertake two experiments exploring the relationship between task design, task complexity, quality control, and solution quality. We do this with natural language processing and image recognition tasks of varying complexity. We illustrate that minimal quality control is enough to deter consistently underperforming contributors, and that this effect is consistent across tasks of varying complexity and format. Our key takeaway: having quality control is necessary, but how it is implemented seemingly is not.
               