Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks
Crowdsourcing
Sample (material)
Repeatability
Crowdsourcing software development
DOI:
10.1609/hcomp.v7i1.5264
Publication Date:
2022-10-18
ABSTRACT
Crowdsourcing platforms provide a convenient and scalable way to collect human-generated labels on demand. This data can be used to train Artificial Intelligence (AI) systems or to evaluate the effectiveness of algorithms. The datasets generated by means of crowdsourcing are, however, dependent on many factors that affect their quality. These include, among others, population sample bias introduced by aspects like task reward, requester reputation, and other filters in the experimental design. In this paper, we analyse platform-related factors and study how they affect dataset characteristics by running a longitudinal study in which we compare the reliability of results collected through repeated experiments over time and across platforms. Results show that, under certain conditions: 1) experiments replicated on different platforms result in significantly different quality levels, while 2) data collected from repeated runs is stable within the same platform. We identify some key experimental design variables that cause such variations and propose an experimentally validated set of actions to counteract these effects, thus achieving reliable and repeatable crowdsourced data collection experiments.
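As an illustration of the kind of comparison the abstract describes, the sketch below contrasts per-batch label accuracy from two platforms with a simple permutation test. The batch accuracies, platform names, and helper functions are hypothetical assumptions for illustration only and are not taken from the paper's experiments or analysis pipeline.

# Hypothetical sketch: comparing crowdsourced label quality across two platforms
# with a permutation test on per-batch accuracy. All data and names below are
# illustrative assumptions, not the authors' actual setup.
import random

def accuracy(labels, gold):
    """Fraction of crowd labels that match the gold (expert) labels."""
    return sum(l == g for l, g in zip(labels, gold)) / len(gold)

def permutation_test(acc_a, acc_b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in mean batch accuracy."""
    rng = random.Random(seed)
    observed = abs(sum(acc_a) / len(acc_a) - sum(acc_b) / len(acc_b))
    pooled = list(acc_a) + list(acc_b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(acc_a)], pooled[len(acc_a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return count / n_perm

# Illustrative per-batch accuracies from repeated runs of the same task design:
platform_a = [0.91, 0.88, 0.90, 0.89, 0.92]  # stable within the platform
platform_b = [0.78, 0.81, 0.76, 0.80, 0.79]  # also stable, but at a different level

p = permutation_test(platform_a, platform_b)
print(f"p-value for the cross-platform accuracy difference: {p:.4f}")

A small p-value here would indicate that the two platforms yield significantly different quality levels even though each platform's repeated batches are internally consistent, which mirrors the two findings summarised in the abstract.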