Learning from multiple annotators with varying expertise
Subject areas: 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering
DOI: 10.1007/s10994-013-5412-1
Publication Date: 2013-10-18
AUTHORS (5)
ABSTRACT
Learning from multiple annotators or knowledge sources has become an important problem in machine learning and data mining. This is due in part to the ease with which data can now be shared and collected among entities pursuing a common goal, task, or data source, and to the resulting need to aggregate and draw inferences from the collected information. This paper focuses on the development of probabilistic approaches for statistical learning in this setting. It specifically considers the case where annotators may be unreliable and, moreover, where their expertise varies with the data they observe; that is, annotators may have better knowledge about different parts of the input space and therefore be inconsistently accurate across the task domain. The models developed address both the supervised and the semi-supervised settings and produce classification and annotator models that provide estimates of the true labels and of annotator expertise when no ground truth is available. In addition, we analyze the proposed models, tasks, and related practical problems under various scenarios; in particular, we address how to evaluate annotators and how to handle cases where some ground truth is available. We show experimentally that annotator expertise can indeed vary in real tasks and that the presented approaches offer clear advantages over previously introduced multi-annotator methods, which consider only input-independent annotator characteristics, and over alternative approaches that do not model multiple annotators.
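The paper's exact formulation is not reproduced on this page. Purely as an illustration of the kind of model the abstract describes, the sketch below assumes a logistic-regression classifier combined with per-annotator, input-dependent reliability models fitted by EM; every function and variable name here is hypothetical and the details will differ from the published models.

```python
# Minimal illustrative sketch (not the paper's exact model): binary classification
# from multiple annotators whose reliability depends on the input x.
# Assumed generative story (hypothetical, for illustration only):
#   - true label z ~ Bernoulli(sigmoid(alpha^T x))            (classifier)
#   - annotator r reports the true label with prob. sigmoid(w_r^T x)
#     (input-dependent expertise)
#   - parameters fitted by EM: the E-step infers p(z | x, annotator labels),
#     the M-step refits both logistic models on the resulting soft labels.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def em_multi_annotator(X, Y, n_iter=50, lr=0.1, inner_steps=20):
    """X: (n, d) features (include a bias column); Y: (n, R) binary labels from R annotators."""
    n, d = X.shape
    R = Y.shape[1]
    alpha = np.zeros(d)                 # classifier weights
    W = np.zeros((R, d))                # per-annotator reliability weights
    mu = Y.mean(axis=1)                 # initial soft labels: majority vote
    for _ in range(n_iter):
        # ----- M-step: classifier, by gradient ascent on the expected log-likelihood -----
        for _ in range(inner_steps):
            alpha += lr * X.T @ (mu - sigmoid(X @ alpha)) / n
        # expected probability that annotator r's observed label agrees with the true label
        agree = mu[:, None] * Y + (1 - mu[:, None]) * (1 - Y)   # (n, R)
        # ----- M-step: annotator reliability models (weighted logistic regressions) -----
        for _ in range(inner_steps):
            eta = sigmoid(X @ W.T)                               # (n, R) correctness prob.
            W += lr * ((agree - eta).T @ X) / n
        # ----- E-step: posterior over the true label z for every instance -----
        eta = sigmoid(X @ W.T)
        p1 = sigmoid(X @ alpha)
        like1 = np.prod(np.where(Y == 1, eta, 1 - eta), axis=1)  # p(labels | z = 1)
        like0 = np.prod(np.where(Y == 0, eta, 1 - eta), axis=1)  # p(labels | z = 0)
        mu = p1 * like1 / (p1 * like1 + (1 - p1) * like0 + 1e-12)
    return alpha, W, mu
```

The key design choice illustrated here is that each annotator's probability of being correct is a function of the input x rather than a single fixed quantity, which is the input-dependent expertise the abstract contrasts with earlier multi-annotator methods that assume input-independent annotator characteristics.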