Div400
Keywords: Crowdsourcing, Relevance, Benchmarking, Modalities, Landmark, Relevance Feedback
DOI: 10.1145/2557642.2563670
Publication Date: 2014-03-24
AUTHORS (6)
ABSTRACT
In this paper we propose a new dataset, Div400, designed to support shared evaluation in different areas of social media photo retrieval, e.g., machine analysis (re-ranking, learning), human-based computation (crowdsourcing) or hybrid approaches (relevance feedback, machine-crowd integration). Div400 comes with associated relevance and diversity assessments performed by human annotators. 396 landmark locations are represented via 43,418 Flickr photos, together with their metadata, Wikipedia page content, and content descriptors for the text and visual modalities. To facilitate distribution, only Creative Commons content was included in the dataset. The proposed dataset was validated during the 2013 Retrieving Diverse Social Images Task of the MediaEval Benchmarking Initiative for Multimedia Evaluation.
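Because the dataset pairs ranked photo lists with human relevance and diversity annotations, a typical use is scoring a retrieval run with precision and cluster recall at a fixed cutoff, the kind of measures used in diversification benchmarks. The sketch below is a minimal illustration of that computation only; the identifiers, the cluster mapping, and the toy data are assumptions for illustration and do not reflect the dataset's actual distribution format.

```python
from typing import Dict, List, Set


def precision_at_n(ranked_ids: List[str], relevant: Set[str], n: int = 20) -> float:
    """Fraction of the top-n ranked photos judged relevant."""
    top = ranked_ids[:n]
    return sum(1 for pid in top if pid in relevant) / float(n)


def cluster_recall_at_n(ranked_ids: List[str], photo_cluster: Dict[str, int], n: int = 20) -> float:
    """Fraction of annotated diversity clusters covered by the top-n photos.

    `photo_cluster` maps a relevant photo id to its annotated cluster id
    (hypothetical structure); irrelevant photos are simply absent from it.
    """
    all_clusters = set(photo_cluster.values())
    covered = {photo_cluster[pid] for pid in ranked_ids[:n] if pid in photo_cluster}
    return len(covered) / float(len(all_clusters)) if all_clusters else 0.0


if __name__ == "__main__":
    # Toy example: a hypothetical ranked list for one landmark query.
    ranked = ["p1", "p2", "p3", "p4", "p5"]
    relevant = {"p1", "p3", "p4", "p5"}
    clusters = {"p1": 0, "p3": 1, "p4": 0, "p5": 2}  # three annotated clusters
    print("P@5  =", precision_at_n(ranked, relevant, n=5))
    print("CR@5 =", cluster_recall_at_n(ranked, clusters, n=5))
```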