Dual cross-media relevance model for image annotation
DOI:
10.1145/1291233.1291380
Publication Date:
2007-10-14T12:51:38Z
AUTHORS (7)
ABSTRACT
Image annotation has been an active research topic in recent years due to its potential impact on both image understanding and web image retrieval. Existing relevance-model-based methods perform annotation by maximizing the joint probability of images and words, which is calculated as an expectation over the training images. However, the semantic gap and the dependence on training data restrict their performance and scalability. In this paper, a dual cross-media relevance model (DCMRM) is proposed for automatic image annotation; it estimates the joint probability by an expectation over the words in a pre-defined lexicon. DCMRM involves two kinds of relations critical to annotation: the word-to-image relation and the word-to-word relation. Both relations can be estimated by using web search techniques as well as available training data. Experiments conducted on the Corel dataset demonstrate the effectiveness of the proposed model.
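The scoring idea described in the abstract can be sketched in a few lines. The sketch below is an illustrative reading, not the authors' implementation: the joint probability of an image I and a candidate word w is taken as an expectation over the lexicon, combining a word-to-image relation P(I|v) and a word-to-word relation P(w|v). The lexicon and all probability values are made-up toy assumptions.

```python
# Toy DCMRM-style scoring: P(w, I) ∝ Σ_v P(I|v) · P(w|v) · P(v),
# where v ranges over a pre-defined lexicon. All numbers are invented.

LEXICON = ["tiger", "grass", "water", "sky"]

# Word-to-image relation P(I|v): how strongly each lexicon word relates
# to the query image (the paper estimates this with search techniques).
p_image_given_word = {"tiger": 0.6, "grass": 0.3, "water": 0.1, "sky": 0.0}

# Word-to-word relation P(w|v): similarity between lexicon words
# (the paper estimates this from search statistics / training data).
p_word_given_word = {
    "tiger": {"tiger": 0.7, "grass": 0.2, "water": 0.1, "sky": 0.0},
    "grass": {"tiger": 0.3, "grass": 0.5, "water": 0.1, "sky": 0.1},
    "water": {"tiger": 0.1, "grass": 0.2, "water": 0.6, "sky": 0.1},
    "sky":   {"tiger": 0.0, "grass": 0.1, "water": 0.2, "sky": 0.7},
}

# Word prior P(v); uniform here for simplicity.
p_word = {v: 1.0 / len(LEXICON) for v in LEXICON}

def dcmrm_score(w):
    """Joint score of word w and the image: expectation over the lexicon."""
    return sum(p_image_given_word[v] * p_word_given_word[v][w] * p_word[v]
               for v in LEXICON)

def annotate(top_k=2):
    """Return the top_k lexicon words ranked by their joint score."""
    return sorted(LEXICON, key=dcmrm_score, reverse=True)[:top_k]
```

With these toy values, `annotate()` ranks "tiger" first, since it dominates both the image relation and its own word-to-word support.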
REFERENCES (21)
CITATIONS (89)