What Is the Best Practice for CNNs Applied to Visual Instance Retrieval?
Keywords: discriminative models, feature representation, feature encoding, feature learning
DOI: 10.48550/arxiv.1611.01640
Publication Date: 2016-01-01
AUTHORS (4)
ABSTRACT
Previous work has shown that the feature maps of deep convolutional neural networks (CNNs) can be interpreted as the feature representation of a particular image region. Features aggregated from these feature maps have been exploited for image retrieval tasks and have achieved state-of-the-art performance in recent years. The key to the success of such methods is the feature representation. However, the different factors that impact the effectiveness of the features are still not explored thoroughly, and there is much less discussion about the best combination of them. The main contribution of our paper is a thorough evaluation of the various factors that affect the discriminative ability of features extracted from CNNs. Based on the evaluation results, we also identify the best choices for these factors and propose a new multi-scale method to encode the image effectively. Finally, we show that the proposed method generalises well and outperforms the state-of-the-art on four typical datasets used for visual instance retrieval.
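To illustrate the kind of feature-map aggregation and multi-scale encoding the abstract refers to, the sketch below builds a global image descriptor by spatially pooling the convolutional feature maps of a backbone at several image scales. It is a generic example under stated assumptions, not the authors' exact pipeline: the VGG16 backbone, the max-pooling aggregation, and the scale set (1.0, 0.707, 0.5) are illustrative choices.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Convolutional part of VGG16 used as the feature extractor.
# weights=None keeps the sketch self-contained; pretrained weights would be
# used in a real retrieval setting.
backbone = models.vgg16(weights=None).features.eval()

def global_descriptor(image, scales=(1.0, 0.707, 0.5)):
    """image: float tensor of shape (3, H, W); returns an L2-normalised descriptor."""
    descs = []
    with torch.no_grad():
        for s in scales:
            # Resize the input to the current scale.
            x = F.interpolate(image.unsqueeze(0), scale_factor=s,
                              mode="bilinear", align_corners=False)
            fmap = backbone(x)                      # (1, C, h, w) feature maps
            d = fmap.amax(dim=(2, 3)).squeeze(0)    # spatial max-pooling -> (C,)
            descs.append(F.normalize(d, dim=0))     # per-scale L2 normalisation
    # Aggregate the per-scale descriptors by summation, then re-normalise.
    return F.normalize(torch.stack(descs).sum(dim=0), dim=0)

# Cosine similarity between descriptors ranks database images against a query.
query, db_img = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
score = torch.dot(global_descriptor(query), global_descriptor(db_img))
print(float(score))

In a sketch like this, the choices evaluated in the paper (which layer's feature maps to use, how to pool them, how many scales to combine, and how to normalise) are exactly the factors that determine the discriminative ability of the final descriptor.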