Task-Customized Self-Supervised Pre-training with Scalable Dynamic Routing
DOI:
10.1609/aaai.v36i2.20079
Publication Date:
2022-07-04T10:31:57Z
AUTHORS (7)
ABSTRACT
Self-supervised learning (SSL), especially contrastive methods, has attracted attention recently as it learns effective transferable representations without semantic annotations. A common practice for self-supervised pre-training is to use as much data as possible. For a specific downstream task, however, involving irrelevant data may degenerate performance, as observed in our extensive experiments. On the other hand, existing SSL methods make it burdensome and infeasible to train separate downstream-task-customized models on customized datasets for different tasks. To address this issue, we propose a novel paradigm called Scalable Dynamic Routing (SDR), which can be trained once and deployed efficiently to different downstream tasks with task-customized pre-trained models. Specifically, we construct the SDRnet with various sub-nets and train each sub-net on only one subset of the data by data-aware progressive training. When a downstream task arrives, we route among all the sub-nets to get the best one along with its corresponding weights. Experiment results show that SDR can train 256 sub-nets on ImageNet simultaneously, and provides better transfer performance than a unified model trained on full ImageNet, achieving state-of-the-art (SOTA) averaged accuracy over 11 downstream classification tasks and AP on the PASCAL VOC detection task.
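The routing step in the abstract can be sketched as follows. This is a hedged illustration only, not the paper's implementation: `subnet_features`, `probe_accuracy`, and `route` are hypothetical names, the sub-nets are stand-in random projections, and the routing proxy (nearest-centroid accuracy on a held-out split) is one plausible cheap criterion for scoring each sub-net against a downstream task.

```python
import numpy as np

rng = np.random.default_rng(0)

def subnet_features(x, w):
    """Toy stand-in for one pre-trained sub-net: a fixed nonlinear projection."""
    return np.tanh(x @ w)

def probe_accuracy(feats, labels):
    """Cheap routing proxy (an assumption, not the paper's criterion):
    nearest-class-centroid accuracy on a held-out half of the task data."""
    n = len(labels) // 2
    train_f, test_f = feats[:n], feats[n:]
    train_y, test_y = labels[:n], labels[n:]
    centroids = np.stack([train_f[train_y == c].mean(axis=0)
                          for c in np.unique(train_y)])
    dists = ((test_f[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return float((np.argmin(dists, axis=1) == test_y).mean())

def route(subnets, x, y):
    """Score every sub-net on the downstream task; return the best index."""
    scores = [probe_accuracy(subnet_features(x, w), y) for w in subnets]
    return int(np.argmax(scores)), scores

# Toy downstream task: 2-class data routed among 3 candidate "sub-nets".
x = rng.normal(size=(200, 16))
y = (x[:, 0] > 0).astype(int)
subnets = [rng.normal(size=(16, 8)) for _ in range(3)]
best, scores = route(subnets, x, y)
```

The point of the sketch is the "train once, route per task" shape: all sub-nets are fixed after pre-training, and only a lightweight evaluation per sub-net is needed when a new task arrives.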