The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation
FOS: Computer and information sciences
Computer Vision and Pattern Recognition (cs.CV)
Machine Learning (cs.LG)
DOI:
10.48550/arXiv.2206.06487
Publication Date:
2022-06
AUTHORS (4)
ABSTRACT
Crossmodal knowledge distillation (KD) extends traditional knowledge distillation to multimodal learning and has demonstrated great success in various applications. To achieve knowledge transfer across modalities, a network pretrained on one modality is adopted as the teacher to provide supervision signals to a student network learning from another modality. In contrast to the empirical success reported in prior works, the working mechanism of crossmodal KD remains a mystery. In this paper, we present a thorough understanding of crossmodal KD. We begin with two case studies and demonstrate that KD is not a universal cure in crossmodal knowledge transfer. We then present the modality Venn diagram to understand modality relationships, and the modality focusing hypothesis, which reveals the decisive factor in the efficacy of crossmodal KD. Experimental results on six multimodal datasets help justify our hypothesis, diagnose failure cases, and point out directions for improving crossmodal knowledge transfer in the future.

Accepted by ICLR 2023 (top 5%). The first three authors contributed equally. Project website: https://zihuixue.github.io/MFH/index.html
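For readers unfamiliar with the setup the abstract describes, below is a minimal sketch of crossmodal KD, assuming the standard soft-label distillation loss of Hinton et al. (2015). The modality names, architectures, and hyperparameters (`temperature`, `alpha`) are illustrative placeholders, not the paper's exact configuration or method.

```python
# Minimal crossmodal KD sketch: a teacher pretrained on one modality
# (e.g., RGB video) supervises a student learning from paired data of
# another modality (e.g., audio). Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def crossmodal_kd_loss(student_logits, teacher_logits, labels,
                       temperature=4.0, alpha=0.5):
    """Combine hard-label cross-entropy with soft-label distillation."""
    # Soft targets from the frozen teacher, softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term is scaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(log_probs, soft_targets,
                  reduction="batchmean") * temperature ** 2
    # Standard supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Usage: the teacher sees modality A, the student sees paired modality B.
# teacher.eval()
# with torch.no_grad():
#     t_logits = teacher(rgb_batch)
# s_logits = student(audio_batch)
# loss = crossmodal_kd_loss(s_logits, t_logits, labels)
```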