Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations
Robustness
Fine-tuning
DOI:
10.48550/arxiv.2402.03325
Publication Date:
2024-01-08
AUTHORS (2)
Helen Qu, Sang Michael Xie
ABSTRACT
Models trained on a labeled source domain (e.g., images from wildlife camera traps) often generalize poorly when deployed on an out-of-distribution (OOD) target domain (e.g., images from new camera trap locations). In the domain adaptation setting where unlabeled target data is available, self-supervised pretraining (e.g., masked autoencoding or contrastive learning) is a promising method to mitigate this performance drop. Pretraining improves OOD error when the generic augmentations used (e.g., masking or cropping) connect the source and target domains, which may be far apart in input space. In this paper, we show on real-world tasks that standard fine-tuning after pretraining does not consistently improve over simply training from scratch on labeled source data. To better leverage pretraining for distribution shifts, we propose Connect Later: after pretraining with generic augmentations, fine-tune with targeted augmentations designed with knowledge of the shift. Pretraining learns good representations within each domain, while targeted augmentations connect the domains during fine-tuning. Connect Later improves average OOD error over standard fine-tuning and supervised learning on 4 datasets: it achieves state-of-the-art performance on astronomical time-series classification (AstroClassification) by 2.5%, wildlife species identification (iWildCam-WILDS) with ResNet-50 by 0.9%, and tumor identification (Camelyon17-WILDS) with DenseNet121 by 1.1%; as well as the best result on a new dataset for redshift prediction (Redshifts) by 0.03 RMSE (11% relative). Code and datasets are available at https://github.com/helenqu/connect-later.
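The recipe described in the abstract amounts to two training stages: self-supervised pretraining with generic augmentations on unlabeled data spanning both domains, followed by supervised fine-tuning on labeled source data with targeted augmentations. Below is a minimal PyTorch sketch of that pipeline on toy data; the encoder, masking function, and noise-based "targeted" augmentation are illustrative assumptions, not the paper's implementation (see the linked repository for that).

```python
# Minimal sketch of the Connect Later training recipe (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy unlabeled pool spanning source + target domains, and a labeled source set.
unlabeled_x = torch.randn(256, 32)                       # hypothetical inputs
source_x, source_y = torch.randn(128, 32), torch.randint(0, 4, (128,))

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
decoder = nn.Linear(64, 32)      # reconstruction head used only for pretraining
classifier = nn.Linear(64, 4)    # task head used only for fine-tuning

def generic_augment(x, mask_frac=0.5):
    """Generic augmentation (random masking), applied during pretraining."""
    mask = (torch.rand_like(x) > mask_frac).float()
    return x * mask

def targeted_augment(x, noise_scale=0.3):
    """Placeholder targeted augmentation: in practice this is designed with
    knowledge of the source-to-target shift (e.g., redshifting light curves)."""
    return x + noise_scale * torch.randn_like(x)

# Stage 1: self-supervised pretraining (masked reconstruction) with generic augmentations.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(50):
    recon = decoder(encoder(generic_augment(unlabeled_x)))
    loss = nn.functional.mse_loss(recon, unlabeled_x)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tuning on labeled source data with *targeted* augmentations,
# which connect the source and target domains during fine-tuning.
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
for _ in range(50):
    logits = classifier(encoder(targeted_augment(source_x)))
    loss = nn.functional.cross_entropy(logits, source_y)
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design choice the sketch mirrors is that generic augmentations are used only for pretraining, while the shift-aware targeted augmentations are applied only at fine-tuning time.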