AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models
DOI: 10.48550/arxiv.2310.03024
Publication Date: 2023-01-01
AUTHORS (14)
ABSTRACT
We present AstroCLIP, a single, versatile model that can embed both galaxy images and spectra into a shared, physically meaningful latent space. These embeddings can then be used - without any model fine-tuning - for a variety of downstream tasks, including (1) accurate in-modality and cross-modality semantic similarity search, (2) photometric redshift estimation, (3) galaxy property estimation from both images and spectra, and (4) morphology classification. Our approach to implementing AstroCLIP consists of two parts. First, we embed galaxy images and spectra separately by pretraining separate transformer-based image and spectrum encoders in self-supervised settings. We then align the encoders using a contrastive loss. We apply our method to spectra from the Dark Energy Spectroscopic Instrument and images from its corresponding Legacy Imaging Survey. Overall, we find remarkable performance on all downstream tasks, even relative to supervised baselines. For example, for a task like photometric redshift prediction, we find similar performance to a specifically-trained ResNet18, and for additional tasks like physical property estimation (stellar mass, age, metallicity, and sSFR), we beat this supervised baseline by 19\% in terms of $R^2$. We also compare our results to a state-of-the-art self-supervised single-modal model for galaxy images, and find that our approach outperforms this benchmark by roughly a factor of two on photometric redshift and physical property estimation in terms of $R^2$, while remaining roughly in-line on morphology classification. Ultimately, our approach represents the first cross-modal self-supervised model for galaxies, and the first self-supervised transformer-based architectures for galaxy images and spectra.
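The contrastive alignment step described in the abstract pairs each galaxy's image embedding with its spectrum embedding and pulls matched pairs together while pushing mismatched pairs apart. A minimal NumPy sketch of a symmetric CLIP-style (InfoNCE) loss is shown below; the function name, array shapes, and temperature value are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def clip_contrastive_loss(img_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, spec_emb: (N, D) arrays where row i of each array
    encodes the same galaxy. All names/values here are illustrative.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    spec = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)

    logits = img @ spec.T / temperature      # (N, N) similarity matrix
    labels = np.arange(len(img))             # matched pairs lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lg)), labels].mean()

    # average the image->spectrum and spectrum->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned embeddings the loss approaches zero; for unrelated embeddings it is roughly log(N). Once trained, the shared space supports the abstract's downstream tasks directly, e.g. cross-modal similarity search via nearest neighbors on the normalized embeddings.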