Structure-preserving contrastive learning for spatial time series

FOS: Computer and information sciences; Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV)
DOI: 10.48550/arxiv.2502.06380
Publication Date: 2025-02-10
ABSTRACT
Informative representations enhance model performance and generalisability in downstream tasks. However, learning self-supervised representations for spatially characterised time series, such as traffic interactions, poses challenges because it requires maintaining fine-grained similarity relations in the latent space. In this study, we incorporate two structure-preserving regularisers into the contrastive learning of spatial time series: one regulariser preserves the topology of similarities between instances, while the other preserves the graph geometry across temporal dimensions. To balance contrastive learning and structure preservation, we propose a dynamic weighting mechanism that adaptively manages the trade-off and stabilises training. We conduct experiments on multivariate time series classification, as well as on macroscopic and microscopic traffic prediction. For all three tasks, our approach preserves structures more effectively and improves state-of-the-art task performances. The proposed approach can be applied to an arbitrary encoder and is particularly beneficial for data with spatial or geographical features. Furthermore, this study suggests that higher structure preservation indicates more informative and useful representations. This may help to understand the contribution of representation learning to pattern recognition with neural networks. Our code and resulting data are made openly accessible at https://github.com/yiru-jiao/spclt.
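To make the abstract's idea concrete, the following is a minimal numpy sketch of contrastive learning with a structure-preserving regulariser and a dynamic weight. It is an illustration under assumptions, not the paper's implementation: the function names, the cosine-similarity-matching regulariser, and the linear weighting schedule are all hypothetical stand-ins for the components the abstract describes.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss between two batches of embeddings.
    Matched rows of z1 and z2 are treated as positive pairs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                  # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # positives on the diagonal

def structure_regulariser(x, z):
    """Penalise mismatch between pairwise cosine similarities in the
    input space and in the latent space (a simple stand-in for the
    topology/geometry preservation described in the abstract)."""
    def cos_sim(a):
        a = a.reshape(len(a), -1)
        g = a @ a.T
        n = np.sqrt(np.diag(g))
        return g / np.outer(n, n)
    return np.mean((cos_sim(x) - cos_sim(z)) ** 2)

def total_loss(x, z1, z2, step, total_steps):
    """Contrastive loss plus a dynamically weighted structure term.
    The linear schedule below is an assumed example of how a dynamic
    weighting mechanism might trade off the two objectives."""
    l_contrastive = info_nce(z1, z2)
    l_structure = structure_regulariser(x, (z1 + z2) / 2)
    w = step / total_steps                            # weight grows over training
    return l_contrastive + w * l_structure
```

A usage example: given a batch of raw series `x` and embeddings `z1`, `z2` from two augmented views, `total_loss(x, z1, z2, step, total_steps)` yields a scalar that an optimiser would minimise.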