Applying masked autoencoder-based self-supervised learning for high-capability vision transformers of electrocardiographies
DOI: 10.1371/journal.pone.0307978
Publication Date: 2024-08-14
ABSTRACT
The generalization of deep neural network algorithms to a broader population is an important challenge in the medical field. We aimed to apply self-supervised learning using masked autoencoders (MAEs) to improve the performance of 12-lead electrocardiography (ECG) analysis models trained with limited ECG data. We pretrained Vision Transformer (ViT) models by reconstructing masked ECG data with an MAE. We fine-tuned this MAE-based pretrained model on ECG-echocardiography data from the University of Tokyo Hospital (UTokyo) for the detection of left ventricular systolic dysfunction (LVSD), and then evaluated it in a multi-center external validation across seven institutions, employing the area under the receiver operating characteristic curve (AUROC) for assessment. We included 38,245 ECG-echocardiography pairs from UTokyo and 229,439 pairs from all institutions. The performance of the MAE-based model was significantly higher than that of other deep neural network models across cohorts (AUROC, 0.913–0.962 for LVSD, p < 0.001). Moreover, we also found performance improvements depending on model capacity and the amount of training data. Additionally, the model maintained high performance even on a benchmark dataset (PTB-XL). Our proposed method thus developed high-performance ECG analysis models from limited labeled ECG data.
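The abstract describes pretraining a ViT on 12-lead ECG waveforms by reconstructing masked input with an MAE, before fine-tuning for LVSD detection. The sketch below is a minimal, illustrative PyTorch version of that pretraining step only; the `ECGMAE` class, patch length, masking ratio, and model dimensions are assumptions for illustration and are not taken from the paper.

```python
# Minimal MAE-style self-supervised pretraining sketch for 12-lead ECG.
# Sizes, patching scheme, and masking ratio are illustrative assumptions.
import torch
import torch.nn as nn

class ECGMAE(nn.Module):
    def __init__(self, n_leads=12, sig_len=5000, patch_len=50,
                 dim=256, depth=4, heads=8, mask_ratio=0.75):
        super().__init__()
        assert sig_len % patch_len == 0
        self.n_patches = sig_len // patch_len
        self.mask_ratio = mask_ratio
        # Each token is a (n_leads x patch_len) slice of the raw signal.
        self.patch_embed = nn.Linear(n_leads * patch_len, dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, self.n_patches, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        dec_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, 2)
        self.head = nn.Linear(dim, n_leads * patch_len)  # reconstruct raw patches

    def patchify(self, x):                        # x: (B, n_leads, sig_len)
        B, L, _ = x.shape
        x = x.reshape(B, L, self.n_patches, -1)   # (B, L, N, patch_len)
        return x.permute(0, 2, 1, 3).reshape(B, self.n_patches, -1)

    def forward(self, x):
        patches = self.patchify(x)                       # (B, N, L*patch_len)
        tokens = self.patch_embed(patches) + self.pos_embed
        B, N, D = tokens.shape
        # Randomly keep a small subset of patches; the rest are masked out.
        n_keep = int(N * (1 - self.mask_ratio))
        ids_keep = torch.rand(B, N, device=x.device).argsort(dim=1)[:, :n_keep]
        idx = ids_keep.unsqueeze(-1).expand(-1, -1, D)
        visible = torch.gather(tokens, 1, idx)
        latent = self.encoder(visible)                   # encode visible patches only
        # Re-insert mask tokens at the masked positions before decoding.
        full = self.mask_token.repeat(B, N, 1).scatter(1, idx, latent)
        recon = self.head(self.decoder(full + self.pos_embed))
        # Reconstruction (MSE) loss is computed on the masked patches only.
        mask = torch.ones(B, N, device=x.device).scatter_(1, ids_keep, 0.0)
        loss = (((recon - patches) ** 2).mean(-1) * mask).sum() / mask.sum()
        return loss

# Dummy usage: 8 ECGs, 12 leads, 10 s at 500 Hz.
model = ECGMAE()
loss = model(torch.randn(8, 12, 5000))
loss.backward()
```

After pretraining, the encoder would typically be reused with a classification head, fine-tuned on labeled ECG-echocardiography pairs for LVSD, and evaluated by AUROC on held-out cohorts, mirroring the workflow summarized in the abstract.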