Self supervised contrastive learning for digital histopathology
DOI: 10.1016/j.mlwa.2021.100198
Publication Date: 2021-11-06
AUTHORS (3)
ABSTRACT
Unsupervised learning has been a long-standing goal of machine learning and is especially important for medical image analysis, where learning without labels can compensate for the scarcity of annotated datasets. A promising subclass of unsupervised learning is self-supervised learning, which aims to learn salient features using the raw input itself as the learning signal. In this work, we tackle the issue of learning domain-specific features without any supervision to improve multiple task performances that are of interest to the digital histopathology community. We apply a contrastive self-supervised learning method by collecting and pretraining on 57 histopathology datasets without any labels. We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features. Furthermore, using more images for pretraining leads to better performance in multiple downstream tasks, albeit with diminishing returns as more unlabeled images are incorporated into the pretraining. Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet-pretrained networks, boosting task performances by more than 28% in F1 scores on average. Interestingly, we did not observe a consistent correlation between the pretraining dataset site or organ and the downstream task (e.g., pretraining with only breast images does not necessarily lead to superior performance on breast-related tasks). These findings may also be useful when applying newer contrastive techniques to histopathology data. Pretrained PyTorch models are made publicly available at https://github.com/ozanciga/self-supervised-histopathology.
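The contrastive pretraining described in the abstract is typically driven by a normalized temperature-scaled cross-entropy (NT-Xent) objective over pairs of augmented views: two augmentations of the same patch form a positive pair, while all other patches in the batch act as negatives. The sketch below is an illustrative NumPy implementation of that loss, not the authors' released code; the function name, shapes, and default temperature are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal NT-Xent contrastive loss sketch (illustrative, not the paper's code).

    z1, z2: (N, D) embeddings for two augmented views of the same N
    image patches. Row i of z1 and row i of z2 form a positive pair;
    every other row in the combined batch serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z1.shape[0]
    # Exclude each row's similarity with itself from the softmax.
    np.fill_diagonal(sim, -np.inf)
    # Positive index for each row: i in z1 pairs with i in z2 and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Mean cross-entropy of picking the positive among all 2N - 1 candidates.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), pos]).mean()
```

Intuitively, the loss falls when the two views of each patch embed close together relative to the rest of the batch, which is what rewards the network for learning staining- and resolution-robust features.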