Mitigating Simplicity Bias in Deep Learning for Improved OOD Generalization and Robustness
Simplicity
Robustness
Regularization
Inductive bias
Complement
DOI:
10.48550/arxiv.2310.06161
Publication Date:
2023-01-01
AUTHORS (3)
ABSTRACT
Neural networks (NNs) are known to exhibit simplicity bias, where they tend to prefer learning 'simple' features over more 'complex' ones, even when the latter may be more informative. Simplicity bias can lead to the model making biased predictions that have poor out-of-distribution (OOD) generalization. To address this, we propose a framework that encourages the model to use a diverse set of features to make predictions. We first train a simple model, and then regularize the conditional mutual information with respect to it to obtain the final model. We demonstrate the effectiveness of this framework in various problem settings and real-world applications, showing that it effectively addresses simplicity bias, leads to more diverse features being used, enhances OOD generalization, and improves subgroup robustness and fairness. We complement these results with theoretical analyses of the effect of the regularization and its generalization properties.
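The abstract's key ingredient is a regularizer on the conditional mutual information between the final model's predictions and the simple model's predictions, given the label. As a minimal sketch of the quantity involved (not the paper's actual implementation), the following numpy function computes the plug-in empirical estimate of I(A; B | Y) from discrete predictions; the function name and the discrete, plug-in estimator are illustrative assumptions:

```python
import numpy as np

def conditional_mutual_information(a, b, y):
    """Plug-in (empirical) estimate of I(A; B | Y) in nats.

    a, b, y: 1-D arrays of discrete values (e.g. predicted classes of the
    final model, predicted classes of the simple model, and true labels).
    """
    a, b, y = np.asarray(a), np.asarray(b), np.asarray(y)
    cmi = 0.0
    for yv in np.unique(y):
        m = (y == yv)
        p_y = m.mean()  # empirical P(Y = yv)
        # Joint histogram of (a, b) within this label slice.
        _, a_idx = np.unique(a[m], return_inverse=True)
        _, b_idx = np.unique(b[m], return_inverse=True)
        joint = np.zeros((a_idx.max() + 1, b_idx.max() + 1))
        np.add.at(joint, (a_idx, b_idx), 1.0)
        joint /= joint.sum()
        pa = joint.sum(axis=1, keepdims=True)  # marginal of A given Y=yv
        pb = joint.sum(axis=0, keepdims=True)  # marginal of B given Y=yv
        nz = joint > 0
        cmi += p_y * np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz]))
    return cmi
```

In the framework described above, a differentiable surrogate of this quantity would be added to the training loss of the final model, so that its predictions carry little information about the simple model's predictions beyond what the label already explains; a large estimate here means the final model is reusing the simple model's features.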