Imprecise Bayesian Neural Networks

Deep Neural Networks
DOI: 10.48550/arxiv.2302.09656 Publication Date: 2023-01-01
ABSTRACT
Uncertainty quantification and robustness to distribution shifts are important goals in machine learning and artificial intelligence. Although Bayesian Neural Networks (BNNs) allow for uncertainty in the predictions to be assessed, different sources of uncertainty are indistinguishable. We present Credal Bayesian Deep Learning (CBDL). Heuristically, CBDL allows one to train an (uncountably) infinite ensemble of BNNs, using only finitely many elements. This is possible thanks to prior and likelihood finitely generated credal sets (FGCSs), a concept from the imprecise probability literature. Intuitively, convex combinations of a finite collection of prior-likelihood pairs are able to represent infinitely many such pairs. After training, CBDL outputs a set of posteriors on the parameters of the neural network. At inference time, the posterior set is used to derive a set of predictive distributions that is in turn utilized to distinguish between aleatoric and epistemic uncertainties, and to quantify them. The predictive set also produces either (i) a collection of outputs enjoying desirable probabilistic guarantees, or (ii) the single output deemed best, that is, the one having the highest lower probability -- another imprecise-probabilistic concept. CBDL is more robust than BNNs to prior and likelihood misspecification, and to distribution shift. We show that it is better at quantifying and disentangling different types of uncertainties than Bayesian Model Averaging. In addition, we apply CBDL to two case studies to demonstrate its downstream-task capabilities: one, motion prediction in autonomous driving scenarios, and two, modeling blood glucose and insulin dynamics for artificial pancreas control. CBDL performs better when compared to the baseline.
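The inference-time step described above — turning a finite set of predictive distributions into probability intervals, uncertainty estimates, and a "highest lower probability" decision — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the predictive distributions below are made-up numbers standing in for the outputs of the finitely many BNNs that generate the credal set, and the min/max over them suffices because a linear functional over a convex polytope attains its extrema at the vertices.

```python
import numpy as np

# Hypothetical example: each row is the predictive distribution (over 3
# classes) produced by one extreme element of the finite BNN collection
# whose convex hull forms the credal set.
preds = np.array([
    [0.70, 0.20, 0.10],
    [0.55, 0.30, 0.15],
    [0.60, 0.35, 0.05],
])

# Lower/upper predictive probabilities per class over the credal set:
# extrema over the convex hull are attained at the finitely many vertices,
# so a per-class min/max over the rows is enough.
lower = preds.min(axis=0)
upper = preds.max(axis=0)

# A simple proxy for epistemic uncertainty: the width of each class's
# probability interval (it vanishes when all BNNs agree).
epistemic = upper - lower

# Decision rule (ii) from the abstract: return the single output with the
# highest lower predictive probability.
best_class = int(np.argmax(lower))

print("lower:", lower)          # [0.55 0.20 0.05]
print("upper:", upper)          # [0.70 0.35 0.15]
print("interval width:", epistemic)
print("best class:", best_class)  # 0
```

Rule (i) from the abstract would instead return the whole set of outputs compatible with the credal predictive set; the interval `[lower, upper]` per class is the imprecise-probabilistic object both rules are derived from.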