Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks
Bayesian Optimization
Black box
DOI:
10.48550/arXiv.2302.07260
Publication Date:
2023-01-01
AUTHORS (5)
ABSTRACT
Several fundamental problems in science and engineering consist of global optimization tasks involving unknown high-dimensional (black-box) functions that map a set of controllable variables to the outcomes of an expensive experiment. Bayesian Optimization (BO) techniques are known to be effective in tackling global optimization problems using a relatively small number of objective function evaluations, but their performance suffers when dealing with high-dimensional outputs. To overcome the major challenge of dimensionality, here we propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors. Using appropriate architecture choices, we show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or infinite-dimensional function spaces. In the context of BO, we augment the proposed probabilistic surrogates with re-parameterized Monte Carlo approximations of multiple-point (parallel) acquisition functions, as well as methodological extensions for accommodating black-box constraints and multi-fidelity information sources. We test the proposed framework against state-of-the-art methods and demonstrate superior performance across several challenging tasks with high-dimensional outputs, including a constrained optimization task involving shape optimization of rotor blades in turbo-machinery.
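The abstract points to two main ingredients: surrogates built as bootstrapped ensembles of neural networks with randomized (frozen) priors, and Monte Carlo estimates of multi-point acquisition functions computed over the ensemble. The sketch below is a minimal, hypothetical illustration of these ideas in JAX; it is not the authors' implementation, and all names (init_mlp, rpn_predict, fit_member, q_ei) and settings (prior scale beta, network sizes, learning rate) are assumptions made for illustration.

# Hypothetical sketch (not the paper's code): bootstrapped ensemble of
# randomized-prior MLPs as a BO surrogate, plus a Monte Carlo estimate of a
# multi-point (parallel) expected-improvement acquisition over the ensemble.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # Initialize MLP parameters as a list of (W, b) pairs.
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

def mlp(params, x):
    # Plain feed-forward pass with tanh hidden layers and a linear output.
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def rpn_predict(trainable, prior, x, beta=1.0):
    # Randomized-prior prediction: trainable network plus a frozen prior network.
    return mlp(trainable, x) + beta * mlp(prior, x)

def fit_member(key, X, Y, sizes, steps=2000, lr=1e-3):
    # Train one ensemble member on a bootstrap resample of (X, Y);
    # the prior network is sampled once and never updated.
    k1, k2, k3 = jax.random.split(key, 3)
    idx = jax.random.randint(k1, (X.shape[0],), 0, X.shape[0])  # bootstrap indices
    Xb, Yb = X[idx], Y[idx]
    trainable, prior = init_mlp(k2, sizes), init_mlp(k3, sizes)
    loss = lambda p: jnp.mean((rpn_predict(p, prior, Xb) - Yb) ** 2)
    grad = jax.jit(jax.grad(loss))
    for _ in range(steps):  # simple full-batch gradient descent
        trainable = jax.tree_util.tree_map(
            lambda p, g: p - lr * g, trainable, grad(trainable))
    return trainable, prior

def q_ei(members, X_batch, y_best):
    # Monte Carlo multi-point EI for minimization: each ensemble member acts
    # as one posterior sample; improvement is measured against the batch minimum.
    preds = jnp.stack([rpn_predict(t, p, X_batch).min() for t, p in members])
    return jnp.mean(jnp.maximum(y_best - preds, 0.0))

# Example usage (assumed shapes: X is (n, d), Y is (n, 1)):
#   keys = jax.random.split(jax.random.PRNGKey(0), 8)
#   members = [fit_member(k, X, Y, [X.shape[1], 64, 64, 1]) for k in keys]
#   score = q_ei(members, X_candidate_batch, y_best=Y.min())

In a BO loop one would refit the ensemble after each batch of evaluations and select the next batch by maximizing q_ei over candidate designs; handling black-box constraints and multi-fidelity data, as described in the abstract, would require additional machinery not shown here.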