Scaling Bayesian Optimization With Game Theory
Bayesian Optimization
DOI:
10.48550/arxiv.2110.03790
Publication Date:
2021-01-01
AUTHORS (3)
ABSTRACT
We introduce the algorithm Bayesian Optimization (BO) with Fictitious Play (BOFiP) for the optimization of high dimensional black box functions. BOFiP decomposes the original, high dimensional, space into several sub-spaces defined by non-overlapping sets of dimensions. These sets are randomly generated at the start of the algorithm, and they form a partition of the dimensions of the original space. BOFiP searches the original space by alternating BO within the sub-spaces with information exchange among the sub-spaces to update the sub-space function evaluations. The basic idea is to distribute the high dimensional search across low dimensional sub-spaces, where each sub-space is a player in an equal interest game. At each iteration, BO produces approximate best replies that update the players' belief distribution. The sub-space searches and the information exchange alternate until a stopping condition is met. High dimensional problems are common in real applications, and several contributions in the literature have highlighted the difficulty of scaling BO to high dimensions due to the computational complexity associated with the estimation of the model hyperparameters. Such complexity grows exponentially with the problem dimension, resulting in a substantial loss of performance for most techniques as the input dimensionality increases. We compare BOFiP against state-of-the-art approaches in the field of high dimensional black box optimization. The numerical experiments cover three benchmark objective functions from 20 up to 1000 dimensions, and a neural network architecture design problem with 42 up to 911 nodes and 6 up to 92 layers, respectively, resulting in networks with 500 up to 10,000 weights. The results empirically show that BOFiP outperforms its competitors, with consistent performance across different problems and increasing input dimensionality.
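To make the decomposition idea in the abstract concrete, the sketch below illustrates the general pattern of splitting the dimensions into random, non-overlapping blocks, running a simple BO loop inside each block, and coordinating the blocks through a shared incumbent point that each block "best-replies" to. This is a minimal, hypothetical illustration only, not the authors' BOFiP implementation: the objective, the fixed GP hyperparameters, the candidate-set acquisition maximization, and all function names are assumptions made for the example.

```python
import numpy as np
from math import erf, sqrt, pi

# Hypothetical sketch of block-decomposed BO with a shared incumbent
# (inspired by, but not equivalent to, the BOFiP algorithm described above).

def sphere(x):
    """Toy high dimensional objective to minimize (assumption for the demo)."""
    return float(np.sum(x ** 2))

def rbf_kernel(A, B, ls=0.5):
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xq, ls=0.5, noise=1e-6):
    """GP posterior mean/std with fixed hyperparameters (no hyperparameter fitting)."""
    K = rbf_kernel(X, X, ls) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf_kernel(X, Xq, ls)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v ** 2, 0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.array([erf(t / sqrt(2)) for t in z]))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * Phi + sd * phi

rng = np.random.default_rng(0)
dim, n_blocks = 20, 4
blocks = np.array_split(rng.permutation(dim), n_blocks)  # random non-overlapping partition
x_best = rng.uniform(-1, 1, dim)                         # shared incumbent ("belief")
data = {b: ([], []) for b in range(n_blocks)}            # per-block evaluation history

for rnd in range(10):                                    # alternate over the sub-spaces
    for b, idx in enumerate(blocks):
        X, y = data[b]

        def partial_obj(xb):
            # evaluate the objective with the other blocks frozen at the incumbent
            x = x_best.copy()
            x[idx] = xb
            return sphere(x)

        if len(X) < 3:                                   # small initial design per block
            xb = rng.uniform(-1, 1, len(idx))
        else:                                            # BO step on candidate points
            Xa, ya = np.array(X), np.array(y)
            ym, ys = ya.mean(), ya.std() + 1e-9
            cand = rng.uniform(-1, 1, (256, len(idx)))
            mu, sd = gp_posterior(Xa, (ya - ym) / ys, cand)
            xb = cand[np.argmax(expected_improvement(mu, sd, (ya.min() - ym) / ys))]

        X.append(xb)
        y.append(partial_obj(xb))
        # approximate best reply: commit the block's best point to the shared incumbent
        x_best[idx] = X[int(np.argmin(y))]

print("final value:", sphere(x_best))
```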