Regularized boosting with an increasing coefficient magnitude stop criterion as meta-learner in hyperparameter optimization stacking ensemble

Keywords: Hyperparameter Optimization, Boosting, Ensemble Learning
DOI: 10.1016/j.neucom.2023.126516 | Publication date: 2023-07-06
ABSTRACT
In Hyperparameter Optimization (HPO), only the hyperparameter configuration with the best performance is chosen after performing several trials, discarding the effort spent training the models of all the other trials and the possibility of building an ensemble from them. Such an ensemble typically consists of simply averaging the model predictions or weighting each model by a certain probability. Recently, more sophisticated strategies, such as the Caruana method and stacking, have been proposed. On the one hand, the Caruana method performs well as an HPO ensemble, since it is not affected by multicollinearity, which is prevalent in HPO: it just computes the average over a subset of the models, selected with replacement. However, it does not benefit from the generalization power of a learning process. On the other hand, stacking methods do include such a learning procedure, since a meta-learner is required to perform the ensemble. Yet, one hardly finds advice about which meta-learner is adequate. Besides, some meta-learners may suffer from multicollinearity or need to be tuned to reduce its effects. This paper explores meta-learners for HPO that are free of hyperparameter tuning and able to benefit from the generalization power of a learning process. In this respect, boosting seems a promising meta-learner; in fact, it completely removes the effects of multicollinearity. This paper also proposes an implicit regularization of classical boosting and a novel non-parametric stop criterion, suitable specifically for boosting and designed for HPO. The synergy between these two improvements exhibits competitive predictive performance compared to existing approaches.
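To make the baseline concrete, below is a minimal sketch of the Caruana-style greedy ensemble selection with replacement that the abstract contrasts with boosting (Caruana et al., 2004). It is not the paper's proposed method; the function and variable names are illustrative, and it assumes regression-style validation predictions are already available for every HPO trial.

```python
# Hypothetical sketch of greedy ensemble selection with replacement,
# the "Caruana method" the abstract refers to. Illustrative only.
import numpy as np

def caruana_ensemble(preds, y_val, n_rounds=50):
    """Greedily pick models (repeats allowed) whose averaged
    validation predictions minimize the squared error.

    preds: array of shape (n_models, n_val), validation predictions
           of every model trained during HPO.
    y_val: array of shape (n_val,), validation targets.
    Returns per-model weights of the final averaged ensemble.
    """
    selected = []                                   # chosen indices, with repeats
    running_sum = np.zeros_like(y_val, dtype=float)  # sum of selected predictions
    for _ in range(n_rounds):
        # Tentatively add each candidate and keep the one that helps most.
        errors = [
            np.mean((((running_sum + p) / (len(selected) + 1)) - y_val) ** 2)
            for p in preds
        ]
        best = int(np.argmin(errors))
        selected.append(best)
        running_sum += preds[best]
    # Selection with replacement yields fractional weights summing to 1.
    return np.bincount(selected, minlength=len(preds)) / len(selected)
```

Because the same model can be picked repeatedly, strong models accumulate larger weights while redundant, highly correlated ones add little, which is consistent with the abstract's observation that the method is robust to the multicollinearity prevalent in HPO model libraries.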