M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation

DOI: 10.48550/arxiv.2303.00039 Publication Date: 2023-01-01
ABSTRACT
Learning to Optimize (L2O) has drawn increasing attention, as it often remarkably accelerates the optimization procedure of complex tasks by ``overfitting" to specific task types, leading to enhanced performance compared to analytical optimizers. Generally, L2O develops a parameterized optimization method (i.e., an ``optimizer") by learning from solving sample problems. This data-driven procedure yields an optimizer that can efficiently solve problems similar to those seen in training, that is, drawn from the same ``task distribution". However, such learned optimizers often struggle when new test problems arrive with a substantial deviation from the training task distribution. This paper investigates a potential solution to this open challenge: meta-training an L2O optimizer that can perform fast test-time self-adaptation to an out-of-distribution task, in only a few steps. We theoretically characterize the generalization of L2O, and further show that our proposed training framework (termed M-L2O) provably facilitates rapid adaptation by locating well-adapted initial points for the optimizer weights. Empirical observations on several classic tasks such as LASSO and Quadratic demonstrate that M-L2O converges significantly faster than vanilla L2O with only $5$ steps of adaptation, echoing our theoretical results. Code is available at https://github.com/VITA-Group/M-L2O.
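
The recipe described in the abstract has two ingredients: (1) train a parameterized optimizer by unrolling it on sample problems and backpropagating through the unrolled trajectory, and (2) at test time, adapt the optimizer's own weights for only a few gradient steps on an out-of-distribution task before using it. The PyTorch sketch below illustrates these two ingredients on random quadratic problems. It is a simplified, hypothetical reconstruction: all names (LearnedOptimizer, make_quadratic, unrolled_loss, the scale parameter) are placeholders, and the training loop is plain L2O training rather than the paper's M-L2O meta-training objective.

# Minimal sketch of the idea in the abstract: train a learned optimizer on an
# in-distribution task family, then self-adapt its weights with a few gradient
# steps on an out-of-distribution task. Hypothetical names, not the authors' code.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, UNROLL = 10, 20          # optimizee dimension, unrolled optimization horizon


class LearnedOptimizer(nn.Module):
    """Coordinate-wise learned optimizer: maps each gradient entry to an update."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, grad):                      # grad: (DIM,)
        return self.net(grad.unsqueeze(-1)).squeeze(-1)


def make_quadratic(scale):
    """Random quadratic f(x) = ||Ax - b||^2; `scale` shifts the task distribution."""
    A = scale * torch.randn(DIM, DIM)
    b = torch.randn(DIM)
    return lambda x: ((A @ x - b) ** 2).sum()


def unrolled_loss(optimizer_net, task):
    """Run the learned optimizer for UNROLL steps and return the final optimizee loss."""
    x = torch.zeros(DIM, requires_grad=True)
    for _ in range(UNROLL):
        loss = task(x)
        (g,) = torch.autograd.grad(loss, x, create_graph=True)
        x = x + optimizer_net(g)                  # update proposed by the learned optimizer
    return task(x)


opt_net = LearnedOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

# Ingredient (1): train the optimizer on the in-distribution task family (scale = 1.0).
for step in range(200):
    meta_opt.zero_grad()
    unrolled_loss(opt_net, make_quadratic(scale=1.0)).backward()
    meta_opt.step()

# Ingredient (2): test-time self-adaptation on an out-of-distribution task (scale = 5.0),
# taking only a few gradient steps on the optimizer's own weights.
ood_task = make_quadratic(scale=5.0)
adapt_opt = torch.optim.SGD(opt_net.parameters(), lr=1e-3)
for step in range(5):                             # "only a few steps", as in the abstract
    adapt_opt.zero_grad()
    loss = unrolled_loss(opt_net, ood_task)
    loss.backward()
    adapt_opt.step()
    print(f"adaptation step {step}: unrolled loss {loss.item():.3f}")

In the paper's framing, what M-L2O adds on top of this is the meta-training objective itself: the optimizer weights are trained so that this short adaptation loop lands near a well-adapted solution, which is what the sketch's plain training loop does not capture.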