Stochastic Zeroth-Order Multi-Gradient Algorithm for Multi-Objective Optimization
DOI: 10.3390/math13040627
Publication Date: 2025-02-14
AUTHORS (6)
ABSTRACT
Multi-objective optimization (MOO), which involves optimizing multiple competing objectives simultaneously, has become an important tool in machine learning. Many existing MOO algorithms assume that gradient information is readily available and use it to drive the optimization. However, when gradients are unavailable, as with black-box or non-differentiable functions, these methods become ineffective. In this paper, we propose a zeroth-order algorithm named SZMG (stochastic zeroth-order multi-gradient algorithm), which approximates the gradients of the objective functions by finite difference methods. Meanwhile, to avoid conflicts between objectives and to reduce the bias in the stochastic multi-gradient direction caused by stochastic gradients, an SGD-type method is adopted to acquire the weight parameters. Under the non-convex setting and mild assumptions, a convergence rate is established for the proposed algorithm. Simulation results demonstrate the effectiveness of the proposed SZMG algorithm.
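The abstract names two ingredients: finite-difference gradient estimates for each objective, and an SGD-type update of the weight parameters that combine those estimates into a common descent direction. The Python sketch below illustrates one plausible reading of such a step; the two-point Gaussian-smoothing estimator, the simplex projection, the step sizes alpha and beta, and the toy objectives are all illustrative assumptions, not the paper's exact SZMG update.

# Minimal sketch of a stochastic zeroth-order multi-gradient step, assuming
# two-point finite differences and a projected SGD-type weight update; the
# exact estimator, step sizes, and projection in SZMG may differ.
import numpy as np

def zo_grad(f, x, mu=1e-4, n_dirs=10, rng=None):
    """Two-point finite-difference (Gaussian smoothing) gradient estimate."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0)

def szmg_step(fs, x, w, alpha=0.01, beta=0.1):
    """One update: estimate each objective's gradient, nudge the weights by
    an SGD-type step toward a common descent direction, then move x."""
    G = np.stack([zo_grad(f, x) for f in fs])  # (m, d) gradient estimates
    d = w @ G                                  # current combined direction
    # SGD-type weight update: descend ||sum_i w_i g_i||^2 over the simplex
    w = project_simplex(w - beta * (G @ d))
    return x - alpha * (w @ G), w

# Usage on two toy competing objectives (hypothetical, for illustration)
f1 = lambda x: np.sum((x - 1.0) ** 2)
f2 = lambda x: np.sum((x + 1.0) ** 2)
x, w = np.zeros(5), np.array([0.5, 0.5])
for _ in range(200):
    x, w = szmg_step([f1, f2], x, w)
print(x, w)

Minimizing the squared norm of the weighted gradient combination over the simplex is the standard way multi-gradient methods pick a direction that does not conflict with any single objective; here that inner problem is solved approximately with one projected gradient step per iteration rather than exactly.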