On the query complexity of sampling from non-log-concave distributions
FOS: Computer and information sciences
Machine Learning (cs.LG)
Machine Learning (stat.ML)
Data Structures and Algorithms (cs.DS)
DOI: 10.48550/arxiv.2502.06200
Publication Date: 2025-02-10
AUTHORS (2)
ABSTRACT
We study the problem of sampling from a $d$-dimensional distribution with density $p(x)\propto e^{-f(x)}$, which does not necessarily satisfy good isoperimetric conditions. Specifically, we show that for any $L,M$ satisfying $LM\ge d\ge 5$, $\epsilon\in \left(0,\frac{1}{32}\right)$, and any algorithm with query accesses to the value of $f(x)$ and $\nabla f(x)$, there exists an $L$-log-smooth distribution with second moment at most $M$ such that the algorithm requires $\left(\frac{LM}{d\epsilon}\right)^{\Omega(d)}$ queries to compute a sample whose law is within $\epsilon$ in total variation distance of the target distribution. We complement the lower bound with an algorithm requiring $\left(\frac{LM}{d\epsilon}\right)^{\mathcal O(d)}$ queries, thereby characterizing the tight (up to constants in the exponent) query complexity for sampling from this family of non-log-concave distributions. Our results are in sharp contrast with the recent work of Huang et al. (COLT'24), where an algorithm with quasi-polynomial query complexity was proposed for the case $M=\mathtt{poly}(d)$. Their algorithm works under the stronger condition that all distributions along the trajectory of the Ornstein-Uhlenbeck process, starting from the target distribution, are $\mathcal O(1)$-log-smooth. We investigate this condition and prove that it is strictly stronger than the target distribution being $\mathcal O(1)$-log-smooth. Additionally, we study the condition in the context of mixtures of Gaussians. Finally, we place our results within the broader theme of ``sampling versus optimization'', as studied in Ma et al. (PNAS'19). We show that for a wide range of parameters, sampling is strictly easier than optimization by a super-exponential factor in the dimension $d$.
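As a rough illustration of how the $\left(\frac{LM}{d\epsilon}\right)^{\Theta(d)}$ query-complexity bound behaves, the sketch below evaluates the base of the bound for one admissible parameter setting. The parameter values are arbitrary illustrations (not from the paper), chosen only to satisfy the abstract's assumptions $LM\ge d\ge 5$ and $\epsilon\in(0,\frac{1}{32})$.

```python
def query_bound_base(L, M, d, eps):
    """Base LM/(d*eps) of the (LM/(d*eps))^{Theta(d)} query-complexity bound.

    Parameter regime assumed by the abstract:
      LM >= d >= 5  and  eps in (0, 1/32).
    """
    assert L * M >= d >= 5, "requires LM >= d >= 5"
    assert 0 < eps < 1 / 32, "requires eps in (0, 1/32)"
    return (L * M) / (d * eps)

# Illustrative (hypothetical) parameters: L-log-smoothness L = 2,
# second-moment bound M = 16, dimension d = 5, accuracy eps = 1/64.
base = query_bound_base(L=2.0, M=16.0, d=5, eps=1 / 64)
print(base)       # 409.6
print(base ** 5)  # base^d: the bound grows exponentially in d
```

Since the base exceeds 1 whenever $LM > d\epsilon$ (guaranteed here by $LM\ge d$ and $\epsilon < 1$), raising it to the power $d$ makes the query count grow exponentially in the dimension, which is the content of the lower bound.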