- Statistical Methods and Inference
- Statistical Methods and Bayesian Inference
- Bayesian Methods and Mixture Models
- Markov Chains and Monte Carlo Methods
- Particle physics theoretical and experimental studies
- Neutrino Physics Research
- Advanced Statistical Methods and Models
- Machine Learning and Algorithms
- Nuclear physics research studies
- Sparse and Compressive Sensing Techniques
- Random Matrices and Applications
- Optical measurement and interference techniques
- Advanced NMR Techniques and Applications
- Statistical Distribution Estimation and Applications
- Astrophysics and Cosmic Phenomena
- Advanced Combinatorial Mathematics
- Image and Signal Denoising Methods
- Opinion Dynamics and Social Influence
- Radiation Detection and Scintillator Technologies
- Control Systems and Identification
- Game Theory and Voting Systems
- Blind Source Separation Techniques
- Stochastic Gradient Optimization Techniques
- Particle accelerators and beam dynamics
- Structural Health Monitoring Techniques
University of Chicago
2022-2024
Fudan University
2019-2023
University of Washington
2019-2022
Neutrinoless double beta decay (0νββ) is a yet unobserved nuclear process that would demonstrate lepton number violation, clear evidence of physics beyond the Standard Model. The two-neutrino mode (2νββ) is allowed by the Standard Model and has been measured in numerous experiments. In this Letter, we report a measurement of the 2νββ half-life of 100Mo to the ground state of 100Ru of [7.07±0.02(stat)±0.11(syst)]×10^18 yr by the CUPID-Mo experiment. With a relative precision of ±1.6%, this is the most precise measurement to date of the 2νββ decay rate of 100Mo. In addition, we constrain higher-order...
We propose $\textsf{ScaledGD($\lambda$)}$, a preconditioned gradient descent method to tackle the low-rank matrix sensing problem when the true rank is unknown and the matrix is possibly ill-conditioned. Using overparametrized factor representations, $\textsf{ScaledGD($\lambda$)}$ starts from a small random initialization and proceeds with a specific form of damped preconditioning to combat the bad curvatures induced by overparameterization and ill-conditioning. At the expense of light computational overhead incurred...
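The damped preconditioned iteration described in this abstract can be sketched as follows. This is a minimal illustration on a symmetric low-rank factorization objective standing in for a general sensing operator; the step size, damping value, initialization scale, function name, and toy target are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def scaled_gd_damped(M, k, lam=1e-2, eta=0.2, iters=500, seed=0):
    """Illustrative ScaledGD(lambda)-style iteration for minimizing
    0.5 * ||X X^T - M||_F^2 over X in R^{n x k}, where k may exceed the
    true rank. The damped preconditioner (X^T X + lam*I)^{-1} tempers the
    flat directions created by overparameterization and ill-conditioning."""
    n = M.shape[0]
    rng = np.random.default_rng(seed)
    X = 1e-3 * rng.standard_normal((n, k))      # small random initialization
    for _ in range(iters):
        grad = (X @ X.T - M) @ X                # Euclidean gradient (up to a constant)
        X = X - eta * grad @ np.linalg.inv(X.T @ X + lam * np.eye(k))
    return X

# toy run: rank-2 PSD target (spectrally normalized), overparameterized with k = 5
rng = np.random.default_rng(1)
U = rng.standard_normal((20, 2))
M = U @ U.T
M = M / np.linalg.norm(M, 2)                    # normalize the spectral norm
X = scaled_gd_damped(M, k=5)
rel_err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
```

Note that the preconditioner acts on the small $k \times k$ factor Gram matrix, so the per-iteration overhead over plain gradient descent is light, as the abstract indicates.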
Abstract The Bradley–Terry–Luce (BTL) model is a benchmark model for pairwise comparisons between individuals. Despite recent progress on the first-order asymptotics of several popular procedures, the understanding of uncertainty quantification in the BTL model remains largely incomplete, especially when the underlying comparison graph is sparse. In this paper, we fill this gap by focusing on two estimators that have received much recent attention: the maximum likelihood estimator (MLE) and the spectral estimator. Using a unified proof strategy,...
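As an illustration of the second of the two estimators mentioned, a rank-centrality-style spectral estimator for BTL scores can be sketched as below. The function name, the lazy-walk normalization, and the toy scores are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def btl_spectral_scores(wins):
    """Sketch of a spectral (rank-centrality-style) estimator for the BTL
    model. wins[i, j] counts how often item i beat item j. The stationary
    distribution of an induced Markov chain, which moves from i to j in
    proportion to how often j beats i, recovers the BTL scores up to scaling."""
    n = wins.shape[0]
    total = wins + wins.T                             # comparisons per pair
    with np.errstate(divide="ignore", invalid="ignore"):
        phat = np.where(total > 0, wins / total, 0.0)  # empirical win rates
    d = max(1, int((total > 0).sum(axis=1).max()))     # degree normalization
    P = phat.T / d                                     # P[i, j]: jump i -> j if j beats i
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))           # lazy self-loops keep rows stochastic
    pi = np.full(n, 1.0 / n)
    for _ in range(2000):                              # power iteration to stationarity
        pi = pi @ P
    return pi / pi.sum()

# toy example: true BTL scores w, with many comparisons per pair
w = np.array([4.0, 2.0, 1.0])
wins = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            wins[i, j] = 1000 * w[i] / (w[i] + w[j])
scores = btl_spectral_scores(wins)
```

With exact pairwise win rates and a complete comparison graph, the chain is reversible with stationary distribution proportional to the true scores, so `scores` recovers `w / w.sum()`.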
CUPID-Mo, located in the Laboratoire Souterrain de Modane (France), was a demonstrator for the next-generation $0\nu\beta\beta$ decay experiment, CUPID. It consisted of an array of 20 enriched Li$_{2}$$^{100}$MoO$_4$ bolometers and Ge light detectors, and has demonstrated that the technology of scintillating bolometers with particle identification capabilities is mature. Furthermore, CUPID-Mo can inform and validate the background prediction for CUPID. In this paper, we present a detailed model of the CUPID-Mo backgrounds. This model is able to describe well...
Consider the heteroscedastic nonparametric regression model with random design \begin{equation*}Y_{i}=f(X_{i})+V^{1/2}(X_{i})\varepsilon _{i},\quad i=1,2,\ldots ,n,\end{equation*} where $f(\cdot )$ and $V(\cdot )$ are $\alpha $- and $\beta $-Hölder smooth, respectively. We show that the minimax rate of estimating $V(\cdot )$ under both local and global squared risks is of the order \begin{equation*}n^{-\frac{8\alpha \beta }{4\alpha \beta +2\alpha +\beta }}\vee n^{-\frac{2\beta }{2\beta +1}},\end{equation*} where $a\vee b:=\max \{a,b\}$ for...
The Gaussian-smoothed optimal transport (GOT) framework, pioneered by Goldfeld et al. and followed up by a series of subsequent papers, has quickly caught attention among researchers in statistics, machine learning, information theory, and related fields. One key observation made therein is that, by adapting to the GOT framework instead of its unsmoothed counterpart, the curse of dimensionality for using the empirical measure to approximate the true data-generating distribution can be lifted. The current paper shows that...
We establish exponential inequalities for a class of V-statistics under strong mixing conditions. Our theory is developed via a novel kernel expansion based on random Fourier features and the use of the probabilistic method. This type of expansion is new and useful for handling many notorious classes of kernels.
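The random Fourier feature construction underlying the kernel expansion mentioned above can be sketched as follows; this is the standard Rahimi–Recht approximation of a Gaussian kernel by an explicit finite-dimensional feature map, shown here only to convey the spirit of the expansion, with the function name and toy parameters being illustrative assumptions.

```python
import numpy as np

def rff_features(X, D=2000, sigma=1.0, seed=0):
    """Random Fourier features z(x) with E[z(x) . z(y)] equal to the
    Gaussian kernel exp(-||x - y||^2 / (2 sigma^2)). Frequencies are
    drawn from N(0, I / sigma^2) and phases uniformly on [0, 2*pi)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, D)) / sigma    # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, D)       # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# compare the approximate and exact Gaussian kernels on random points
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
Z = rff_features(X, D=5000, sigma=1.0)
K_approx = Z @ Z.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq / 2.0)
max_err = np.abs(K_approx - K_exact).max()
```

Averaging over `D` features drives the entrywise approximation error down at the familiar Monte Carlo rate of order $1/\sqrt{D}$.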
The Convex Gaussian Min–Max Theorem (CGMT) has emerged as a prominent theoretical tool for analyzing the precise stochastic behavior of various statistical estimators in the so-called high-dimensional proportional regime, where the sample size and the signal dimension are of the same order. However, a well-recognized limitation of the existing CGMT machinery rests in its stringent requirement of exact Gaussianity of the design matrix, therefore rendering the obtained asymptotics largely a specific Gaussian theory for these important models. This paper...
Empirical Bayes provides a powerful approach to learning and adapting to latent structure in data. Theory and algorithms for empirical Bayes have a rich literature for sequence models, but are less understood in settings where latent variables and data interact through more complex designs. In this work, we study empirical Bayes estimation of an i.i.d. prior in Bayesian linear models via the nonparametric maximum likelihood estimator (NPMLE). We introduce a system of gradient flow equations for optimizing the marginal log-likelihood, jointly over the posterior...
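For intuition about the NPMLE objective, the simpler sequence-model case can be sketched with a fixed-grid EM iteration (the classical Kiefer–Wolfowitz baseline). This is not the gradient-flow algorithm the abstract introduces; the grid, iteration count, and toy two-point prior are illustrative assumptions.

```python
import numpy as np

def npmle_em(y, grid, iters=500):
    """Fixed-grid EM sketch of the NPMLE for an i.i.d. prior G in the
    Gaussian location model y_i ~ N(theta_i, 1), theta_i ~ G. G is
    discretized on `grid` and only its weights are optimized."""
    # likelihood matrix L[i, j] = phi(y_i - grid_j)
    L = np.exp(-0.5 * (y[:, None] - grid[None, :]) ** 2) / np.sqrt(2 * np.pi)
    w = np.full(grid.size, 1.0 / grid.size)        # uniform initial weights
    for _ in range(iters):
        post = L * w                                # unnormalized posteriors
        post /= post.sum(axis=1, keepdims=True)     # E-step: responsibilities
        w = post.mean(axis=0)                       # M-step: update prior weights
    return w

def marginal_loglik(y, grid, w):
    """Marginal log-likelihood of the discretized prior with weights w."""
    L = np.exp(-0.5 * (y[:, None] - grid[None, :]) ** 2) / np.sqrt(2 * np.pi)
    return np.log(L @ w).sum()

# toy example: true prior is a two-point mixture at -2 and +2
rng = np.random.default_rng(0)
theta = rng.choice([-2.0, 2.0], size=2000)
y = theta + rng.standard_normal(2000)
grid = np.linspace(-6.0, 6.0, 121)
w = npmle_em(y, grid)
ll_fit = marginal_loglik(y, grid, w)
ll_unif = marginal_loglik(y, grid, np.full(grid.size, 1.0 / grid.size))
```

Each EM step is guaranteed not to decrease the marginal log-likelihood, which is the same objective the gradient-flow approach optimizes over richer (non-grid) parametrizations.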
In the Gaussian sequence model Y=μ+ξ, we study the likelihood ratio test (LRT) for testing H0:μ=μ0 versus H1:μ∈K, where μ0∈K and K is a closed convex set in Rn. In particular, we show that under the null hypothesis, a normal approximation holds for the log-likelihood ratio statistic for a general pair (μ0,K) in the high-dimensional regime where the estimation error of the associated least squares estimator diverges in an appropriate sense. The normal approximation further leads to a precise characterization of the power behavior of the LRT in this regime. These characterizations show that the power behavior of the LRT is in general nonuniform...
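The statistic studied above can be made concrete for one simple convex set. The sketch below takes K to be the nonnegative orthant with μ0 = 0, where the least squares projection is coordinatewise clipping; the normalization constants follow from moments of the positive part of a standard normal, and the simulation sizes are illustrative choices.

```python
import numpy as np

def lrt_statistic(y, mu0=None):
    """2 log LR = ||Y - mu0||^2 - ||Y - Pi_K(Y)||^2 in the Gaussian
    sequence model, for K = nonnegative orthant (projection = clipping)."""
    if mu0 is None:
        mu0 = np.zeros_like(y)
    proj = np.maximum(y, 0.0)                  # projection onto K
    return np.sum((y - mu0) ** 2) - np.sum((y - proj) ** 2)

# under H0: mu = 0, each coordinate contributes max(Y_i, 0)^2, with mean 1/2
# and variance 5/4, so (T - n/2) / sqrt(5n/4) is approximately N(0, 1)
rng = np.random.default_rng(0)
n, reps = 500, 4000
stats = np.array([lrt_statistic(rng.standard_normal(n)) for _ in range(reps)])
z = (stats - n / 2) / np.sqrt(5 * n / 4)
```

In this example the least squares error under the null grows like n/2, matching the abstract's regime where that error diverges and the normal approximation kicks in.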
Distance covariance is a popular dependence measure for two random vectors $X$ and $Y$ of possibly different dimensions and types. Recent years have witnessed concentrated efforts in the literature to understand the distributional properties of the sample distance covariance in the high-dimensional setting, with an exclusive emphasis on the null case that $X$ and $Y$ are independent. This paper derives the first non-null central limit theorem for the sample distance covariance, and more generally for the (Hilbert-Schmidt) kernel distance covariance in high dimensions, primarily in the Gaussian case. The new...
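The sample statistic this abstract studies can be computed directly from double-centered pairwise distance matrices (the classical Székely–Rizzo V-statistic form); the helper name and toy data below are illustrative assumptions.

```python
import numpy as np

def dcov_sq(X, Y):
    """Sample squared distance covariance of paired samples X (n x p)
    and Y (n x q), via double-centered pairwise distance matrices."""
    def centered(D):
        return (D - D.mean(axis=0, keepdims=True)
                  - D.mean(axis=1, keepdims=True) + D.mean())
    a = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    b = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    return (centered(a) * centered(b)).mean()

# a dependent pair yields a much larger value than an independent pair
rng = np.random.default_rng(0)
n = 300
X = rng.standard_normal((n, 2))
Y_dep = X + 0.1 * rng.standard_normal((n, 2))   # strongly dependent on X
Y_ind = rng.standard_normal((n, 2))             # independent of X
d_dep = dcov_sq(X, Y_dep)
d_ind = dcov_sq(X, Y_ind)
```

The non-null regime of the paper corresponds to the first case, where the statistic concentrates around a strictly positive population value rather than shrinking toward zero.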
The current experiments searching for neutrinoless double-$\beta$ ($0\nu\beta\beta$) decay also collect large statistics of Standard-Model-allowed two-neutrino ($2\nu\beta\beta$) decay events. These can be used to search for Beyond Standard Model (BSM) physics via $2\nu\beta\beta$ spectral distortions. $^{100}$Mo has a natural advantage due to its relatively short half-life, allowing higher statistics at equal exposures compared to the other isotopes. We demonstrate the potential of the dual read-out bolometric technique exploiting the exposure...
In the Gaussian sequence model $Y= \theta_0 + \varepsilon$ in $\mathbb{R}^n$, we study the fundamental limit of approximating the signal $\theta_0$ by a class $\Theta(d,d_0,k)$ of (generalized) splines with free knots. Here $d$ is the degree of the spline, $d_0$ is the order of differentiability at each inner knot, and $k$ is the maximal number of pieces. We show that, given any integer $d\geq 0$ and $d_0\in\{-1,0,\ldots,d-1\}$, the minimax rate of estimation over $\Theta(d,d_0,k)$ exhibits the following phase transition: \begin{equation*} \begin{aligned}...
Le Cam’s third/contiguity lemma is a fundamental probabilistic tool to compute the limiting distribution of a given statistic Tn under a nonnull sequence of probability measures {Qn}, provided that its limiting distribution under the null sequence {Pn} is available, and the log likelihood ratio {log(dQn/dPn)} has a distributional limit. Despite its wide-spread applications in low-dimensional statistical problems, the stringent requirement of a distributional limit makes it challenging, or even impossible, to use in many modern high-dimensional problems. This paper provides...
We establish exponential inequalities and Cramér-type moderate deviation theorems for a class of V-statistics under strong mixing conditions. Our theory is developed via a kernel expansion based on random Fourier features. This type of expansion is new and useful for handling many notorious classes of kernels. While the theory has a number of applications, we apply it to lasso-type semiparametric regression estimation and high-dimensional multiple hypothesis testing.
The half-life of 100Mo relative to the 2ν2β decay to the ground state of 100Ru was measured as T1/2 = (6.99±0.15) × 10^18 yr with the help of lithium molybdate scintillating bolometers enriched in 100Mo, in the EDELWEISS-III low-background set-up at the Modane underground laboratory. This is the most accurate value to date for 100Mo.