- Sparse and Compressive Sensing Techniques
- Advanced Optimization Algorithms Research
- Stochastic Gradient Optimization Techniques
- Machine Learning and Algorithms
- Optimization and Variational Analysis
- Matrix Theory and Algorithms
- Statistical Methods and Inference
- Seismic Imaging and Inversion Techniques
- Gaussian Processes and Bayesian Inference
- Photoacoustic and Ultrasonic Imaging
- Image and Signal Denoising Methods
- Machine Learning and Data Classification
- Privacy-Preserving Technologies in Data
- Neural Networks and Applications
- Probabilistic and Robust Engineering Design
- Model Reduction and Neural Networks
- Medical Imaging Techniques and Applications
- Numerical Methods in Inverse Problems
- Microwave Imaging and Scattering Analysis
- Electrical and Bioimpedance Tomography
- Geological and Geophysical Studies
- Advanced Bandit Algorithms Research
- Advanced Control Systems Optimization
- Advanced X-ray Imaging Techniques
- Quantum Computing Algorithms and Architecture
University of British Columbia
2012-2024
QLT (Canada)
2022
1QBit
2022
Huawei Technologies (China)
2021
University of California, Davis
2015-2016
Science and Technology Facilities Council
2007
Stanford University
2007
Mercy Medical Center
2005
Argonne National Laboratory
2003
Cornell University
1994
The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are...
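A minimal sketch of the curve in question, in standard basis pursuit notation (assumed here rather than quoted from the paper): for a measurement matrix $A$ and observations $b$, define
\[
\phi(\tau) \;=\; \min_{x}\ \|Ax-b\|_2 \quad \text{subject to} \quad \|x\|_1 \le \tau .
\]
The BPDN solution with misfit level $\sigma$ then corresponds to a root of the scalar equation $\phi(\tau)=\sigma$, which is what the root-finding algorithm described above targets.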
Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling a subset of the terms in the sum. These methods can make great progress initially, but often slow as they approach a solution. In contrast, full-gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both...
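For orientation, the sum referred to here has the generic finite-sum form (notation assumed):
\[
\min_x\ f(x) \;=\; \frac{1}{m}\sum_{i=1}^{m} f_i(x),
\]
where an incremental-gradient step uses $\nabla f_i$ for a sampled subset of the indices, while a full-gradient step uses $\nabla f$ itself; a hybrid method can start with small samples and enlarge them as the iterates approach a solution.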
The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. This is an extension of the single-measurement-vector (SMV) problem widely studied in compressed sensing. We analyze the recovery properties for two types of recovery algorithms. First, we show that recovery using sum-of-norm minimization cannot exceed the uniform recovery rate of sequential SMV $\ell_1$ minimization, and that there are problems that can be solved with one approach but not the other. Second, the performance...
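As a sketch of the two approaches being compared (notation assumed): with measurement matrix $A$ and the multiple measurement vectors collected as the columns of $B$, sum-of-norm (joint) recovery solves
\[
\min_{X}\ \sum_{j} \|X_{j,:}\|_2 \quad \text{subject to} \quad AX = B,
\]
whereas sequential SMV recovery solves $\min_x \|x\|_1$ subject to $Ax = b_k$ separately for each column $b_k$ of $B$.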
We study recovery conditions of weighted $\ell_1$ minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that if at least 50% of the (partial) support information is accurate, then weighted $\ell_1$ minimization is stable and robust under weaker sufficient conditions than the analogous conditions for standard $\ell_1$ minimization. Moreover, weighted $\ell_1$ minimization provides better upper bounds on...
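A sketch of the weighted formulation (notation assumed): given a support estimate $\widetilde{T}$ and a weight $0 \le \omega \le 1$, weighted $\ell_1$ recovery solves
\[
\min_x\ \sum_i w_i |x_i| \quad \text{subject to} \quad \|Ax-b\|_2 \le \epsilon, \qquad
w_i = \begin{cases} \omega & i \in \widetilde{T},\\ 1 & \text{otherwise,} \end{cases}
\]
so that entries believed to lie in the support are penalized less than the rest.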
The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a general class of sparsifying formulations. For several common types of sparsity we provide applications, along with details on how to apply the algorithm, and experimental results.
There has been significant recent work on the theory and application of randomized coordinate descent algorithms, beginning with the work of Nesterov [SIAM J. Optim., 22(2), 2012], who showed that a random-coordinate selection rule achieves the same convergence rate as the Gauss-Southwell rule. This result suggests that we should never use the Gauss-Southwell rule, as it is typically much more expensive than random selection. However, the empirical behaviours of these algorithms contradict this theoretical result: in applications where...
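For reference, the two selection rules being compared can be written as (notation assumed)
\[
i_k \sim \mathrm{Uniform}\{1,\dots,n\}
\qquad \text{versus} \qquad
i_k \in \arg\max_i\ |\nabla_i f(x_k)|,
\]
the latter being the Gauss-Southwell rule, which requires evaluating (or tracking) the full gradient in order to pick the coordinate with the largest potential improvement.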
The goal of minimizing misclassification error on a training set is often just one of several real-world goals that might be defined on different datasets. For example, we may require a classifier to also make positive predictions at some specified rate for some subpopulation (fairness), or to achieve a specified empirical recall. Other real-world goals include reducing churn with respect to a previously deployed model, and stabilizing online training. In this paper we propose handling multiple goals on multiple datasets by training with dataset constraints, using the ramp penalty...
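A hedged sketch of the constrained training problem described here (symbols are illustrative, not the paper's): minimize the training error subject to rate constraints evaluated on auxiliary datasets,
\[
\min_{\theta}\ \widehat{\mathrm{err}}(\theta; D_{\mathrm{train}})
\quad \text{subject to} \quad
\widehat{\mathrm{rate}}(\theta; D_j) \ge c_j, \quad j = 1,\dots,J,
\]
with the discontinuous $0/1$ indicators inside these empirical quantities replaced by the ramp function $r(z)=\max\{0,\min\{1,1-z\}\}$ to make the problem tractable.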
The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multiplier. Moreover, the threshold is inversely related to the Lagrange multiplier. We use this result to generalize an exact regularization result of Ferris and Mangasarian [Appl. Math. Optim., 23 (1991), pp. 266–273] involving a linearized selection problem. We also use it to derive necessary and sufficient conditions for exact penalization, similar to those obtained by...
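In symbols (illustrative notation): for a convex objective $f$ and convex regularizer $\phi$, the regularized problem
\[
\min_x\ f(x) + \delta\,\phi(x)
\]
is an exact regularization of $\min_x f(x)$ if its solutions also solve the original problem for every $\delta \in (0,\bar\delta\,]$ with some threshold $\bar\delta > 0$; the associated selection problem minimizes $\phi$ over the original solution set, and exactness is equivalent to that problem admitting a Lagrange multiplier, as stated above.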
Many seismic exploration techniques rely on the collection of massive data volumes that are mined for information during processing. This approach has been extremely successful, but current efforts toward higher-resolution images in increasingly complicated regions of the Earth continue to reveal fundamental shortcomings in our typical workflows. The "curse" of dimensionality is the main roadblock, and is exemplified by Nyquist's sampling criterion, which disproportionately strains acquisition and processing systems...
Federated learning is an emerging decentralized machine learning scheme that allows multiple data owners to work collaboratively while ensuring data privacy. The success of federated learning depends largely on the participation of data owners. To sustain and encourage data owners' participation, it is crucial to fairly evaluate the quality of the data they provide and reward them correspondingly. The federated Shapley value, recently proposed by Wang et al. [Federated Learning, 2020], is a measure for data value under the federated learning framework that satisfies many desired properties for data valuation....
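For context, the classical Shapley value that this line of work adapts to the federated setting assigns to owner $i$ the average marginal contribution (with $v$ a utility function, e.g. model performance, and $N$ the set of $n$ owners):
\[
\mathrm{SV}_i \;=\; \sum_{S \subseteq N\setminus\{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl[v(S\cup\{i\}) - v(S)\bigr].
\]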
Non-negative tensor factorization (NTF) is a technique for computing a parts-based representation of high-dimensional data. NTF excels at exposing latent structures in datasets, and at finding good low-rank approximations to the data. We describe an approach for computing the NTF of a dataset that relies only on iterative linear-algebra techniques and is comparable in cost to the non-negative matrix factorization (NMF). (The better-known NMF is a special case of NTF and is also handled by our implementation.) Some important features of our implementation include mechanisms for encouraging...
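As a sketch (assuming the usual CP-style non-negative factorization of a three-way array $\mathcal{T}$; the paper's exact model may differ in details):
\[
\min_{A,\,B,\,C \,\ge\, 0}\ \Bigl\|\mathcal{T} - \sum_{r=1}^{R} a_r \circ b_r \circ c_r\Bigr\|_F^2 ,
\]
where $\circ$ denotes the outer product and $a_r, b_r, c_r$ are the columns of the non-negative factors; restricting to two factors recovers the NMF special case mentioned above.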
Gauge functions significantly generalize the notion of a norm, and gauge optimization, as defined by [R. M. Freund, Math. Programming, 38 (1987), pp. 47--67], seeks an element of a convex set that is minimal with respect to a gauge function. This conceptually simple problem can be used to model a remarkable array of useful problems, including a special case of conic optimization and related problems that arise in machine learning and signal processing. The structure of these problems allows for a special kind of duality framework. This paper explores the duality framework proposed...
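For orientation (standard definitions, not quoted from the paper): a gauge is a convex, nonnegative, positively homogeneous function $\kappa$ with $\kappa(0)=0$, and the gauge optimization problem is
\[
\min_x\ \kappa(x) \quad \text{subject to} \quad x \in \mathcal{C}
\]
for a convex set $\mathcal{C}$; the duality framework is built from the polar gauge $\kappa^{\circ}(y) = \sup\{\langle x, y\rangle : \kappa(x) \le 1\}$.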
We propose a relaxation scheme for mathematical programs with equilibrium constraints (MPECs). In contrast to previous approaches, our relaxation is two-sided: both the complementarity condition and the nonnegativity constraints are relaxed. The proposed update rule guarantees (under certain conditions) that the sequence of relaxed subproblems will maintain a strictly feasible interior---even in the limit. We show how the relaxation scheme can be used in combination with a standard interior-point method to achieve superlinear convergence. Numerical results on the MacMPEC test...
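Roughly, and in illustrative notation: where an MPEC imposes the complementarity system $0 \le x \perp s \ge 0$, a two-sided relaxation with parameters $\theta_c, \theta_b > 0$ replaces it by
\[
x_i s_i \le \theta_c, \qquad x_i \ge -\theta_b, \qquad s_i \ge -\theta_b \quad \text{for all } i,
\]
so that both the complementarity condition and the nonnegativity bounds acquire slack, leaving the relaxed feasible set with a nonempty interior.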
Geophysical inverse problems typically involve a trade-off between data misfit and some prior model. Pareto curves trace the optimal trade-off between these two competing aims. These curves are used commonly in problems with two-norm priors, in which case they are plotted on a log-log scale and known as L-curves. For other priors, such as the sparsity-promoting one-norm prior, Pareto curves remain relatively unexplored. We show how these curves lead to new insights into one-norm regularization. First, we confirm the theoretical properties of smoothness and convexity of these curves from a stylized geophysical...
Many fields of physics use quantum Monte Carlo techniques, but struggle to estimate dynamic spectra via the analytic continuation of imaginary-time data. One of the most ubiquitous approaches is the maximum entropy method (MEM). We supply a dual Newton optimization algorithm to be used within the MEM and provide bounds for the algorithm's error. The MEM is typically used with Bryan's controversial algorithm [Rothkopf, "Bryan's Maximum Entropy Method" Data 5.3 (2020)]. We present new theoretical issues that are not yet in the literature. Our algorithm has...
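For context, the MEM estimate referred to here is typically posed as follows (a standard formulation, assumed rather than quoted): recover a non-negative spectral function $A(\omega)$ from imaginary-time data $G(\tau) = \int K(\tau,\omega)\,A(\omega)\,d\omega$ by solving
\[
\min_{A \ge 0}\ \tfrac{1}{2}\chi^2(A) - \alpha S(A),
\]
where $\chi^2$ measures the misfit to the Monte Carlo data and $S$ is an entropy relative to a default model; a dual Newton method addresses this inner optimization.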
Detrital petrochronology is a powerful method of characterizing sediment and, potentially, sediment sources. The recently developed Tucker-1 decomposition holds promise for using detrital data to identify both sediment-source characteristics and the proportions in which sources are present in sink samples, even when sources are unknown or unavailable for sampling. However, the correlation between endmembers and lithological or sedimentary processes has not been established. Herein we present a case study of a multivariate geochemical data set from zircons...