Qiuwei Li

ORCID: 0000-0002-2306-6649
Research Areas
  • Sparse and Compressive Sensing Techniques
  • Blind Source Separation Techniques
  • Advanced Optimization Algorithms Research
  • Tensor decomposition and applications
  • Microwave Imaging and Scattering Analysis
  • Face and Expression Recognition
  • Matrix Theory and Algorithms
  • Stochastic Gradient Optimization Techniques
  • Indoor and Outdoor Localization Technologies
  • Model Reduction and Neural Networks
  • Advanced Image Processing Techniques
  • Spine and Intervertebral Disc Pathology
  • Dam Engineering and Safety
  • Liver Disease Diagnosis and Treatment
  • Medical Image Segmentation Techniques
  • Advanced Vision and Imaging
  • Hepatitis C virus research
  • Liver Disease and Transplantation
  • Osteoarthritis Treatment and Mechanisms
  • Target Tracking and Data Fusion in Sensor Networks
  • Photoacoustic and Ultrasonic Imaging
  • Tendon Structure and Treatment
  • Microbial Applications in Construction Materials
  • Reinforcement Learning in Robotics
  • Extracellular vesicles in disease

Jiangsu University
2018-2025

Alibaba Group (United States)
2022-2024

Bellevue Hospital Center
2022-2024

First Affiliated Hospital of Anhui Medical University
2023-2024

Anhui Medical University
2023-2024

Wannan Medical College
2022-2023

First Affiliated Hospital of Wannan Medical College
2022-2023

Alibaba Group (Cayman Islands)
2022

University of California, Los Angeles
2017-2021

Renmin University of China
2021

This paper considers the minimization of a general objective function $f(X)$ over the set of rectangular $n\times m$ matrices that have rank at most $r$. To reduce the computational burden, we factorize the variable $X$ into a product of two smaller matrices and optimize over these factors instead of $X$. Despite the resulting nonconvexity, recent studies in matrix completion and sensing have shown that the factored problem has no spurious local minima and obeys the so-called strict saddle property (the objective has directional negative curvature at all critical points but...

10.1109/tsp.2018.2835403 article EN publisher-specific-oa IEEE Transactions on Signal Processing 2018-05-10
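A minimal numpy sketch of the factored approach described above, applied to a toy matrix-sensing least-squares objective; the dimensions, step size, and random instance are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

# Toy matrix-sensing instance: f(X) = 1/(2p) * sum_i (<A_i, X> - y_i)^2,
# minimized over rank-r matrices via the factorization X = U V^T.
rng = np.random.default_rng(0)
n, m, r, p = 12, 10, 2, 400

X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
X_true /= np.linalg.norm(X_true, 2)           # normalize the spectral norm to 1
A = rng.standard_normal((p, n, m))
y = np.einsum('pij,ij->p', A, X_true)

def grad_f(X):
    """Gradient of the averaged least-squares objective at X."""
    resid = np.einsum('pij,ij->p', A, X) - y
    return np.einsum('p,pij->ij', resid, A) / p

# Gradient descent on the factors (U, V) instead of on X itself.
U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((m, r))
step = 0.1
for _ in range(1000):
    G = grad_f(U @ V.T)
    U, V = U - step * G @ V, V - step * G.T @ U

print("relative recovery error:",
      np.linalg.norm(U @ V.T - X_true) / np.linalg.norm(X_true))
```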

Abstract This work considers two popular minimization problems: (i) the minimization of a general convex function $f(X)$ with the domain being positive semi-definite matrices, and (ii) the minimization of $f(X)$ regularized by the matrix nuclear norm $\|X\|_{*}$ over general matrices. Despite their optimal statistical performance in the literature, these optimization problems have high computational complexity even when solved using tailored fast solvers. To develop faster and more scalable algorithms, we follow the proposal of Burer and Monteiro to factor the low-rank...

10.1093/imaiai/iay003 article EN Information and Inference A Journal of the IMA 2018-02-08
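A hedged sketch of the Burer-Monteiro-style factorization for the PSD-constrained case: the parameterization X = U U^T makes the semidefinite constraint implicit. The quadratic objective below stands in for a general convex f and is an illustrative assumption.

```python
import numpy as np

# Factorization X = U U^T removes the PSD constraint; here f(X) = 0.5*||X - M||_F^2
# for a given PSD, rank-r target M (a stand-in for a general convex f).
rng = np.random.default_rng(1)
n, r = 20, 3

B = rng.standard_normal((n, r))
M = B @ B.T                                   # PSD, rank-r target
M /= np.linalg.norm(M, 2)

def grad_f(X):
    return X - M                              # gradient of the toy objective

U = 0.1 * rng.standard_normal((n, r))         # X = U U^T is PSD by construction
step = 0.1
for _ in range(1000):
    U = U - step * 2.0 * grad_f(U @ U.T) @ U  # chain rule: d f(UU^T)/dU = 2*grad_f(X)*U

print("final objective 0.5*||UU^T - M||_F^2:", 0.5 * np.linalg.norm(U @ U.T - M) ** 2)
```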

This paper deals with alternating optimization of the sensing matrix and the sparsifying dictionary for compressed sensing systems. Under the same framework proposed by J. M. Duarte-Carvajalino and G. Sapiro, a novel algorithm for optimal sensing matrix design is derived with an optimized dictionary embedded. A closed-form solution to the problem is obtained. A new measure for optimizing the dictionary is developed, along with an algorithm for solving the corresponding problem. Experiments are carried out on synthetic data and real images, which demonstrate the promising performance of the algorithms and the superiority of the proposed CS system...

10.1109/tsp.2015.2399864 article EN IEEE Transactions on Signal Processing 2015-02-03
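A simplified numpy sketch of the sensing-matrix-design idea for a fixed sparsifying dictionary: drive the Gram matrix of the equivalent dictionary D = Phi @ Psi toward the identity by gradient descent. This surrogate objective and the plain gradient loop are assumptions for illustration, not the closed-form solution derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 20, 64, 100            # measurements, signal dimension, dictionary atoms

Psi = rng.standard_normal((n, k))
Psi /= np.linalg.norm(Psi, axis=0)              # unit-norm dictionary atoms
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # initial sensing matrix

def gram_misfit(Phi):
    D = Phi @ Psi                               # equivalent dictionary
    return np.linalg.norm(D.T @ D - np.eye(k)) ** 2

print("initial Gram misfit:", gram_misfit(Phi))
step = 5e-4
for _ in range(300):
    D = Phi @ Psi
    E = D.T @ D - np.eye(k)
    Phi -= step * 4.0 * D @ E @ Psi.T           # gradient of ||D^T D - I||_F^2 w.r.t. Phi
print("final Gram misfit:  ", gram_misfit(Phi))
```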

A promising trend in deep learning replaces traditional feedforward networks with implicit networks. Unlike feedforward networks, implicit networks solve a fixed-point equation to compute inferences. Solving for the fixed point varies in complexity, depending on the provided data and an error tolerance. Importantly, implicit networks may be trained with fixed memory costs, in stark contrast to feedforward networks, whose memory requirements scale linearly with depth. However, there is no free lunch --- backpropagation through implicit networks often requires solving a costly Jacobian-based equation arising from the implicit function theorem. We...

10.1609/aaai.v36i6.20619 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2022-06-28
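A minimal sketch of what "inference" means for an implicit layer: the output is the fixed point of a learned map, found here by plain iteration. The layer form z = tanh(W z + U x + b), the sizes, and the contraction scaling are illustrative assumptions; real implementations use accelerated solvers and implicit or Jacobian-free backpropagation for training.

```python
import numpy as np

rng = np.random.default_rng(3)
d, p = 16, 8

W = rng.standard_normal((d, d))
W *= 0.9 / np.linalg.norm(W, 2)       # ||W||_2 < 1 makes the map a contraction
U = rng.standard_normal((d, p))
b = rng.standard_normal(d)
x = rng.standard_normal(p)

def fixed_point(x, tol=1e-8, max_iter=500):
    """Solve z = tanh(W z + U x + b) by fixed-point iteration up to tolerance tol."""
    z = np.zeros(d)
    for k in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next, k + 1
        z = z_next
    return z, max_iter

z_star, iters = fixed_point(x)
print("converged in", iters, "iterations; residual:",
      np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x + b)))
```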

10.1016/j.acha.2018.09.005 article EN publisher-specific-oa Applied and Computational Harmonic Analysis 2018-09-22

This paper considers general rank-constrained optimization problems that minimize a general objective function $f(X)$ over the set of rectangular $n\times m$ matrices that have rank at most $r$. To tackle the rank constraint and also to reduce the computational burden, we factorize $X$ into $UV^{\mathrm{T}}$...

10.1109/tit.2021.3049171 article EN publisher-specific-oa IEEE Transactions on Information Theory 2021-01-05

Our study offers a quantitative framework for microbial-induced calcium carbonate precipitation (MICP) to improve the properties of recycled aggregate concrete (RAC). In this regard, the marine alkalophilic bacterium Bacillus sp. B6 was employed, and its growth and mineralization efficiency under seawater conditions were investigated. Optimization of MICP was achieved with different nutrient sources and bacterial introduction methods (dip and spray). The properties of the treated recycled aggregates (RA) were determined by using scanning electron...

10.3389/fmats.2023.1131673 article EN cc-by Frontiers in Materials 2023-02-20

This paper considers general rank-constrained optimization problems that minimize a general objective function $f(X)$ over the set of rectangular $n\times m$ matrices that have rank at most $r$. To tackle the rank constraint and also to reduce the computational burden, we factorize $X$ into $UV^T$, where $U$ and $V$ are $n\times r$ and $m\times r$ matrices, respectively, and then optimize over the small matrices $U$ and $V$. We characterize the global geometry of the nonconvex factored problem and show that the corresponding objective satisfies the robust strict saddle property as long as the original $f$...

10.48550/arxiv.1703.01256 preprint EN other-oa arXiv (Cornell University) 2017-01-01

This paper considers the minimization of a general objective function f(X) over the set of non-square n × m matrices where the optimal solution X* is low-rank. To reduce the computational burden, we factorize the variable X into a product of two smaller matrices and optimize over these instead of X. We analyze the global geometry for a general yet well-conditioned objective whose restricted strong convexity and smoothness constants are comparable. In particular, we show that the reformulated objective has no spurious local minima and obeys the strict saddle property. These geometric...

10.1109/globalsip.2017.8309166 article EN 2017-11-01

The (global) Lipschitz smoothness condition is crucial in establishing the convergence theory for most optimization methods. Unfortunately, many machine learning and signal processing problems are not Lipschitz smooth. This motivates us to generalize the concept to a relative smoothness condition, which is satisfied by any finite-order polynomial objective function. Further, this work develops new Bregman-divergence based algorithms that are guaranteed to converge to a second-order stationary point of any relatively smooth problem. In...

10.48550/arxiv.1904.09712 preprint EN other-oa arXiv (Cornell University) 2019-01-01
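A hedged sketch of a Bregman (mirror-descent-type) step for a relatively smooth problem. The specific objective f(x) = 0.25*(||x||^2 - 1)^2 and the polynomial kernel h(x) = 0.25*||x||^4 + 0.5*||x||^2 are illustrative assumptions: f is not globally Lipschitz smooth but is smooth relative to h, and each step solves grad_h(x_next) = grad_h(x) - lam*grad_f(x).

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam = 10, 0.3

def grad_f(x):
    return (x @ x - 1.0) * x                      # gradient of 0.25*(||x||^2 - 1)^2

def grad_h_inverse(g):
    """Solve (||x||^2 + 1) x = g for x: reduces to the scalar cubic r^3 + r = ||g||."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    roots = np.roots([1.0, 0.0, 1.0, -gnorm])     # r^3 + r - ||g|| = 0
    r = roots[np.argmin(np.abs(roots.imag))].real # the unique real (positive) root
    return (r / gnorm) * g

x = rng.standard_normal(n)
for _ in range(200):
    g = (x @ x + 1.0) * x - lam * grad_f(x)       # grad_h(x) - lam * grad_f(x)
    x = grad_h_inverse(g)

print("||x|| at the end (minimizers satisfy ||x|| = 1):", np.linalg.norm(x))
```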

This work investigates the geometry of a nonconvex reformulation of minimizing a general convex loss function $f(X)$ regularized by the matrix nuclear norm $\|X\|_*$. Nuclear-norm regularized inverse problems are at the heart of many applications in machine learning, signal processing, and control. The statistical performance of nuclear-norm regularization has been studied extensively in the literature using convex analysis techniques. Despite its optimal statistical performance, the resulting optimization problem has high computational complexity when solved by standard or even...

10.48550/arxiv.1704.01265 preprint EN other-oa arXiv (Cornell University) 2017-01-01

This work considers the minimization of a general convex function f(X) over the cone of positive semi-definite matrices whose optimal solution X* is low-rank. Standard first-order solvers require performing an eigenvalue decomposition in each iteration, severely limiting their scalability. A natural nonconvex reformulation of the problem factors the variable X into the product of a rectangular matrix with fewer columns and its transpose. For the special class of matrix sensing and completion problems with quadratic objective functions,...

10.1109/globalsip.2017.8309158 article EN 2017-11-01

This work investigates the parameter estimation performance of line spectral estimation/super-resolution using atomic norm minimization. The focus is on analyzing the algorithm's accuracy in inferring the frequencies and complex magnitudes from noisy observations. When the Signal-to-Noise Ratio is reasonably high and the true frequencies are separated by O(1/n), the atomic norm estimator is shown to localize the correct number of frequencies, each within a neighborhood of size O(√(log n)/n...

10.1109/globalsip.2016.7905822 article EN 2016-12-01
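For reference, the semidefinite-programming form of atomic-norm denoising commonly used in this line of work is sketched below; the trade-off parameter τ and the exact normalization are the usual conventions, stated here as assumptions rather than the paper's specific analysis.

```latex
% Atomic-norm soft thresholding (denoising) for line spectral estimation:
% y is the length-n noisy observation, Toep(u) the Hermitian Toeplitz matrix
% with first column u, and tau > 0 a regularization parameter.
\begin{equation*}
\min_{x,\,u,\,t}\;\; \frac{1}{2}\,\|y - x\|_2^2
  \;+\; \frac{\tau}{2}\left(t + \frac{1}{n}\operatorname{tr}\!\big(\mathrm{Toep}(u)\big)\right)
\quad \text{subject to} \quad
\begin{bmatrix} \mathrm{Toep}(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix} \succeq 0 .
\end{equation*}
```

At the optimum, the estimated frequencies can be read off from the Vandermonde decomposition of Toep(u), for example via the dual polynomial or a Prony-type method.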

Based on the maximum likelihood estimation principle, we derive a collaborative framework that fuses several different estimators and yields a better estimate. Applying it to compressive sensing (CS), we propose a collaborative CS (CCS) scheme consisting of a bank of $K$ CS systems that share the same sensing matrix but have different sparsifying dictionaries. This CCS system is expected to yield better performance than each individual system, while requiring about the same time as needed for a single system when a parallel computing strategy is used. We then provide an approach to designing...

10.1137/17m1148426 article EN SIAM Journal on Imaging Sciences 2018-01-01
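A minimal sketch of the maximum-likelihood fusion idea in its simplest form: several unbiased estimates of the same signal with independent Gaussian errors are combined by inverse-variance weighting. The noise levels and the direct-observation model are illustrative assumptions, not the paper's CS reconstruction setup.

```python
import numpy as np

rng = np.random.default_rng(5)
n, K = 50, 4

x_true = rng.standard_normal(n)
sigmas = np.array([0.2, 0.5, 0.3, 0.8])                  # per-estimator noise levels
estimates = np.stack([x_true + s * rng.standard_normal(n) for s in sigmas])

weights = (1.0 / sigmas**2) / np.sum(1.0 / sigmas**2)    # ML weights for Gaussian errors
fused = weights @ estimates

for k in range(K):
    print(f"estimator {k}: error = {np.linalg.norm(estimates[k] - x_true):.3f}")
print(f"fused      : error = {np.linalg.norm(fused - x_true):.3f}")
```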

This work develops theories and computational methods for overcomplete, non-orthogonal tensor decomposition using convex optimization. Under an incoherence condition on the rank-one factors, we show that one can retrieve the decomposition by solving a convex, infinite-dimensional analog of ℓ1 minimization on the space of measures. The optimal value of this optimization defines the tensor nuclear norm. Two schemes are proposed to...

10.1109/camsap.2015.7383734 article EN 2015-12-01

Low-rank matrix recovery is a fundamental problem in signal processing and machine learning. A recent and very popular approach to recovering a low-rank matrix X is to factorize it as a product of two smaller matrices, i.e., X = UV^T, and then optimize over U, V instead of X. Despite the resulting non-convexity, recent results have shown that many factorized objective functions actually have a benign global geometry, with no spurious...

10.1109/lsp.2020.3008876 article EN publisher-specific-oa IEEE Signal Processing Letters 2020-01-01

Symmetric nonnegative matrix factorization (NMF), a special but important class of the general NMF, has been demonstrated to be useful for data analysis and in particular for various clustering tasks. Unfortunately, designing fast algorithms for symmetric NMF is not as easy as for its nonsymmetric counterpart, with the latter admitting a splitting property that allows efficient alternating-type algorithms. To overcome this issue, we transfer the symmetric NMF to a nonsymmetric one, and can then adopt ideas from the state-of-the-art algorithms designed for solving nonsymmetric NMF. We rigorously...

10.48550/arxiv.1811.05642 preprint EN other-oa arXiv (Cornell University) 2018-01-01
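A heavily simplified sketch of the splitting idea: the symmetric factorization X ≈ U U^T is relaxed to a nonsymmetric problem min_{U,V ≥ 0} ||X - U V^T||_F^2 + lam*||U - V||_F^2, solved here with plain projected-gradient updates. The penalty weight, step size, and initialization are illustrative assumptions standing in for the paper's alternating-type algorithms.

```python
import numpy as np

rng = np.random.default_rng(6)
n, r, lam = 40, 4, 0.5

W = np.abs(rng.standard_normal((n, r)))
X = W @ W.T                                   # symmetric, nonnegative, rank-r target
X /= np.linalg.norm(X, 2)                     # normalize for a stable step size

U = 0.1 * np.abs(rng.standard_normal((n, r)))
V = U.copy()
step = 0.05
for _ in range(3000):
    R = U @ V.T - X
    grad_U = 2 * R @ V + 2 * lam * (U - V)
    grad_V = 2 * R.T @ U + 2 * lam * (V - U)
    U = np.maximum(U - step * grad_U, 0.0)    # projected gradient step (nonnegativity)
    V = np.maximum(V - step * grad_V, 0.0)

print("relative fit error:", np.linalg.norm(U @ V.T - X) / np.linalg.norm(X))
print("factor mismatch ||U - V||_F:", np.linalg.norm(U - V))
```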

Gene selection is one of the critical steps in the classification of microarray data. Since particle swarm optimization (PSO) has no complicated evolutionary operators and fewer parameters that need to be adjusted, it has been used increasingly as an effective technique for gene selection. Because PSO is apt to converge to local minima, which leads to premature convergence, some PSO-based methods may select non-optimal genes with high probability. Selecting predictive genes with low redundancy while not filtering out key genes is still a challenge. To obtain lower...

10.1186/s12859-019-2773-x article EN cc-by BMC Bioinformatics 2019-06-01
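A toy binary-PSO sketch purely to illustrate the mechanics of swarm-based feature selection. Every particle is a binary mask over genes, and the fitness function below is a synthetic stand-in for classifier accuracy (it rewards covering a hidden "informative" set and penalizes selecting too many genes); the swarm parameters are generic textbook values, not the method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_genes, n_particles, n_iters = 100, 20, 60
informative = rng.choice(n_genes, size=10, replace=False)

def fitness(mask):
    hits = mask[informative].sum()                 # informative genes selected
    return hits - 0.1 * mask.sum()                 # penalty on selecting too many

pos = (rng.random((n_particles, n_genes)) < 0.5).astype(float)
vel = np.zeros((n_particles, n_genes))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                          # inertia and acceleration weights
for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))              # sigmoid transfer to [0, 1]
    pos = (rng.random(pos.shape) < prob).astype(float)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

selected = np.flatnonzero(gbest)
print("genes selected:", len(selected),
      "| informative genes recovered:", len(set(selected) & set(informative)))
```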

Abstract The standard simplex in $\mathbb{R}^{n}$, also known as the probability simplex, is the set of nonnegative vectors whose entries sum up to 1. It frequently appears as a constraint in optimization problems that arise in machine learning, statistics, data science, operations research and beyond. We convert the simplex to the unit sphere and thus transform the corresponding constrained problem into an optimization problem on a simple, smooth manifold. We show that the Karush-Kuhn-Tucker points of a strict-saddle minimization over the simplex all correspond to those of the transformed...

10.1093/imaiai/iaad017 article EN Information and Inference A Journal of the IMA 2023-04-27
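A minimal numpy sketch of the simplex-to-sphere idea: a point on the probability simplex is parameterized as x = z ∘ z with ||z||_2 = 1, turning the simplex-constrained problem into a smooth problem on the unit sphere. The linear objective f(x) = c^T x (whose simplex minimizer is a vertex) and the plain Riemannian gradient steps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 8
c = rng.standard_normal(n)

def grad_f(x):
    return c                                      # gradient of f(x) = c^T x

z = rng.standard_normal(n)
z /= np.linalg.norm(z)                            # start on the unit sphere
step = 0.1
for _ in range(500):
    x = z * z                                     # corresponding point on the simplex
    euclid_grad = 2.0 * z * grad_f(x)             # chain rule through x = z ∘ z
    riem_grad = euclid_grad - (euclid_grad @ z) * z   # project onto the tangent space
    z = z - step * riem_grad
    z /= np.linalg.norm(z)                        # retract back to the sphere

x = z * z
print("sum(x) =", x.sum(), "| argmin of c:", int(np.argmin(c)),
      "| argmax of x:", int(np.argmax(x)))
```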