- Sparse and Compressive Sensing Techniques
- Quantum Computing Algorithms and Architecture
- Quantum Information and Cryptography
- Blind Source Separation Techniques
- Indoor and Outdoor Localization Technologies
- Stochastic Gradient Optimization Techniques
- Microwave Imaging and Scattering Analysis
- Bayesian Modeling and Causal Inference
- Direction-of-Arrival Estimation Techniques
- Neural Networks and Applications
- Advanced Neural Network Applications
- Wireless Communication Security Techniques
- Speech and Audio Processing
- Tensor Decomposition and Applications
- Statistical Methods and Inference
- Neural Networks and Reservoir Computing
- Machine Learning and ELM
- Numerical Methods in Inverse Problems
- Photoacoustic and Ultrasonic Imaging
- Mathematical Approximation and Integration
- Domain Adaptation and Few-Shot Learning
- Distributed Sensor Networks and Detection Algorithms
- Adversarial Robustness in Machine Learning
- Image and Signal Denoising Methods
- Time Series Analysis and Forecasting
Capital One (United States)
2025
Jacksonville University
2024
University of Florida
2024
Wichita State University
2016-2021
IBM Research - Thomas J. Watson Research Center
2014-2020
Boeing (Australia)
2020
Boeing (United States)
2020
Emory University
2020
Massachusetts Institute of Technology
2010-2016
Observatoire de la Côte d’Azur
2013
This paper presents a novel iterative greedy reconstruction algorithm for practical compressed sensing (CS), called the sparsity adaptive matching pursuit (SAMP). Compared with other state-of-the-art algorithms, the most innovative feature of SAMP is its capability of reconstructing the signal without prior information of its sparsity. This makes it a promising candidate for many applications where the number of non-zero (significant) coefficients is not available. The proposed algorithm adopts a similar flavor to the EM algorithm, which alternately estimates...
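A minimal NumPy sketch of the stagewise idea behind SAMP; the stage step, candidate selection, and stopping rule below are simplified illustrative choices, not the authors' exact algorithm:

```python
import numpy as np

def samp(A, y, step=1, tol=1e-6, max_iter=100):
    """Sparsity-adaptive matching pursuit, simplified sketch.

    Recovers x with A @ x ~= y without knowing the sparsity level:
    the working support size starts at `step` and is enlarged stage
    by stage whenever the residual stops improving.
    """
    m, n = A.shape
    size = step                       # current guess of the support size
    residual = y.copy()
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        # Candidate set: current support plus the columns most
        # correlated with the residual.
        corr = np.abs(A.T @ residual)
        candidates = np.union1d(support, np.argsort(corr)[-size:]).astype(int)
        # Least-squares fit on candidates; keep the `size` largest coefficients.
        coef, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
        best = candidates[np.argsort(np.abs(coef))[-size:]]
        coef_best, *_ = np.linalg.lstsq(A[:, best], y, rcond=None)
        new_residual = y - A[:, best] @ coef_best
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            size += step              # stage change: enlarge the support guess
        else:
            support, residual = best, new_residual
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(n)
    coef_final, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x[support] = coef_final
    return x
```

The point of the stage mechanism is that the sparsity level is discovered, not supplied: the support-size guess only grows when the current size can no longer shrink the residual.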
This paper introduces a new framework of fast and efficient sensing matrices for practical compressive sensing, called Structurally Random Matrix (SRM). In the proposed framework, we pre-randomize the signal by scrambling its samples or flipping its sample signs, then fast-transform the randomized signal and, finally, subsample the transform coefficients as the final measurements. SRM is highly relevant for large-scale, real-time applications, as it has fast computation and supports block-based processing. In addition, we can show that...
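The three SRM stages (pre-randomize, fast-transform, subsample) can be sketched as follows; an orthonormal FFT stands in for the framework's generic fast transform, and the energy-normalizing scale factor is an illustrative assumption:

```python
import numpy as np

def srm_measure(x, m, rng):
    """Structurally-random-matrix style measurement sketch:
    (1) pre-randomize the signal by flipping sample signs,
    (2) apply a fast transform (here an orthonormal FFT),
    (3) subsample m of the transform coefficients.
    Returns the measurements plus the randomization state a
    reconstruction algorithm would need.
    """
    n = len(x)
    signs = rng.choice([-1.0, 1.0], size=n)        # sign-flip randomization
    coeffs = np.fft.fft(signs * x, norm="ortho")   # O(n log n) transform
    keep = rng.choice(n, size=m, replace=False)    # random subsampling
    return coeffs[keep] * np.sqrt(n / m), (signs, keep)
```

Because every stage is either diagonal, a fast transform, or an index selection, the whole sensing operator runs in O(n log n) time without ever forming an m x n matrix.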
This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS) - a solution for Distributed Video Coding (DVC) based on compressed sensing (CS) theory. The DISCOS framework compressively samples each video frame independently at the encoder and recovers frames jointly at the decoder by exploiting an interframe sparsity model and performing sparse recovery with side information. Simulation results show that DISCOS significantly outperforms the baseline CS-based scheme of intraframe coding and intraframe decoding. Moreover, our framework can perform...
This paper studies the problem of accurately recovering a k-sparse vector β* ∈ ℝ^p from highly corrupted linear measurements y = Xβ* + e...
We introduce and analyze a new technique for model reduction of deep neural networks. While large networks are theoretically capable of learning arbitrarily complex models, overfitting and redundancy negatively affect the prediction accuracy and variance. Our Net-Trim algorithm prunes (sparsifies) a trained network layer-wise, removing connections at each layer by solving a convex optimization program. This program seeks a sparse set of weights that keeps the layer inputs and outputs consistent with the originally trained model. The...
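The layer-wise idea can be illustrated with a plain l1-regularized least-squares fit solved by ISTA; this sketch ignores Net-Trim's ReLU consistency constraints and uses a generic lasso in their place:

```python
import numpy as np

def prune_layer(X, Y, lam=0.1, n_iter=500):
    """Layer-wise pruning sketch: find sparse weights W such that X @ W
    stays close to the original layer response Y, by running ISTA on
        0.5 * ||X @ W - Y||_F^2 + lam * ||W||_1.
    (Net-Trim itself enforces ReLU consistency; this lasso version is a
    simplified stand-in for the convex program.)
    """
    W = np.zeros((X.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(X, 2) ** 2          # 1/L gradient step size
    for _ in range(n_iter):
        W = W - step * (X.T @ (X @ W - Y))          # gradient step
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)  # soft-threshold
    return W
```

The soft-threshold sets small weights exactly to zero, so the returned layer is genuinely sparse while its input-output map stays close to the trained one.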
This paper confirms a surprising phenomenon first observed by Wright under a different setting: given m highly corrupted measurements y = A_Ω· x* + e*...
The coronavirus disease 2019 (COVID-19) is a pandemic [Parikh SR, Bly RA, Bonilla-Velez J, et al. Pediatric otolaryngology divisional and institutional preparatory response at Seattle Children's Hospital after COVID-19 regional exposure. doi:10.1177/0194599820919748]. SARS-CoV-2 concentrates in the upper airway mucosa [Zou L, Ruan F, Huang M, et al. SARS-CoV-2 viral load in upper respiratory specimens of infected patients. N Engl J Med. 2020;382:1177]; thus,...
We consider the problem of robustly recovering a $k$-sparse coefficient vector from the Fourier series that it generates, restricted to the interval $[-\Omega, \Omega]$. The difficulty of this problem is linked to the superresolution factor SRF, equal to the ratio of the Rayleigh length (inverse of $\Omega$) to the spacing of the grid supporting the sparse vector. In the presence of additive deterministic noise of norm $\sigma$, we show upper and lower bounds on the minimax error rate that both scale like $(SRF)^{2k-1} \sigma$, providing a partial answer to the question...
This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS) - a solution for Distributed Video Coding (DVC) based on the recently emerging compressed sensing theory. The DISCOS framework compressively samples each video frame independently at the encoder. However, it recovers frames jointly at the decoder by exploiting an interframe sparsity model and performing sparse recovery with side information. In particular, along with global frame-based measurements, the encoder also acquires local block-based measurements for block...
The power of quantum computers is still somewhat speculative. Although they are certainly faster than classical ones at some tasks, the class of problems they can efficiently solve has not been mapped definitively onto known complexity theory. This means that we do not know for which calculations there will be a "quantum advantage," once an appropriate algorithm is found. One way to answer the question is to find those algorithms, but finding truly quantum algorithms turns out to be very difficult. In previous work over the past three decades,...
The low-rank matrix approximation problem involves finding a rank-k version of an m x n matrix A, labeled A_k, such that A_k is as "close" as possible to the best SVD approximation of A at the same rank level. Previous approaches approximate A by non-uniformly and adaptively sampling some of its columns (or rows), hoping that this subset contains enough information about A. The sub-matrix is then used for the approximation process. However, these approaches are often computationally intensive due to the complexity of the sampling. In this paper, we propose a fast and efficient algorithm which first...
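For context, a standard randomized sketch for fast rank-k approximation (in the style of the randomized range-finder literature, not the specific algorithm of this abstract) looks like:

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, rng=None):
    """Randomized rank-k approximation sketch: sample the column space
    of A with a Gaussian test matrix, project A onto that subspace, and
    truncate the small SVD to rank k.
    """
    rng = rng or np.random.default_rng()
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the sampled range
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]   # rank-k approximation of A
```

The expensive SVD is done only on the small (k + oversample) x n projection, which is what makes sketch-based methods cheaper than a full SVD of A.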
Given an order-$d$ tensor ${\mathcal {A}} \in {\mathbb {R}}^{n \times n \times \ldots \times n}$, we present a simple, element-wise sparsification algorithm that zeroes out all sufficiently small elements of ${\mathcal {A}}$, keeps all sufficiently large elements of ${\mathcal {A}}$, and retains some of the remaining elements with probabilities proportional to the square of their magnitudes. We analyze the approximation accuracy of the proposed algorithm using a powerful inequality that we derive. This inequality bounds the spectral norm of a random tensor and is of independent interest. As a result, we obtain a novel bound for the tensor sparsification problem.
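A matrix version of the element-wise scheme can be sketched as follows; the thresholds `small` and `large` and the exact probability rescaling are illustrative assumptions rather than the paper's tuned parameters:

```python
import numpy as np

def sparsify(T, small, large, rng=None):
    """Element-wise sparsification sketch: zero out entries with magnitude
    below `small`, keep entries with magnitude at least `large`, and keep
    intermediate entries with probability proportional to their squared
    magnitude (rescaled so kept entries are unbiased estimates).
    """
    rng = rng or np.random.default_rng()
    out = np.zeros_like(T, dtype=float)
    big = np.abs(T) >= large
    out[big] = T[big]                                  # always keep large entries
    mid = (np.abs(T) > small) & ~big
    p = np.clip((T[mid] / large) ** 2, 0.0, 1.0)       # keep-probability per entry
    kept = rng.random(p.shape) < p
    out[mid] = np.where(kept, T[mid] / p, 0.0)         # unbiased rescaling
    return out
```

Dividing each kept intermediate entry by its keep-probability makes the sparsified array an unbiased estimate of the original, which is what the spectral-norm analysis relies on.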
We present a pursuit-like algorithm, which we call the superset method, for recovery of sparse vectors from consecutive Fourier measurements in the super-resolution regime. The algorithm has a subspace identification step that hinges on the translation invariance of the Fourier transform, followed by a removal step to estimate the solution's support. The algorithm is always successful in the noiseless regime (unlike L1-minimization) and generalizes to higher dimensions (unlike the matrix pencil method). Relative robustness to noise is demonstrated numerically.
In this paper, we propose a general collaborative sparse representation framework for multi-sensor classification, which takes into account the correlations as well as complementary information between heterogeneous sensors simultaneously, while considering joint sparsity within each sensor's observations. We also robustify our models to deal with the presence of noise and low-rank interference signals. Specifically, we demonstrate that incorporating the noise or interference signal component in the model is essential for classification...
Large pre-trained models for zero/few-shot learning excel in language and vision domains but encounter challenges in multivariate time series (TS) due to the diverse nature and scarcity of publicly available pre-training data. Consequently, there has been a recent surge in utilizing large language models (LLMs) with token adaptations for TS forecasting. These approaches employ cross-domain transfer learning and surprisingly yield impressive results. However, these approaches are typically very slow (~billion parameters) and do not consider...
It has been empirically observed that the flatness of minima obtained from training deep networks seems to correlate with better generalization. However, for networks with positively homogeneous activations, most measures of sharpness/flatness are not invariant to rescalings of the network parameters that correspond to the same function. This means the measure of flatness/sharpness can be made as small or as large as possible through rescaling, rendering the quantitative measures meaningless. In this paper we show that for homogeneous networks these rescalings constitute...
Motivated by recent work on stochastic gradient descent methods, we develop two stochastic variants of greedy algorithms for possibly non-convex optimization problems with sparsity constraints. We prove linear convergence in expectation to the solution within a specified tolerance. This generalized framework applies to problems such as sparse signal recovery in compressed sensing, low-rank matrix recovery, and covariance estimation, giving methods with provable guarantees that often outperform their deterministic...
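A sketch of one such stochastic greedy variant, in the spirit of stochastic iterative hard thresholding; the batch size and step rule below are illustrative choices, not the paper's exact algorithms:

```python
import numpy as np

def sto_iht(A, y, k, batch=15, n_iter=1000, rng=None):
    """Stochastic iterative hard-thresholding sketch: each iteration takes
    a gradient step computed on a random mini-batch of measurements, then
    projects onto the set of k-sparse vectors by keeping the k largest
    entries in magnitude.
    """
    rng = rng or np.random.default_rng()
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        idx = rng.choice(m, size=batch, replace=False)
        Ab, yb = A[idx], y[idx]
        x = x + Ab.T @ (yb - Ab @ x)       # mini-batch gradient step
        drop = np.argsort(np.abs(x))[:-k]  # indices of all but the k largest
        x[drop] = 0.0                      # hard-thresholding projection
    return x
```

The hard-threshold projection is the greedy step; replacing the full gradient with a mini-batch estimate is what makes each iteration cheap while convergence in expectation can still be established.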
Quantum annealing (QA) is a quantum computing algorithm that works on the principle of Adiabatic Quantum Computation (AQC), and it has shown significant computational advantages in solving combinatorial optimization problems such as vehicle routing problems (VRP) when compared to classical algorithms. This paper presents a QA approach for a variant of the VRP known as the multi-depot capacitated vehicle routing problem (MDCVRP). This is an NP-hard problem with real-world applications in the fields of transportation, logistics, and supply chain management. We consider...