Nam H. Nguyen

ORCID: 0000-0002-1254-0069
Research Areas
  • Sparse and Compressive Sensing Techniques
  • Quantum Computing Algorithms and Architecture
  • Quantum Information and Cryptography
  • Blind Source Separation Techniques
  • Indoor and Outdoor Localization Technologies
  • Stochastic Gradient Optimization Techniques
  • Microwave Imaging and Scattering Analysis
  • Bayesian Modeling and Causal Inference
  • Direction-of-Arrival Estimation Techniques
  • Neural Networks and Applications
  • Advanced Neural Network Applications
  • Wireless Communication Security Techniques
  • Speech and Audio Processing
  • Tensor decomposition and applications
  • Statistical Methods and Inference
  • Neural Networks and Reservoir Computing
  • Machine Learning and ELM
  • Numerical methods in inverse problems
  • Photoacoustic and Ultrasonic Imaging
  • Mathematical Approximation and Integration
  • Domain Adaptation and Few-Shot Learning
  • Distributed Sensor Networks and Detection Algorithms
  • Adversarial Robustness in Machine Learning
  • Image and Signal Denoising Methods
  • Time Series Analysis and Forecasting

Capital One (United States)
2025

Jacksonville University
2024

University of Florida
2024

Wichita State University
2016-2021

IBM Research - Thomas J. Watson Research Center
2014-2020

Boeing (Australia)
2020

Boeing (United States)
2020

Emory University
2020

Massachusetts Institute of Technology
2010-2016

Observatoire de la Côte d’Azur
2013

This paper presents a novel iterative greedy reconstruction algorithm for practical compressed sensing (CS), called the sparsity adaptive matching pursuit (SAMP). Compared with other state-of-the-art greedy algorithms, the most innovative feature of SAMP is its capability of reconstructing a signal without prior information of its sparsity. This makes it a promising candidate for many applications where the number of non-zero (significant) coefficients is not available. The proposed algorithm adopts a flavor similar to the EM algorithm, which alternatively estimates...

10.1109/acssc.2008.5074472 article EN 2008 42nd Asilomar Conference on Signals, Systems and Computers 2008-10-01
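
As a rough illustration of the stagewise idea described in the abstract, here is a minimal NumPy sketch of a SAMP-style loop. The stage-switching rule, candidate-set handling, and step size below are simplified for readability and are not the paper's exact pseudocode.

```python
import numpy as np

def samp(Phi, y, step=1, tol=1e-6, max_iter=100):
    """Sparsity-adaptive matching pursuit (sketch).

    Recovers a sparse x from y = Phi @ x without knowing the sparsity
    level in advance: the tested support size grows in stages whenever
    the residual stops improving.
    """
    m, n = Phi.shape
    size = step                          # current test sparsity level
    support = np.array([], dtype=int)
    residual = y.copy()
    x = np.zeros(n)
    for _ in range(max_iter):
        # Merge the old support with the best-correlated new atoms
        corr = np.abs(Phi.T @ residual)
        candidates = np.union1d(support, np.argsort(corr)[-size:])
        # Least squares on the candidate set, then prune to `size` atoms
        z, *_ = np.linalg.lstsq(Phi[:, candidates], y, rcond=None)
        keep = candidates[np.argsort(np.abs(z))[-size:]]
        z_keep, *_ = np.linalg.lstsq(Phi[:, keep], y, rcond=None)
        new_residual = y - Phi[:, keep] @ z_keep
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            size += step                 # stage switch: enlarge support
        else:
            support, residual = keep, new_residual
            if np.linalg.norm(residual) < tol:
                break
    x[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
    return x
```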

This paper introduces a new framework of fast and efficient sensing matrices for practical compressive sensing, called Structurally Random Matrix (SRM). In the proposed framework, we pre-randomize a signal by scrambling its samples or flipping its sample signs, then fast-transform the randomized samples and, finally, subsample the transform coefficients as the final measurements. SRM is highly relevant for large-scale, real-time applications as it has fast computation and supports block-based processing. In addition, we can show that...

10.1109/tsp.2011.2170977 article EN IEEE Transactions on Signal Processing 2011-10-12
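
The three-stage SRM pipeline (pre-randomize, fast-transform, subsample) is easy to sketch. The snippet below is an illustrative measurement operator, assuming a DCT as the fast transform and a sqrt(n/m) normalization; the paper covers other transform choices and a block-based variant.

```python
import numpy as np
from scipy.fft import dct

def srm_measure(x, m, rng):
    """Structurally Random Matrix measurement (sketch).

    (1) pre-randomize by flipping sample signs, (2) apply a fast
    transform (here a DCT), (3) subsample the transform coefficients.
    Returns the measurements and the randomization state needed by a
    reconstruction algorithm to form the adjoint operator.
    """
    n = x.shape[0]
    signs = rng.choice([-1.0, 1.0], size=n)       # random sign flipping
    coeffs = dct(signs * x, norm="ortho")         # fast transform
    rows = rng.choice(n, size=m, replace=False)   # random subsampling
    return coeffs[rows] * np.sqrt(n / m), (signs, rows)
```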

This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS) - a solution for Distributed Video Coding (DVC) based on the Compressed Sensing (CS) theory. The DISCOS framework compressively samples each video frame independently at the encoder and recovers frames jointly at the decoder by exploiting an interframe sparsity model and performing sparse recovery with side information. Simulation results show that DISCOS significantly outperforms the baseline CS-based scheme of intraframe-coding and intraframe-decoding. Moreover, our framework can perform...

10.1109/ciss.2009.5054678 article EN 2009-03-01
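
A minimal sketch of the "sparse recovery with side information" step: if the current frame is modeled as the side-information frame plus a sparse correction, recovery reduces to a standard l1 problem on the correction. The additive model and the ISTA solver below are illustrative simplifications, not the DISCOS codec itself.

```python
import numpy as np

def recover_with_side_info(Phi, y, f_si, lam=0.1, n_iter=500):
    """CS recovery with side information (sketch of the idea).

    Model the frame as f = f_si + d with d sparse (interframe
    sparsity), so y = Phi @ f becomes y - Phi @ f_si = Phi @ d;
    a standard l1 solver then recovers the correction d.
    """
    r = y - Phi @ f_si                       # residual measurements
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of grad
    d = np.zeros(Phi.shape[1])
    for _ in range(n_iter):                  # ISTA iterations
        w = d - Phi.T @ (Phi @ d - r) / L
        d = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
    return f_si + d
```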

This paper studies the problem of accurately recovering a k-sparse vector β* ∈ ℝ^p from highly corrupted linear measurements y = Xβ* + e...

10.1109/tit.2012.2232347 article EN IEEE Transactions on Information Theory 2012-12-08
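
One common way to make this recovery problem concrete is an extended Lasso over the augmented design [X, I], penalizing both the sparse vector and the sparse corruption. The ISTA solver and the penalty levels below are an illustrative sketch; the paper's analysis dictates the proper calibration.

```python
import numpy as np

def robust_lasso(X, y, lam_beta, lam_e, n_iter=500):
    """Extended Lasso for grossly corrupted measurements (sketch).

    Solves  min_{beta,e} 0.5*||y - X beta - e||^2
                         + lam_beta*||beta||_1 + lam_e*||e||_1
    by ISTA on the augmented design [X, I].
    """
    n, p = X.shape
    A = np.hstack([X, np.eye(n)])                 # augmented design [X, I]
    lam = np.concatenate([np.full(p, lam_beta), np.full(n, lam_e)])
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant
    z = np.zeros(p + n)
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        w = z - grad / L
        z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
    return z[:p], z[p:]                           # (beta_hat, e_hat)
```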

We introduce and analyze a new technique for model reduction of deep neural networks. While large networks are theoretically capable of learning arbitrarily complex models, overfitting and model redundancy negatively affect the prediction accuracy and variance. Our Net-Trim algorithm prunes (sparsifies) a trained network layer-wise, removing connections at each layer by solving a convex optimization program. This program seeks a sparse set of weights at each layer that keeps the layer inputs and outputs consistent with the originally trained model. The...

10.48550/arxiv.1611.05162 preprint EN other-oa arXiv (Cornell University) 2016-01-01
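
A heavily simplified sketch of the layer-wise idea: for each output unit, fit a sparse weight vector whose response on the layer's recorded inputs stays close to the trained layer's outputs. The actual Net-Trim program handles the ReLU constraints convexly; the plain l1-regularized least squares below (via scikit-learn's Lasso) only captures the flavor.

```python
import numpy as np
from sklearn.linear_model import Lasso

def nettrim_layer(X, Y, alpha=0.05):
    """Layer-wise pruning in the spirit of Net-Trim (sketch).

    X: layer inputs (n_samples, n_in); Y: the trained layer's outputs
    (n_samples, n_out). Each output unit gets a sparse weight vector
    whose response on X approximates the original output.
    """
    W_sparse = np.zeros((X.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        model.fit(X, Y[:, j])                # l1-regularized least squares
        W_sparse[:, j] = model.coef_
    return W_sparse
```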

This paper confirms a surprising phenomenon first observed by Wright et al. under a different setting: given m highly corrupted measurements y = A_{Ω·}x* + e*...

10.1109/tit.2013.2240435 article EN IEEE Transactions on Information Theory 2013-01-16
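
The recovery program behind results of this type is typically l1 minimization of the residual, which is a linear program. A small SciPy sketch, assuming the standard slack-variable reformulation:

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Decoding under gross corruptions via l1 minimization (sketch).

    Solves  min_x ||y - A x||_1  as a linear program by introducing
    slack variables t with  -t <= y - A x <= t.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # objective: sum(t)
    # Constraints:  A x - t <= y   and   -A x - t <= -y
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + m))
    return res.x[:n]
```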

The coronavirus disease 2019 (COVID-19) is a pandemic (Parikh S.R., Bly R.A., Bonilla-Velez J., et al., "Pediatric otolaryngology divisional and institutional preparatory response at Seattle Children's Hospital after COVID-19 regional exposure," doi:10.1177/0194599820919748). The virus concentrates in the upper airway mucosa (Zou L., Ruan F., Huang M., et al., "SARS-CoV-2 viral load in upper respiratory specimens of infected patients," N Engl J Med. 2020;382:1177); thus,...

10.1016/j.joms.2020.04.040 article EN other-oa Journal of Oral and Maxillofacial Surgery 2020-05-01

We consider the problem of robustly recovering a $k$-sparse coefficient vector from the Fourier series that it generates, restricted to the interval $[-\Omega, \Omega]$. The difficulty of this problem is linked to the superresolution factor SRF, equal to the ratio of the Rayleigh length (inverse of $\Omega$) by the spacing of the grid supporting the sparse vector. In the presence of additive deterministic noise of norm $\sigma$, we show upper and lower bounds on the minimax error rate, which both scale like $(SRF)^{2k-1} \sigma$, providing a partial answer to the question...

10.48550/arxiv.1502.01385 preprint EN other-oa arXiv (Cornell University) 2015-01-01
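
A schematic LaTeX restatement of the rate quoted in the abstract; the precise error metric, constants, and noise model are specified in the paper.

```latex
% Minimax recovery error for a k-sparse vector under deterministic
% noise of norm sigma: upper and lower bounds match up to constants.
\[
  \inf_{\widehat{x}} \, \sup_{\|x\|_0 \le k}
    \operatorname{err}(\widehat{x}, x)
  \;\asymp\; \mathrm{SRF}^{\,2k-1}\, \sigma .
\]
```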

This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS) - a solution for Distributed Video Coding (DVC) based on the recently emerging Compressed Sensing theory. The DISCOS framework compressively samples each video frame independently at the encoder. However, it recovers video frames jointly at the decoder by exploiting an interframe sparsity model and performing sparse recovery with side information. In particular, along with global frame-based measurements, the encoder also acquires local block-based measurements for block...

10.1109/icip.2009.5414631 article EN 2009-11-01

The power of quantum computers is still somewhat speculative. Although they are certainly faster than classical ones at some tasks, the class of problems they can efficiently solve has not been mapped definitively onto known complexity theory. This means that we do not know for which calculations there will be a "quantum advantage," once an algorithm is found. One way to answer the question is to find those algorithms, but finding truly quantum algorithms turns out to be very difficult. In previous work over the past three decades,...

10.1109/tnnls.2019.2933394 article EN IEEE Transactions on Neural Networks and Learning Systems 2019-01-01

10.1109/icassp49660.2025.10889449 article EN ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2025-03-12

The low-rank matrix approximation problem involves finding a rank-k version of an m x n matrix A, labeled A_k, such that A_k is as "close" as possible to the best SVD approximation of A at the same rank level. Previous approaches approximate A by non-uniformly, adaptively sampling some columns (or rows), hoping that this subset contains enough information about A. The sub-matrix is then used for the approximation process. However, these approaches are often computationally intensive due to the complexity of the sampling. In this paper, we propose a fast and efficient algorithm which first...

10.1145/1536414.1536446 article EN 2009-05-31
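
For context on the column-sampling approaches the abstract contrasts against, here is a generic norm-squared column-sampling sketch in NumPy; the paper's own algorithm is designed to avoid this sampling cost, so treat this only as the baseline idea.

```python
import numpy as np

def sampled_low_rank(A, k, c, rng):
    """Randomized rank-k approximation via column sampling (sketch).

    Samples c columns of A with probabilities proportional to their
    squared norms, then projects A onto the top-k left singular
    vectors of the rescaled sampled sub-matrix.
    """
    probs = np.sum(A ** 2, axis=0) / np.sum(A ** 2)
    cols = rng.choice(A.shape[1], size=c, replace=True, p=probs)
    C = A[:, cols] / np.sqrt(c * probs[cols])   # rescale for unbiasedness
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    Uk = U[:, :k]
    return Uk @ (Uk.T @ A)                      # rank-k approximation of A
```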

Given an order-$d$ tensor ${\mathcal{A}} \in {\mathbb{R}}^{n \times n \times \cdots \times n}$, we present a simple, element-wise sparsification algorithm that zeroes out all sufficiently small elements of ${\mathcal{A}}$, keeps all sufficiently large elements of ${\mathcal{A}}$, and retains some of the remaining elements with probabilities proportional to the square of their magnitudes. We analyze the approximation accuracy of the proposed algorithm using a powerful inequality that we derive. This inequality bounds the spectral norm of a random tensor and is of independent interest. As a result, we obtain novel bounds for the tensor sparsification problem.

10.1093/imaiai/iav004 article EN Information and Inference A Journal of the IMA 2015-05-12
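
A direct transcription of the element-wise rule described above, with the two thresholds taken as inputs (the paper derives them from the target accuracy):

```python
import numpy as np

def sparsify_tensor(A, small, large, rng):
    """Element-wise tensor sparsification (sketch).

    Zeroes entries below `small`, keeps entries above `large`, and
    keeps intermediate entries with probability proportional to their
    squared magnitude, rescaled so the result is unbiased.
    """
    out = np.zeros_like(A)
    mags = np.abs(A)
    out[mags >= large] = A[mags >= large]          # keep large entries
    mid = (mags > small) & (mags < large)
    p = np.clip((A[mid] / large) ** 2, 0.0, 1.0)   # prob ~ magnitude^2
    keep = rng.random(p.shape) < p
    kept = np.zeros_like(A[mid])
    kept[keep] = A[mid][keep] / p[keep]            # unbiased rescaling
    out[mid] = kept
    return out
```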

We present a pursuit-like algorithm that we call the "superset method" for recovery of sparse vectors from consecutive Fourier measurements in the super-resolution regime. The algorithm has a subspace identification step that hinges on the translation invariance of the Fourier transform, followed by a removal step to estimate the solution's support. The method is always successful in the noiseless regime (unlike L1-minimization) and generalizes to higher dimensions (unlike the matrix pencil method). Relative robustness to noise is demonstrated numerically.

10.5281/zenodo.54360 preprint EN arXiv (Cornell University) 2013-09-09
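
For context, the abstract contrasts the superset method with the matrix pencil method. A minimal noiseless-case matrix pencil sketch for estimating frequencies from consecutive Fourier samples looks like this (the superset method's subspace-identification and removal steps are not shown):

```python
import numpy as np

def matrix_pencil_freqs(y, k):
    """Matrix pencil frequency estimation (sketch, noiseless case).

    Assumes y[m] = sum_j c_j * exp(-2j*pi*f_j*m). Two Hankel matrices
    shifted by one sample encode the translation invariance: the shift
    multiplies each mode by exp(-2j*pi*f_j).
    """
    n = len(y)
    L = n // 2                                    # pencil parameter
    Y0 = np.array([y[i:i + L] for i in range(n - L)])
    Y1 = np.array([y[i + 1:i + 1 + L] for i in range(n - L)])
    eigs = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    top = eigs[np.argsort(-np.abs(eigs))[:k]]     # k dominant eigenvalues
    return np.sort(np.mod(-np.angle(top) / (2 * np.pi), 1.0))
```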

In this paper, we propose a general collaborative sparse representation framework for multi-sensor classification, which takes into account the correlations as well as the complementary information between heterogeneous sensors simultaneously, while considering joint sparsity within each sensor's observations. We also robustify our models to deal with the presence of sparse noise and low-rank interference signals. Specifically, we demonstrate that incorporating the noise or interference signal component in the model is essential for classification...

10.1109/tsp.2016.2521605 article EN IEEE Transactions on Signal Processing 2016-01-25
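
A sketch of the joint-sparsity ingredient: the coefficient vectors of all sensors share a row support, encouraged by an l_{2,1} penalty. The low-rank interference term and the classification-by-residual step of the full framework are omitted here.

```python
import numpy as np

def joint_sparse_codes(dicts, signals, lam, n_iter=300):
    """Joint-sparse representation across sensors (sketch).

    dicts[s]: sensor s's dictionary (m_s x n), columns indexed by the
    same training atoms; signals[s]: its observation (m_s,). The
    coefficient matrix X (n x S) is regularized row-wise so sensors
    agree on which atoms are active.
    """
    S, n = len(dicts), dicts[0].shape[1]
    X = np.zeros((n, S))
    L = max(np.linalg.norm(D, 2) ** 2 for D in dicts)   # step-size bound
    for _ in range(n_iter):
        G = np.column_stack([D.T @ (D @ X[:, s] - y)
                             for s, (D, y) in enumerate(zip(dicts, signals))])
        W = X - G / L
        # Row-wise soft-thresholding: shrink whole rows toward zero
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        X = W * np.maximum(1 - (lam / L) / np.maximum(norms, 1e-12), 0.0)
    return X
```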

Large pre-trained models for zero/few-shot learning excel in language and vision domains but encounter challenges in multivariate time series (TS) due to the diverse nature and scarcity of publicly available pre-training data. Consequently, there has been a recent surge in utilizing large language models (LLMs) with token adaptations for TS forecasting. These approaches employ cross-domain transfer learning and surprisingly yield impressive results. However, these approaches are typically very slow and large (~billion parameters) and do not consider...

10.48550/arxiv.2401.03955 preprint EN cc-by-nc-nd arXiv (Cornell University) 2024-01-01

It has been empirically observed that the flatness of minima obtained from training deep networks seems to correlate with better generalization. However, for deep networks with positively homogeneous activations, most measures of sharpness/flatness are not invariant to rescalings of the network parameters that correspond to the same function. This means that the measure of flatness/sharpness can be made as small or as large as possible through rescaling, rendering the quantitative measures meaningless. In this paper we show that for deep networks with positively homogenous activations, these rescalings constitute...

10.48550/arxiv.1902.02434 preprint EN cc-by-nc-sa arXiv (Cornell University) 2019-01-01
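
The rescaling invariance is easy to verify numerically: scaling one ReLU layer's weights by a > 0 and the next layer's by 1/a leaves the network function unchanged, while weight-norm-based sharpness proxies change arbitrarily. A tiny NumPy demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 8)), rng.standard_normal((4, 16))
x = rng.standard_normal(8)

def net(W1, W2, x):
    # ReLU is positively homogeneous: relu(a*z) = a*relu(z) for a > 0
    return W2 @ np.maximum(W1 @ x, 0.0)

a = 100.0
print(np.allclose(net(W1, W2, x), net(a * W1, W2 / a, x)))  # True
print(np.linalg.norm(W1), np.linalg.norm(a * W1))  # very different "scale"
```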

Motivated by recent work on stochastic gradient descent methods, we develop two stochastic variants of greedy algorithms for possibly non-convex optimization problems with sparsity constraints. We prove linear convergence in expectation to the solution within a specified tolerance. This generalized framework applies to problems such as sparse signal recovery in compressed sensing, low-rank matrix recovery, and covariance matrix estimation, giving methods with provable convergence guarantees that often outperform their deterministic...

10.48550/arxiv.1407.0088 preprint EN other-oa arXiv (Cornell University) 2014-01-01
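
A minimal sketch of one such stochastic greedy variant (a StoIHT-style loop): sample a block of rows, take an unbiased gradient step, and hard-threshold to the k largest coefficients. Block sizes and the step rule below are illustrative.

```python
import numpy as np

def sto_iht(Phi, y, k, batches=10, n_epochs=50, rng=None):
    """Stochastic iterative hard thresholding (sketch).

    Each inner step uses a random block of rows of Phi for an
    unbiased gradient estimate, then projects onto k-sparse vectors.
    """
    rng = rng or np.random.default_rng()
    m, n = Phi.shape
    L = np.linalg.norm(Phi, 2) ** 2        # conservative step-size bound
    x = np.zeros(n)
    for _ in range(n_epochs):
        for b in np.array_split(rng.permutation(m), batches):
            # Rescaled block gradient: unbiased estimate of the full one
            grad = Phi[b].T @ (Phi[b] @ x - y[b]) * (m / len(b))
            w = x - grad / L
            support = np.argsort(np.abs(w))[-k:]
            x = np.zeros(n)
            x[support] = w[support]        # hard threshold to k terms
    return x
```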

Quantum annealing (QA) is a quantum computing algorithm that works on the principle of Adiabatic Quantum Computation (AQC), and it has shown significant computational advantages in solving combinatorial optimization problems such as vehicle routing problems (VRP) when compared to classical algorithms. This paper presents a QA approach for a variant of VRP known as the multi-depot capacitated vehicle routing problem (MDCVRP). This is an NP-hard optimization problem with real-world applications in the fields of transportation, logistics, and supply chain management. We consider...

10.48550/arxiv.2005.12478 preprint EN other-oa arXiv (Cornell University) 2020-01-01
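
QA solvers consume problems in QUBO form, so a typical first step is encoding assignment constraints as quadratic penalties. The toy builder below handles only a customer-to-vehicle assignment slice of the problem (routing order and the slack-variable capacity encoding are omitted) and is an assumption-laden sketch, not the paper's formulation.

```python
import numpy as np
from itertools import product

def assignment_qubo(dist, A=10.0):
    """Toy QUBO for a customer-to-vehicle assignment (sketch).

    Binary x[c, v] = 1 iff customer c is served by vehicle v;
    dist[c, v] is a travel-cost proxy. The penalty weight A enforces
    exactly one vehicle per customer.
    """
    C, V = dist.shape
    q = lambda c, v: c * V + v                  # flatten (c, v) index
    Q = np.zeros((C * V, C * V))
    for c, v in product(range(C), range(V)):
        Q[q(c, v), q(c, v)] += dist[c, v]       # assignment cost
    # Penalty A * (sum_v x[c, v] - 1)^2, expanded using x^2 = x for
    # binary variables (the constant term is dropped).
    for c in range(C):
        for v in range(V):
            Q[q(c, v), q(c, v)] -= A            # linear part of penalty
            for w in range(v + 1, V):
                Q[q(c, v), q(c, w)] += 2 * A    # pairwise part
    return Q
```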