- Sparse and Compressive Sensing Techniques
- Numerical Methods in Inverse Problems
- Statistical Methods and Inference
- Neural Networks and Applications
- Distributed Sensor Networks and Detection Algorithms
- Face and Expression Recognition
- Stochastic Gradient Optimization Techniques
- Image and Signal Denoising Methods
- Advanced Algorithms and Applications
- Microwave Imaging and Scattering Analysis
- Control Systems and Identification
- Model Reduction and Neural Networks
- Machine Learning and Algorithms
- Fault Detection and Control Systems
- Blind Source Separation Techniques
- Matrix Theory and Algorithms
- Numerical Methods in Engineering
- Network Security and Intrusion Detection
- Bayesian Methods and Mixture Models
- Anomaly Detection Techniques and Applications
- Bayesian Modeling and Causal Inference
- Photoacoustic and Ultrasonic Imaging
- Cardiac Electrophysiology and Arrhythmias
- Flavonoids in Medical Research
- Diverse Scientific and Engineering Research
- Fudan University (2014-2024)
- Hong Kong Baptist University (2024)
- North China Electric Power University (2015)
- Guangxi University (2015)
- KU Leuven (2013-2014)
- iMinds (2014)
- City University of Hong Kong (2009-2011)
- University of Science and Technology of China (2009-2010)
- Suzhou University of Science and Technology (2010)
- Zhongyuan University of Technology (2009)
Within the statistical learning framework, this paper studies the regression model associated with correntropy-induced losses. The correntropy, as a similarity measure, has been frequently employed in signal processing and pattern recognition. Motivated by its empirical successes, this paper aims at presenting some theoretical understanding of maximum correntropy criterion problems. Our focus is two-fold: first, we are concerned with the connections between the correntropy-induced loss and the least squares model; second, we study its convergence property. ...
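For reference, a commonly used form of the correntropy-induced (Welsch-type) loss with scale parameter σ is sketched below; this specific parameterization is an illustrative assumption, not necessarily the exact one analyzed in the paper:

$$
\ell_\sigma(y, f(x)) = \sigma^2 \left( 1 - \exp\!\left( -\frac{(y - f(x))^2}{\sigma^2} \right) \right).
$$

For small residuals the loss behaves like least squares, while for large residuals it saturates at $\sigma^2$, which underlies both its robustness to outliers and its connection to the least squares model.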
The ramp loss is a robust but non-convex loss for classification. Compared with other non-convex losses, a local minimum of the ramp loss can be effectively found. The effectiveness of the local search comes from the piecewise linearity of the ramp loss. Motivated by the fact that the l1-penalty is piecewise linear as well, the l1-penalty is applied to the ramp loss, resulting in a ramp loss linear programming support vector machine (ramp-LPSVM). The proposed ramp-LPSVM is a piecewise linear minimization problem, and the related optimization techniques are applicable. Moreover, the l1-penalty can enhance sparsity. In this paper, the corresponding misclassification error...
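As an illustrative sketch (one standard truncated-hinge parameterization; the paper's exact definition may differ), the ramp loss clips the hinge loss at 1 and can be written as a difference of two convex piecewise-linear functions:

$$
L_{\mathrm{ramp}}(u) = \min\{1, \max\{0, 1 - u\}\} = \max\{0, 1 - u\} - \max\{0, -u\},
$$

where $u = y f(x)$ is the margin. This difference-of-hinges structure is what makes piecewise-linear optimization techniques applicable.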
Noise-inclusive fully unsupervised anomaly detection (FUAD) holds significant practical relevance. Although various methods exist to address this problem, they are limited in both performance and scalability. Our work seeks overcome these obstacles, enabling broader adaptability of (UAD) models FUAD. To achieve this, we introduce the Synergy Scoring Filter (SSFilter), first approach leverage sample-level filtering. SSFilter facilitates end-to-end robust training applies filtering complete...
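A minimal sketch of the sample-level filtering idea is given below. The function name, the fixed keep ratio, and the use of plain anomaly scores are illustrative assumptions, not the actual SSFilter implementation:

```python
import numpy as np

def sample_level_filter(scores: np.ndarray, keep_ratio: float = 0.9) -> np.ndarray:
    """Return indices of samples kept for training: the keep_ratio fraction
    with the lowest anomaly scores (high-score samples are treated as noise)."""
    k = max(1, int(keep_ratio * len(scores)))
    return np.argsort(scores)[:k]

# Demo: filter a noisy batch by per-sample scores from some scoring model.
rng = np.random.default_rng(0)
scores = rng.normal(size=128)   # stand-in for model-produced anomaly scores
kept = sample_level_filter(scores)
print(f"kept {kept.size} of {scores.size} samples")
```

In an end-to-end training loop, such a filter would be applied each epoch so that the model is updated only on the samples it currently judges least anomalous.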
By selecting different filter functions, spectral algorithms can generate various regularization methods for solving statistical inverse problems within the learning-from-samples framework. This paper combines distributed spectral algorithms with Sobolev kernels to tackle the functional linear regression problem. The design and mathematical analysis of the algorithms require only that the covariates are observed at discrete sample points. Furthermore, the hypothesis function spaces are the Sobolev spaces generated by the kernels, optimizing both approximation...
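Two classic filter functions from the spectral-regularization literature illustrate the idea (standard textbook examples, not necessarily the ones analyzed in this paper): Tikhonov regularization and spectral cut-off,

$$
g_\lambda(t) = \frac{1}{t + \lambda} \qquad \text{and} \qquad g_\lambda(t) = \frac{1}{t}\,\mathbf{1}\{t \ge \lambda\},
$$

where each choice of $g_\lambda$, applied to the spectrum of a suitable empirical operator, yields a different regularized estimator.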
Applying the pinball loss in a support vector machine (SVM) classifier results in pin-SVM. The pinball loss is characterized by a parameter τ. Its value is related to the quantile level, and different τ values are suitable for different problems. In this paper, we establish an algorithm to find the entire solution path of pin-SVM with different τ values. The algorithm is based on the fact that the optimal solution is continuous piecewise linear with respect to τ. We also show that the nonnegativity constraint on τ is not necessary, i.e., τ can be extended to negative values. First, in some applications, a negative τ leads to better...
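For reference, the pinball loss is usually written as follows (a standard definition; the exact convention in the paper may differ slightly):

$$
L_\tau(u) = \begin{cases} u, & u \ge 0, \\ -\tau u, & u < 0, \end{cases}
$$

applied in pin-SVM with $u = 1 - y f(x)$; setting $\tau = 0$ recovers the ordinary hinge loss.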
The ranking problem aims at learning real-valued functions to order instances, and it has attracted great interest in statistical learning theory. In this paper, we consider the regularized least squares ranking algorithm within the framework of a reproducing kernel Hilbert space. In particular, we focus on the analysis of the generalization error for this algorithm, and we improve the existing learning rates by virtue of an error decomposition technique from regression and Hoeffding’s decomposition for U-statistics.
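A representative form of the objective (with notation assumed here for illustration) minimizes a pairwise least squares risk over an RKHS $\mathcal{H}_K$ with a norm penalty:

$$
\hat{f} = \arg\min_{f \in \mathcal{H}_K} \; \frac{1}{n(n-1)} \sum_{i \ne j} \big( (y_i - y_j) - (f(x_i) - f(x_j)) \big)^2 + \lambda \lVert f \rVert_K^2 .
$$

Because the pairwise empirical risk is a U-statistic, Hoeffding’s decomposition is the natural tool for its error analysis.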
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and it can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis of the optimization process and empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the gradient descent method cannot ensure the global optimality of its solutions. We aim to fill this gap by developing a novel performance...
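Concretely, a windowed robust loss of this type and the corresponding gradient descent step in the RKHS can be sketched as follows (notation assumed for illustration):

$$
\ell_\sigma(y, f(x)) = \sigma^2\, G\!\left( \frac{(y - f(x))^2}{\sigma^2} \right), \qquad
f_{t+1} = f_t - \frac{\eta_t}{n} \sum_{i=1}^{n} 2\, G'\!\left( \frac{(f_t(x_i) - y_i)^2}{\sigma^2} \right) (f_t(x_i) - y_i)\, K(x_i, \cdot),
$$

where $\eta_t$ is the step size and $K$ is the reproducing kernel; different windowing functions $G$ recover different robust losses.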
The main contribution of this paper is the derivation of non-asymptotic convergence rates for Nyström kernel canonical correlation analysis (CCA) in the setting of statistical learning. Our theoretical results reveal that, under certain conditions, Nyström kernel CCA can achieve a rate comparable to that of standard kernel CCA, while offering significant computational savings. This finding has important implications for practical applications, particularly in scenarios where computational efficiency is crucial. Numerical experiments are...
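A minimal sketch of the Nyström low-rank approximation that underlies such computational savings (generic, assuming a Gaussian kernel and uniform landmark sampling; not the paper's code):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def nystrom(X, m, sigma=1.0, rng=None):
    """Rank-m Nystrom factor F with K ~= F @ F.T, built from m landmarks."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(X), size=m, replace=False)   # landmark subset
    K_nm = gaussian_kernel(X, X[idx], sigma)          # n x m cross-kernel
    K_mm = gaussian_kernel(X[idx], X[idx], sigma)     # m x m landmark kernel
    w, V = np.linalg.eigh(K_mm + 1e-10 * np.eye(m))   # jitter for stability
    return K_nm @ V @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-10, None)))

X = np.random.default_rng(1).normal(size=(500, 5))
F = nystrom(X, m=50)
print(F.shape)  # (500, 50): the full 500x500 kernel never materializes
```

Kernel CCA built on such factors works with m-dimensional features instead of n x n kernel matrices, which is the source of the speedup the rates are compared against.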
In recent years there has been massive interest in precision medicine, which aims to tailor treatment plans to the individual characteristics of each patient. This paper studies the estimation of individualized treatment rules (ITR) based on functional predictors such as images or spectra. We consider a reproducing kernel Hilbert space (RKHS) approach to learn the optimal ITR that maximizes the expected clinical outcome. The algorithm can be conveniently implemented although it involves infinite-dimensional functional data. We provide...
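In the standard formulation (with notation assumed here for illustration), for a functional covariate $X$, binary treatment $A \in \{-1, +1\}$, and clinical outcome $Y$, the optimal rule assigns each patient the treatment with the larger conditional mean outcome:

$$
d^*(x) = \operatorname{sign}\big( \mathbb{E}[Y \mid X = x, A = +1] - \mathbb{E}[Y \mid X = x, A = -1] \big).
$$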
We investigate distributed learning with a coefficient-based regularization scheme under the framework of kernel regression methods. Compared with classical kernel ridge regression (KRR), the algorithm under consideration does not require the kernel function to be positive semi-definite and hence provides a simple paradigm for designing indefinite kernel methods. The approach partitions a massive data set into several disjoint subsets, and then produces a global estimator by taking an average of the local estimators on each subset. Easily exercisable partitioning and performing the algorithm on each subset in parallel...
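A minimal sketch of this divide-and-conquer averaging scheme (the function names, the Gaussian kernel, and the ridge-style penalty on the coefficient vector are illustrative assumptions; the scheme itself does not require a positive semi-definite kernel):

```python
import numpy as np

def local_coefficients(X, y, lam, sigma=1.0):
    """Fit coefficient-based kernel regression on one subset by solving
    min (1/n)||K a - y||^2 + lam ||a||^2 via its normal equations."""
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
    n = len(X)
    alpha = np.linalg.solve(K.T @ K / n + lam * np.eye(n), K.T @ y / n)
    return X, alpha

def distributed_predict(subsets, x_new, lam=1e-2, sigma=1.0):
    """Average the local estimators' predictions at x_new."""
    preds = []
    for X, y in subsets:
        Xs, alpha = local_coefficients(X, y, lam, sigma)
        k = np.exp(-((x_new - Xs) ** 2).sum(-1) / (2 * sigma**2))
        preds.append(k @ alpha)
    return np.mean(preds)

rng = np.random.default_rng(2)
subsets = [(rng.normal(size=(100, 3)), rng.normal(size=100)) for _ in range(4)]
print(distributed_predict(subsets, rng.normal(size=3)))
```

Each local solve touches only its own subset, so the subsets can be processed on separate machines and only the local predictions (or coefficient vectors) need to be combined.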