- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Advanced Malware Detection Techniques
- Stochastic Gradient Optimization Techniques
- Topic Modeling
- Privacy-Preserving Technologies in Data
- Complex Network Analysis Techniques
- Natural Language Processing Techniques
- Software Testing and Debugging Techniques
- Advanced Wireless Network Optimization
- Opinion Dynamics and Social Influence
- Advanced Neural Network Applications
- Nonlinear Dynamics and Pattern Formation
- Reinforcement Learning in Robotics
- Network Traffic and Congestion Control
- Modular Robots and Swarm Intelligence
- Domain Adaptation and Few-Shot Learning
- Wireless Communication Networks Research
- Micro and Nano Robotics
- Advanced Ceramic Materials Synthesis
- Fuzzy Systems and Optimization
- Advanced Memory and Neural Computing
- Supply Chain Resilience and Risk Management
- Transportation and Mobility Innovations
- Advanced Text Analysis Techniques
Harbin Institute of Technology
2020-2025
Hong Kong University of Science and Technology
2000-2024
University of Hong Kong
2000-2024
Tsinghua Sichuan Energy Internet Research Institute
2024
Tsinghua University
2019-2024
Beihang University
2018-2024
Yunnan University
2024
Aston University
2020-2023
Inner Mongolia University
2023
University of Illinois Urbana-Champaign
2019-2022
As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by these algorithms. In this paper, we perform the first systematic study of poisoning attacks and their countermeasures for linear regression models. In poisoning attacks, attackers deliberately influence the training data to manipulate the predictions of a learned model. We propose a theoretically-grounded optimization framework specifically designed for linear regression, demonstrate its effectiveness on a range of datasets, and also introduce a fast...
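As a toy illustration of data poisoning against regression, the sketch below fits a closed-form ridge regression, injects a handful of adversarial points at the edge of the input range, and measures how far the learned slope moves. The data, the attack points, and the `fit_ridge` helper are all invented for this example; the paper's optimization-based attack selects poison points far more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: y = 2x + small noise
X = rng.uniform(0, 1, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.05, size=50)

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression: w = (A^T A + lam I)^-1 A^T y."""
    A = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

w_clean = fit_ridge(X, y)       # [slope, intercept], slope ~ 2

# Naive poisoning: a few points with inverted labels placed at the
# boundary of the input range drag the fitted slope downward.
X_poison = np.vstack([X, np.full((5, 1), 1.0)])
y_poison = np.concatenate([y, np.full(5, -2.0)])
w_poisoned = fit_ridge(X_poison, y_poison)

slope_shift = abs(w_clean[0] - w_poisoned[0])
```

Even this crude attack, controlling only 5 of 55 training points, moves the slope substantially; the optimization framework in the paper searches for the poison points that maximize such damage.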
Recent studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems use DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate...
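The small-magnitude perturbations mentioned above can be illustrated with the classic fast-gradient-sign construction on a linear classifier standing in for a trained network; this is a generic sketch, not RP2 itself, which additionally optimizes for robustness under physical-world distortions. All names and the (deliberately large) `eps` are chosen for this example only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed linear "network": score = w . x + b, class 1 if p > 0.5.
w = rng.normal(size=10)
b = 0.0

def predict(x):
    return sigmoid(w @ x + b)

# A confidently classified input (true label 1): a unit vector along w.
x = w / np.linalg.norm(w)

# FGSM: step each coordinate in the sign of the loss gradient w.r.t. x.
# For logistic loss with label y = 1, d(loss)/dx = (p - 1) * w.
eps = 1.0   # exaggerated for a clear flip; real attacks use tiny eps
grad = (predict(x) - 1.0) * w
x_adv = x + eps * np.sign(grad)

p_clean, p_adv = predict(x), predict(x_adv)
```

Because the step follows the gradient sign, the adversarial confidence `p_adv` always drops below `p_clean`, and with this `eps` the predicted class flips.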
The past decade has seen the great potential of applying deep neural network (DNN) based software to safety-critical scenarios, such as autonomous driving. Similar to traditional software, DNNs could exhibit incorrect behaviors, caused by hidden defects, leading to severe accidents and losses. In this paper, we propose DeepHunter, a coverage-guided fuzz testing framework for detecting defects of general-purpose DNNs. To this end, we first design a metamorphic mutation strategy to generate new semantically preserved...
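The coverage-guided loop at the heart of such fuzzers can be sketched on a toy integer program: mutate inputs drawn from a seed queue and keep only mutants that exercise previously unseen coverage. DeepHunter applies this loop to DNN inputs with semantics-preserving mutations and neuron-level coverage criteria; the program, mutation operators, and coverage buckets below are invented stand-ins.

```python
import random

random.seed(0)

def program_under_test(x):
    """Toy program; returns the set of coverage items it hits."""
    covered = {("parity", x % 2), ("bucket", max(min(x // 10, 20), -1))}
    if x > 100 and x % 7 == 0:
        covered.add("deep")          # a hard-to-reach behavior
    return covered

def mutate(x):
    """Cheap mutations in the spirit of byte-level fuzzers."""
    op = random.choice(["add", "mul", "flip"])
    if op == "add":
        return x + random.randint(-10, 10)
    if op == "mul":
        return x * random.randint(1, 3)
    return x ^ (1 << random.randint(0, 8))

# Coverage-guided loop: a mutant joins the seed queue only when it
# exercises coverage no earlier input has reached.
seeds = [1]
global_coverage = set()
for _ in range(2000):
    child = mutate(random.choice(seeds))
    cov = program_under_test(child)
    if not cov <= global_coverage:   # new coverage -> keep this input
        global_coverage |= cov
        seeds.append(child)
```

Retaining inputs with new coverage lets the fuzzer climb toward deep behaviors (large values, rare branches) that blind random testing from the initial seed would almost never reach.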
Deep Learning (DL) is vulnerable to out-of-distribution and adversarial examples resulting in incorrect outputs. To make DL more robust, several posthoc (or runtime) anomaly detection techniques that detect (and discard) these anomalous samples have been proposed in the recent past. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection for DL based applications. We provide a taxonomy of existing techniques based on their underlying assumptions and adopted approaches, discuss the techniques in each category, and assess their relative...
Deep learning (DL) models are inherently vulnerable to adversarial examples, maliciously crafted inputs that trigger target DL models to misbehave, which significantly hinders the application of DL in security-sensitive domains. Intensive research on adversarial attacks and defenses has led to an arms race between adversaries and defenders. Such a plethora of emerging attacks and defenses raises many questions: Which attacks are more evasive, preprocessing-proof, or transferable? Which defenses are more effective, utility-preserving, or general? Are ensembles of multiple defenses more robust than individual defenses?...
In many practical applications, it is often difficult and expensive to obtain enough large-scale labeled data to train deep neural networks to their full capability. Therefore, transferring the learned knowledge from a separate, labeled source domain to an unlabeled or sparsely labeled target domain becomes an appealing alternative. However, direct transfer often results in significant performance decay due to domain shift. Domain adaptation (DA) addresses this problem by minimizing the impact of the shift between the source and target domains. Multi-source domain adaptation (MDA)...
Existing neural network-based autonomous systems have been shown to be vulnerable to adversarial attacks; therefore, a sophisticated evaluation of their robustness is of great importance. However, evaluating robustness only under worst-case scenarios based on known attacks is not comprehensive, not to mention that some of those attacks rarely occur in the real world. Also, the distribution of safety-critical data is usually multimodal, while most traditional attack and evaluation methods focus on a single modality. To solve the above challenges, we propose...
Although machine learning (ML) techniques are increasingly popular in water resource studies, they are not extensively utilized in modeling snowmelt. In this study, we developed a model based on a deep long short-term memory (LSTM) network for snowmelt-driven discharge in a Himalayan basin. For comparison, we also developed nonlinear autoregressive exogenous (NARX), Gaussian process regression (GPR), and support vector regression (SVR) models. The snow cover area derived from moderate resolution imaging spectroradiometer (MODIS) images, along...
Federated learning has a variety of applications in multiple domains by utilizing private training data stored on different devices. However, the aggregation process in federated learning is highly vulnerable to adversarial attacks, so that the global model may behave abnormally under attack. To tackle this challenge, we present a novel aggregation algorithm with residual-based reweighting to defend federated learning. Our algorithm combines a repeated median regression scheme with iteratively reweighted least squares. Our experiments show that our...
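The reweighting idea can be illustrated on a one-dimensional aggregation problem: weight each client's reported value inversely to its residual from a robust estimate, then iterate. This is a simplified stand-in for the paper's method (which combines repeated-median regression with IRLS across all model parameters); the client values and the `reweighted_aggregate` helper are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ten honest clients report a parameter near the true value 1.0;
# two colluding attackers report wildly inflated updates.
updates = np.concatenate([
    1.0 + rng.normal(0, 0.05, size=10),   # honest clients
    np.array([8.0, 9.0]),                 # malicious clients
])

def reweighted_aggregate(x, n_iter=10, eps=1e-6):
    """IRLS-style robust location estimate.

    Start from the median, then repeatedly down-weight points with
    large residuals; outliers end up with near-zero influence.
    """
    est = np.median(x)                    # robust initialization
    for _ in range(n_iter):
        resid = np.abs(x - est)
        w = 1.0 / (resid + eps)           # big residual -> tiny weight
        est = np.sum(w * x) / np.sum(w)
    return est

naive = updates.mean()                    # dragged up by the attackers
robust = reweighted_aggregate(updates)    # stays near 1.0
```

With two of twelve clients malicious, the plain average is pulled well above 2, while the reweighted estimate remains close to the honest value.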
Accurate classification of electrocardiogram (ECG) signals is crucial for the automatic diagnosis of heart diseases. However, existing ECG classification methods often require complex preprocessing and denoising operations, and traditional convolutional neural network (CNN)-based methods struggle to capture relationships among high-level time-series features.
In data parallel frameworks such as MapReduce and Spark, a coflow represents a set of network flows used to transfer intermediate data between successive computation stages of a job. The completion time of a job is then determined by the collective behavior of the coflow, rather than any individual flow within it, and is influenced by the amount of bandwidth allocated to it. Different jobs in a shared cluster may have different degrees of sensitivity to their completion times, modeled by their respective utility functions. In this paper, we focus on the design...
Gradient-based meta-learning algorithms have gained popularity for their ability to train models on new tasks using limited data. Empirical observations indicate that such algorithms are able to learn a shared representation across tasks, which is regarded as a key factor in their success. However, an in-depth theoretical understanding of the learning dynamics and of the origin of this shared representation remains underdeveloped. In this work, we investigate the meta-learning dynamics of nonlinear two-layer neural networks trained in a streaming teacher-student scenario. Through the lens...
We present a new approach for investigating the Markovian to non-Markovian transition in quantum aggregates strongly coupled to a vibrational bath through the analysis of linear absorption spectra. Utilizing hierarchical algebraic equations in the frequency domain, we elucidate how these spectra can effectively reveal transitions between the Markovian and non-Markovian regimes, driven by the complex interplay of dissipation, aggregate–bath coupling, and intra-aggregate dipole–dipole interactions. Our results demonstrate that reduced dissipation...
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or make it difficult or costly to compute adversarial examples that exploit them. In our work, we explore a new "honeypot" approach to protect DNN models. We intentionally inject trapdoors, honeypot weaknesses in the classification manifold that attract attackers searching for adversarial examples. Attackers' optimization algorithms gravitate toward trapdoors, leading them to produce attacks similar to trapdoors in the feature...
Recent success of deep neural networks (DNNs) hinges on the availability of large-scale datasets; however, training on such datasets often poses privacy risks for sensitive information. In this paper, we aim to explore the power of generative models and gradient sparsity, and propose a scalable privacy-preserving generative model, DataLens, which is able to generate synthetic data in a differentially private (DP) way given sensitive input data. Thus, it becomes possible to train models for different down-stream tasks with the generated data while protecting...
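Gradient sparsity helps private aggregation because compressing each contributor's gradient to a few signed entries bounds its sensitivity, so a modest amount of Gaussian noise suffices for privacy. The sketch below shows that mechanism in miniature; the `topk_sign` and `private_aggregate` helpers, the teacher count, and the noise scale are all hypothetical simplifications, not the DataLens algorithm or its privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(3)

def topk_sign(grad, k):
    """Keep only the k largest-magnitude entries, reduced to signs."""
    out = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    out[idx] = np.sign(grad[idx])
    return out

def private_aggregate(grads, k, sigma):
    """Sum the sparsified sign votes, then add Gaussian noise.

    Each voter contributes at most k entries of magnitude 1, which
    bounds per-voter sensitivity and makes the noisy sum DP-style.
    """
    votes = sum(topk_sign(g, k) for g in grads)
    return votes + rng.normal(0, sigma, size=votes.shape)

# 30 "teachers" agree strongly on coordinate 0, disagree elsewhere.
d, n = 8, 30
grads = [np.concatenate(([5.0], rng.normal(0, 1, d - 1)))
         for _ in range(n)]
agg = private_aggregate(grads, k=2, sigma=2.0)
```

The shared signal (coordinate 0) survives both the aggressive compression and the added noise, while the idiosyncratic coordinates wash out, which is the intuition behind pairing sparsification with DP noise.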
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding imperceptible perturbations to inputs. Recently, different attacks and strategies have been proposed, but how to generate adversarial examples that are perceptually realistic, and to do so more efficiently, remains unsolved. This paper proposes a novel framework called Attack-Inspired GAN (AI-GAN), where a generator, a discriminator, and an attacker are trained jointly. Once trained, it can generate adversarial perturbations given input images and target classes. Through extensive experiments...
Maintaining the stability of the synchronization state is crucial for the functioning of many natural and artificial systems. In this study, we develop methods to optimize the Kuramoto model by minimizing the dominant Lyapunov exponent. Using the recently proposed cut-set space approximation of the steady states, we greatly simplify the objective function, and further derive its gradient and Hessian with respect to the natural frequencies, which leads to an efficient algorithm based on the quasi-Newton method. The optimized systems are demonstrated to achieve better...
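For readers unfamiliar with the model, a minimal simulation shows what synchronization stability means here: the all-to-all Kuramoto system with identical natural frequencies locks into phase coherence (order parameter r near 1), while a wide frequency spread below the critical coupling stays incoherent. This toy Euler integration is illustrative only and has nothing to do with the paper's cut-set approximation or Lyapunov-exponent optimization.

```python
import numpy as np

def simulate_kuramoto(omega, K=2.0, dt=0.01, steps=2000, seed=4):
    """Euler integration of the all-to-all Kuramoto model:
        dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    Returns the final order parameter r = |mean(exp(i*theta))| in [0, 1]."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    for _ in range(steps):
        # pairwise phase differences: entry [i, j] = theta_j - theta_i
        coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
        theta = theta + dt * (omega + K * coupling)
    return np.abs(np.exp(1j * theta).mean())

n = 20
r_homog = simulate_kuramoto(np.zeros(n))             # identical frequencies
r_spread = simulate_kuramoto(np.linspace(-3, 3, n))  # wide spread, weak K
```

Tuning the natural frequencies to maximize such coherence (and the stability of the locked state) is exactly the kind of objective the paper's gradient and Hessian derivations make tractable.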
In company with the data explosion over the past decade, deep neural network (DNN) based software has experienced an unprecedented leap and is becoming the key driving force of many novel industrial applications, including safety-critical scenarios such as autonomous driving. Despite the great success achieved in various human intelligence tasks, similar to traditional software, DNNs could also exhibit incorrect behaviors caused by hidden defects, causing severe accidents and losses. In this paper, we propose...