- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Advanced Malware Detection Techniques
- Topic Modeling
- Network Security and Intrusion Detection
- Domain Adaptation and Few-Shot Learning
- Advanced Neural Network Applications
- Privacy-Preserving Technologies in Data
- Software Testing and Debugging Techniques
- Text and Document Classification Technologies
- Multimodal Machine Learning Applications
- Hate Speech and Cyberbullying Detection
- Advanced Photonic Communication Systems
- Wireless Signal Modulation Classification
- Machine Learning and Algorithms
- Cardiac Arrest and Resuscitation
- Internet Traffic Analysis and Secure E-voting
- Explainable Artificial Intelligence (XAI)
- Gaussian Processes and Bayesian Inference
- Brain Tumor Detection and Classification
- Simulation Techniques and Applications
- Chaos-based Image/Signal Encryption
- Sentiment Analysis and Opinion Mining
- Terrorism, Counterterrorism, and Political Violence
- Advanced Steganography and Watermarking Techniques
Tencent (China)
2024
Hong Kong University of Science and Technology
2024
University of Hong Kong
2024
Macquarie University
2021-2024
University of Illinois Urbana-Champaign
2018-2023
Nanjing University of Science and Technology
2021
University of California, Berkeley
2017-2020
China Southern Power Grid (China)
2020
Tsinghua University
2020
Beihang University
2018-2019
Deep learning models have achieved high performance on many tasks and have thus been applied to security-critical scenarios. For example, deep learning-based face recognition systems are used to authenticate users before they can access security-sensitive applications such as payment apps. Such usages of deep learning systems provide adversaries with sufficient incentives to perform attacks against these systems for their adversarial purposes. In this work, we consider a new type of attack, called a backdoor attack, where the attacker's goal is to create a backdoor into...
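A minimal sketch of the data-poisoning step behind a typical backdoor attack, assuming image tensors in [0, 1]; the patch trigger, poison rate, and target label below are illustrative choices, not the setup considered in the paper.

```python
import torch

def poison_dataset(images, labels, target_label=0, poison_rate=0.05, patch_size=3):
    """Stamp a small white patch into a random subset of images and relabel them."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(len(images) * poison_rate)
    idx = torch.randperm(len(images))[:n_poison]
    # Place the trigger in the bottom-right corner of each selected image.
    images[idx, :, -patch_size:, -patch_size:] = 1.0
    labels[idx] = target_label  # attacker-chosen target class
    return images, labels
```

A model trained on the poisoned set behaves normally on clean inputs but tends to predict the target label whenever the patch appears.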
Deep learning (DL) has achieved remarkable progress over the past decade and has been widely applied to many industry domains. However, the robustness of DL systems has recently become a great concern, since a minor perturbation on the input might cause a malfunction. These issues could potentially result in severe consequences when a DL system is deployed to safety-critical applications, and they hinder the real-world deployment of such systems. Testing techniques enable system evaluation and the detection of vulnerable issues at an early stage. The main...
Deep neural networks (DNNs) are known to be vulnerable to backdoor attacks, a training-time attack that injects a trigger pattern into a small proportion of the training data so as to control the model's prediction at test time. Backdoor attacks are notably dangerous since they do not affect the model's performance on clean examples, yet can fool the model into making incorrect predictions whenever the trigger appears during testing. In this paper, we propose a novel defense framework, Neural Attention Distillation (NAD), to erase backdoor triggers from backdoored DNNs. NAD utilizes...
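The truncated sentence suggests NAD distills intermediate-layer attention from a teacher into the backdoored student. Below is a rough per-layer attention-distillation loss in PyTorch, in the spirit of attention transfer; the exponent, the normalization, and the layer choice are assumptions, not NAD's exact formulation.

```python
import torch
import torch.nn.functional as F

def attention_map(feat, p=2):
    """Collapse a (N, C, H, W) feature map into a normalized (N, H*W) attention map."""
    a = feat.abs().pow(p).mean(dim=1)        # aggregate magnitude over channels
    return F.normalize(a.flatten(1), dim=1)  # L2-normalize per sample

def distillation_loss(student_feat, teacher_feat):
    """Distance between student and teacher attention maps at one layer."""
    diff = attention_map(student_feat) - attention_map(teacher_feat)
    return diff.pow(2).sum(dim=1).mean()
```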
Training neural networks with verifiable robustness guarantees is challenging. Several existing approaches utilize linear relaxation based bounds on the network output under input perturbation, but they can slow down training by a factor of hundreds depending on the underlying architecture. Meanwhile, interval bound propagation (IBP) based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since its bounds are much looser, especially at the beginning of training. In this paper,...
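For reference, a hedged sketch of how interval bound propagation pushes elementwise box bounds through an affine layer followed by a ReLU; the input shape and perturbation radius at the end are purely illustrative.

```python
import torch
import torch.nn.functional as F

def ibp_linear(l, u, weight, bias):
    """Propagate elementwise lower/upper bounds through an affine layer."""
    mid, rad = (u + l) / 2, (u - l) / 2       # center and radius of the box
    new_mid = F.linear(mid, weight, bias)
    new_rad = F.linear(rad, weight.abs())     # radius grows by |W| @ rad
    return new_mid - new_rad, new_mid + new_rad

def ibp_relu(l, u):
    """ReLU is monotone, so the bounds pass through elementwise."""
    return l.clamp(min=0), u.clamp(min=0)

# Input bounds under an L-infinity perturbation of radius eps.
x, eps = torch.randn(8, 784), 8 / 255
l, u = x - eps, x + eps
```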
Recent studies have highlighted adversarial examples as a ubiquitous threat to different neural network models and many downstream applications. Nonetheless, as unique data properties have inspired distinct and powerful learning principles, this paper aims to explore their potential towards mitigating adversarial inputs. In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminative power against adversarial examples. Tested on automatic speech recognition (ASR) tasks and three recent...
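A hedged sketch of the temporal-dependency idea: transcribe a prefix of the waveform and compare it against the corresponding prefix of the full transcription, since benign audio tends to stay consistent while adversarial audio often diverges. The `transcribe` callable and the word-overlap score are placeholders, not the metrics used in the paper.

```python
def td_consistency(waveform, transcribe, k=0.5):
    """Word-level agreement between the transcription of the first k-portion of the
    audio and the matching prefix of the full transcription (1.0 = fully consistent)."""
    full = transcribe(waveform).split()
    prefix = transcribe(waveform[: int(len(waveform) * k)]).split()
    head = full[: len(prefix)]
    overlap = sum(a == b for a, b in zip(prefix, head))
    return overlap / max(len(prefix), len(head), 1)
```

A low consistency score can then be thresholded to flag a potentially adversarial input.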
Deep learning (DL) has achieved remarkable progress over the past decade and has been widely applied to many safety-critical applications. However, the robustness of DL systems has recently received great concern; for example, adversarial examples against computer vision systems could potentially result in severe consequences. Adopting testing techniques can help evaluate a DL system and therefore detect vulnerabilities at an early stage. The main challenge is that its runtime state space is too large: if we view...
Adversarial attacks against natural language processing systems, which perform seemingly innocuous modifications to inputs, can induce arbitrary mistakes in the target models. Though they have raised great concerns, such adversarial attacks can also be leveraged to estimate the robustness of NLP models. Compared with adversarial example generation in continuous data domains (e.g., images), generating adversarial text that preserves the original meaning is challenging since the text space is discrete and non-differentiable. To handle these challenges, we propose a...
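For orientation only, a generic greedy word-substitution attack, which is not the method proposed in the paper: `label_prob` (the victim model's probability of the true label for a text) and `synonyms` (a thesaurus-style lookup) are hypothetical helpers.

```python
def greedy_substitute(words, label_prob, synonyms, max_changes=3):
    """Greedily replace words with synonyms that most reduce the true-label probability."""
    words = list(words)
    for _ in range(max_changes):
        base, best = label_prob(" ".join(words)), None
        for i, w in enumerate(words):
            for cand in synonyms(w):
                drop = base - label_prob(" ".join(words[:i] + [cand] + words[i + 1:]))
                if best is None or drop > best[0]:
                    best = (drop, i, cand)
        if best is None or best[0] <= 0:   # stop once no substitution helps
            break
        _, i, cand = best
        words[i] = cand
    return " ".join(words)
```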
Training deep neural networks from scratch could be computationally expensive and requires a lot of training data. Recent work has explored different watermarking techniques to protect pre-trained deep neural networks from potential copyright infringements. However, these techniques are vulnerable to watermark removal attacks. In this work, we propose REFIT, a unified watermark removal framework based on fine-tuning, which does not rely on knowledge of the watermarks and is effective against a wide range of watermarking schemes. In particular, we conduct a comprehensive study of a realistic...
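The baseline idea behind a fine-tuning-based removal attack can be sketched as below, assuming a pre-trained `model` and a small loader of clean data with an intentionally large learning rate; the full framework combines fine-tuning with further components that the truncated abstract does not show, so treat this purely as a sketch.

```python
import torch

def finetune(model, loader, epochs=10, lr=0.05):
    """Fine-tune on clean data so that watermark-specific behavior is overwritten."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```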
The widespread deployment of wireless sensor networks and the burgeoning internet of things (IoT) are enabling devices to be connected in wider and denser ecosystems for myriad applications, making security of paramount importance. Radio Frequency (RF) fingerprinting has the potential to enhance security, and with the increasing popularity of deep learning, RF fingerprinting approaches have attracted increased attention. In this paper we propose a graphical deep learning approach for RF fingerprinting which can be paired with either a convolutional or a dense neural network...
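One plausible "graphical" representation, offered only as a sketch and not necessarily the paper's construction: binning complex IQ samples into a 2D histogram (a constellation-style image) that a CNN can consume. The bin count and value range are placeholders.

```python
import numpy as np

def iq_to_image(iq_samples, bins=64, lim=1.5):
    """Turn a 1D array of complex IQ samples into a normalized (bins, bins) image."""
    hist, _, _ = np.histogram2d(
        iq_samples.real, iq_samples.imag,
        bins=bins, range=[[-lim, lim], [-lim, lim]],
    )
    return (hist / (hist.max() + 1e-9)).astype(np.float32)
```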
The emergence of the internet of things (IoT) as a global infrastructure of interconnected heterogeneous wireless devices and sensors is opening new opportunities in myriad applications. This growing pervasiveness of IoT devices, however, is leading to concerns regarding security and privacy. Radio Frequency (RF) fingerprinting techniques operating at the physical layer can be used to provide an additional layer of protection and ensure trustworthy communications between devices to address these concerns. We present a graphical...
Graph Neural Networks (GNNs) have been widely applied to different tasks such as bioinformatics, drug design, and social networks. However, recent studies have shown that GNNs are vulnerable to adversarial attacks which aim to mislead the node (or subgraph) classification prediction by adding subtle perturbations. In particular, several attacks against GNNs have been proposed by adding/deleting a small amount of edges, which has caused serious security concerns. Detecting these attacks is challenging due to the small magnitude of the perturbation and the discrete nature...
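As a point of comparison rather than the paper's detector, one simple heuristic flags edges whose endpoints have very dissimilar binary features, since attack edges often connect nodes with little in common; `X` is a binary node-feature matrix and `edges` a list of (u, v) pairs.

```python
import numpy as np

def suspicious_edges(X, edges, threshold=0.0):
    """Flag edges whose endpoint features have Jaccard similarity at or below threshold."""
    flagged = []
    for u, v in edges:
        inter = np.minimum(X[u], X[v]).sum()
        union = np.maximum(X[u], X[v]).sum()
        jaccard = inter / union if union > 0 else 0.0
        if jaccard <= threshold:
            flagged.append((u, v))
    return flagged
```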
Domain generation algorithms (DGAs) are used to generate a large number of pseudo-random domain names. Malware can connect to its command & control (C2) server through these domains, which poses a large threat to network security. Most previous research is based on fixed sets of domains or manual feature extraction. To tackle this issue, current studies pay more attention to deep learning, such as LSTM. However, it is difficult to learn a reasonable representation when the domain name is long. In this paper, we propose an LSTM model...
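A minimal character-level LSTM classifier for domain names, sketched in PyTorch; the vocabulary size, embedding width, and hidden size are illustrative and not the configuration proposed in the paper.

```python
import torch
import torch.nn as nn

class DGAClassifier(nn.Module):
    """Binary classifier: benign vs. DGA-generated domain names."""
    def __init__(self, vocab_size=40, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)

    def forward(self, x):            # x: (batch, seq_len) of character ids
        emb = self.embed(x)
        _, (h, _) = self.lstm(emb)
        return self.fc(h[-1])        # logits from the last hidden state
```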
The existing studies in cross-language information retrieval (CLIR) mostly rely on general text representation models (e.g., vector space model or latent semantic analysis). These models are not optimized for the target retrieval task. In this paper, we follow the success of neural representation in natural language processing (NLP) and develop a novel text representation model based on adversarial learning, which seeks a task-specific embedding space for CLIR. Adversarial learning is implemented as an interplay between a generator process and a discriminator process. In order to...
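A rough sketch of the generator/discriminator interplay for pushing two languages into a shared embedding space; the discriminator architecture, dimensions, and loss pairing are placeholder assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn

embed_dim = 128
discriminator = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(src_emb, tgt_emb):
    """Discriminator learns to tell which language an embedding came from."""
    logits = torch.cat([discriminator(src_emb), discriminator(tgt_emb)])
    labels = torch.cat([torch.ones(len(src_emb), 1), torch.zeros(len(tgt_emb), 1)])
    return bce(logits, labels)

def generator_loss(src_emb, tgt_emb):
    """The encoders (generator) are updated to make the two languages indistinguishable."""
    logits = torch.cat([discriminator(src_emb), discriminator(tgt_emb)])
    flipped = torch.cat([torch.zeros(len(src_emb), 1), torch.ones(len(tgt_emb), 1)])
    return bce(logits, flipped)
```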
Agriculture is not only China's primary industry but also the foundation of the national economy. The amount and quality of agricultural products are inextricably linked to people's daily life. The outbreak of pests and diseases in the field has a great impact on production, so prevention and control are very important. In order to identify crop pests, this paper combines emerging machine learning techniques with a large number of pest and disease pictures, and introduces two kinds of convolutional neural networks: AlexNet...
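A minimal transfer-learning sketch using torchvision's AlexNet (the `weights` API is available in recent torchvision releases); the number of pest and disease classes is a placeholder.

```python
import torch.nn as nn
from torchvision import models

num_classes = 10  # placeholder for the actual number of pest/disease classes
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, num_classes)   # replace the final layer

# Optionally freeze the convolutional features and train only the classifier head.
for p in model.features.parameters():
    p.requires_grad = False
```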
The ability to generalize under distributional shifts is essential for reliable machine learning, while models optimized with empirical risk minimization usually fail on non-$i.i.d.$ testing data. Recently, invariant learning methods for out-of-distribution (OOD) generalization have proposed to find causally invariant relationships across multiple environments. However, modern datasets are frequently multi-sourced without explicit source labels, rendering many invariant learning methods inapplicable. In this paper, we propose Kernelized Heterogeneous Risk...
In recent years, malicious programs have seriously threatened the security of information systems. Because of the particularity, complexity and vulnerability of the power system, such programs are difficult to detect and kill with traditional anti-virus software. To solve the above problems, this paper proposes a malicious behavior detection method based on deep learning, which can identify attack types according to the activities of software behaviors. In this paper, a hybrid deep learning structure combining a convolutional neural network (CNN) and long short-term memory (LSTM)...
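A hedged sketch of a hybrid CNN + LSTM classifier over sequences of behavior feature vectors; the feature dimension, layer widths, and number of attack types are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1D convolutions extract local patterns; the LSTM models their temporal order."""
    def __init__(self, feat_dim=64, num_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(128, 64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):                  # x: (batch, seq_len, feat_dim)
        z = self.conv(x.transpose(1, 2))   # Conv1d expects (batch, feat, seq)
        _, (h, _) = self.lstm(z.transpose(1, 2))
        return self.fc(h[-1])
```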