- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Software Testing and Debugging Techniques
- Advanced Malware Detection Techniques
- Ethics and Social Impacts of AI
- Advanced Text Analysis Techniques
- Explainable Artificial Intelligence (XAI)
- Image Processing and 3D Reconstruction
- Integrated Circuits and Semiconductor Failure Analysis
- Text and Document Classification Technologies
- Topic Modeling
- Geochemistry and Geologic Mapping
- Advanced Neural Network Applications
- Physical Unclonable Functions (PUFs) and Hardware Security
- Teleoperation and Haptic Systems
- Software Engineering Research
- Energy Efficient Wireless Sensor Networks
- Advanced Data Storage Technologies
- Augmented Reality Applications
- Differential Equations and Numerical Methods
- Advanced Steganography and Watermarking Techniques
- Virtual Reality Applications and Impacts
- Cryptography and Data Security
- Software Reliability and Analysis Research
- Natural Language Processing Techniques
- Zhejiang University (2018-2023)
- Singapore Management University (2023)
- Shandong University of Science and Technology (2021-2022)
- Zhejiang University of Science and Technology (2020)
- Jilin University (2019)
- Hebei Normal University (2017)
- Jiangsu Provincial Institute of Geological Survey (2013)
- Xiamen University (2006)
Deep neural networks (DNN) have been shown to be useful in a wide range of applications. However, they are also known to be vulnerable to adversarial samples. By transforming a normal sample with some carefully crafted, human-imperceptible perturbations, even highly accurate DNN can be made to produce wrong decisions. Multiple defense mechanisms have been proposed which aim to hinder the generation of such adversarial samples, yet recent work shows that most of them are ineffective. In this work, we propose an alternative approach to detect adversarial samples at runtime. Our main...
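The abstract is cut off before the detection mechanism itself, so the sketch below only illustrates one common family of runtime detectors, flagging inputs whose predicted label is unusually unstable under small random mutations; it is not necessarily this paper's method. `model.predict` is an assumed Keras-style interface returning class probabilities.

```python
import numpy as np

def label_change_rate(model, x, n_mutants=100, sigma=0.02, rng=None):
    """Estimate how often the predicted label flips under small random
    input mutations; adversarial samples tend to sit close to decision
    boundaries, so their labels flip more often than benign ones."""
    rng = rng or np.random.default_rng(0)
    base = model.predict(x[None])[0].argmax()
    flips = 0
    for _ in range(n_mutants):
        mutant = np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0)
        flips += int(model.predict(mutant[None])[0].argmax() != base)
    return flips / n_mutants

def is_adversarial(model, x, threshold=0.1):
    # flag the input as (likely) adversarial if the flip rate exceeds a
    # threshold calibrated beforehand on benign validation data
    return label_change_rate(model, x) > threshold
```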
Although deep neural networks (DNNs) have demonstrated astonishing performance in many applications, there are still concerns about their dependability. One desirable property of DNNs for applications with societal impact is fairness (i.e., non-discrimination). In this work, we propose a scalable approach for searching for individual discriminatory instances of a DNN. Compared with state-of-the-art methods, ours only employs lightweight procedures like gradient computation and clustering, which makes it...
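As a rough illustration of the lightweight gradient computation the abstract mentions, here is a minimal gradient-guided search for an individual discriminatory instance, i.e., an input whose prediction changes when only a protected attribute is swapped. The `model`, `protected_idx` and the perturbation rule are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def find_discriminatory(model, x, protected_idx, protected_values,
                        steps=10, lr=0.1):
    """Gradient-guided search for an input whose prediction changes when
    only the protected attribute (e.g., gender) is swapped. `model` maps
    a 1-D feature vector to logits (an assumed interface)."""
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        preds = []
        for v in protected_values:
            xv = x.clone()
            xv[protected_idx] = v
            preds.append(model(xv).argmax().item())
        if len(set(preds)) > 1:          # predictions disagree: found one
            return x.detach()
        # push the non-protected attributes toward the decision boundary
        loss = model(x).max()
        grad, = torch.autograd.grad(loss, x)
        grad[protected_idx] = 0.0        # never perturb the protected attribute
        x = (x - lr * grad.sign()).detach().requires_grad_(True)
    return None
```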
Concolic testing integrates concrete execution (e.g., random testing) and symbolic execution for test case generation. It is shown to be more cost-effective than random testing or symbolic execution alone in some cases. A concolic testing strategy is a function which decides when to apply concrete or symbolic execution and, in the latter case, which program path to symbolically execute. Many heuristics-based strategies have been proposed, yet it is still an open problem what the optimal strategy is. In this work, we make two contributions towards solving this problem. First, we show the optimal strategy can be defined based on...
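To make the notion of a strategy concrete, here is a toy concolic loop over a two-branch program using the z3 solver. The path conditions are mirrored by hand, whereas a real engine collects them by instrumenting the execution, and the negate-each-branch choice below is just one naive strategy, not the optimal one the paper characterizes.

```python
from z3 import Int, Solver, And, Not, sat

X, Y = Int("x"), Int("y")

def program(x, y):
    # tiny program under test
    if x > 5:
        if y < x:
            return "A"
        return "B"
    return "C"

def path_conditions(x, y):
    # branch conditions taken by `program` on concrete input (x, y),
    # mirrored as symbolic constraints
    conds = [X > 5] if x > 5 else [Not(X > 5)]
    if x > 5:
        conds.append(Y < X if y < x else Not(Y < X))
    return conds

def concolic(seed):
    worklist, seen = [seed], set()
    while worklist:
        x, y = worklist.pop()
        if (x, y) in seen:
            continue
        seen.add((x, y))
        conds = path_conditions(x, y)
        print(f"({x},{y}) -> {program(x, y)} via {conds}")
        # strategy: negate each branch in turn and ask the solver for an
        # input driving execution down the unexplored side
        for i in range(len(conds)):
            s = Solver()
            s.add(And(conds[:i] + [Not(conds[i])]))
            if s.check() == sat:
                m = s.model()
                worklist.append((m.eval(X, True).as_long(),
                                 m.eval(Y, True).as_long()))

concolic((0, 0))   # explores all three paths from a single seed
```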
Recently, it has been shown that deep neural networks (DNN) are subject to attacks through adversarial samples. Adversarial samples are often crafted through perturbation, i.e., by manipulating the original sample with minor modifications so that the DNN model labels the sample incorrectly. Given that it is almost impossible to train a perfect DNN, such samples can be easy to generate. As DNNs are increasingly used in safety-critical systems like autonomous cars, it is crucial to develop techniques for defending against such attacks. Existing defense mechanisms which aim to make...
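For concreteness, the classic Fast Gradient Sign Method (Goodfellow et al.) shows how such minor, label-flipping perturbations can be crafted; it is one standard attack, not necessarily the one this paper defends against. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.01):
    """Craft an adversarial sample by taking one signed-gradient step
    that increases the classification loss, bounded by eps per pixel."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # stay in the valid input range
```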
Deep neural networks (DNN) are increasingly applied in safety-critical systems, e.g., for face recognition, autonomous car control and malware detection. It has also been shown that DNNs are subject to attacks such as adversarial perturbation and thus must be properly tested. Many coverage criteria for DNNs have since been proposed, inspired by the success of code coverage in testing software programs. The expectation is that if a DNN is well tested (and retrained) according to such criteria, it is more likely to be robust. In this work, we conduct an empirical...
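The simplest of these criteria is neuron coverage: the fraction of neurons activated above a threshold by at least one test input. A minimal PyTorch sketch, assuming `model` is a `torch.nn.Module` and hooking only ReLU and Linear layers for brevity:

```python
import torch

def neuron_coverage(model, inputs, threshold=0.25):
    """Fraction of neurons whose activation exceeds `threshold` on at
    least one test input; later criteria in the literature refine this."""
    activated = {}

    def hook(name):
        def fn(_, __, out):
            fired = (out > threshold).reshape(out.shape[0], -1).any(dim=0)
            prev = activated.get(name)
            activated[name] = fired if prev is None else prev | fired
        return fn

    handles = [m.register_forward_hook(hook(n))
               for n, m in model.named_modules()
               if isinstance(m, (torch.nn.ReLU, torch.nn.Linear))]
    with torch.no_grad():
        for x in inputs:        # iterable of input batches
            model(x)
    for h in handles:
        h.remove()
    total = sum(v.numel() for v in activated.values())
    covered = sum(int(v.sum()) for v in activated.values())
    return covered / max(total, 1)
```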
Although deep learning has demonstrated astonishing performance in many applications, there are still concerns about its dependability. One desirable property of deep learning applications with societal impact is fairness (i.e., non-discrimination). Unfortunately, discrimination might be intrinsically embedded into the models due to the training data. As a countermeasure, fairness testing systematically identifies discriminatory samples, which can be used to retrain the model and improve its fairness. Existing testing approaches...
Recently, there has been significant growth of interest in applying software engineering techniques for the quality assurance of deep learning (DL) systems. One popular direction is DL testing—that is, given a property to test, defects of DL systems are found either by fuzzing or guided search with the help of certain testing metrics. However, recent studies have revealed that the neuron coverage metrics commonly used by most existing approaches are not necessarily correlated with model quality (e.g., robustness, the most studied...
With the development of electronic technology and communication protocols, wireless sensor networks are developing rapidly. In a sense, traditional static networks have been unable to meet the needs of new applications. The introduction of mobile nodes, however, extends the applications of such networks, despite the technical challenges involved. Because of its flexibility, this direction has attracted great attention, and even small, self-controlled devices have appeared. At present, node localization has become one of the hotspots in wireless sensor network research. As storage and energy are limited, the radius...
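As a worked example of range-based node localization (the abstract does not commit to a specific scheme), here is standard least-squares trilateration from distances to anchor nodes with known coordinates; the anchor layout and node position are toy data:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a node's position from distances to known anchors.
    Linearize ||p - a_i||^2 = d_i^2 against the last anchor, then solve
    the resulting linear system in a least-squares sense."""
    a_n = anchors[-1]
    A = 2 * (anchors[:-1] - a_n)
    b = (dists[-1] ** 2 - dists[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(a_n ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
node = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - node, axis=1)   # ideal noise-free ranges
print(trilaterate(anchors, dists))               # ~ [3. 4.]
```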
As deep image classification applications, e.g., face recognition, become increasingly prevalent in our daily lives, their fairness issues raise more and more concern. It is thus crucial to comprehensively test the fairness of these applications before deployment. Existing testing methods suffer from the following limitations: 1) applicability, i.e., they are only applicable to structured data or text, without handling the high-dimensional, abstract domain sampling at the semantic level needed for image classification applications; 2) functionality,...
In recent years, the security issues of artificial intelligence have become increasingly prominent due to the rapid development of deep learning research and applications. A backdoor attack is an attack targeting the vulnerability of deep learning models, where hidden backdoors are activated by triggers embedded by the attacker, causing the model to output malicious predictions that may not align with the intended output for a given input. In this work, we propose a novel black-box backdoor attack based on machine unlearning. The attacker first augments...
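For intuition, classic BadNets-style trigger mechanics look like the sketch below: a small patch is stamped onto inputs and poisoned samples are relabelled to the attacker's target class. The paper's unlearning-based attack plants its backdoor differently; this only illustrates what "triggers embedded by the attacker" means.

```python
import numpy as np

def stamp_trigger(img, size=3, value=1.0):
    """Stamp a small bright patch in the bottom-right corner of an image
    array: the trigger that activates the hidden backdoor at inference."""
    out = img.copy()
    out[-size:, -size:] = value
    return out

def poison(dataset, labels, target_label, rate=0.05, rng=None):
    # poison a small fraction of the training set: stamped inputs are
    # relabelled to the attacker's target class
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(dataset), int(rate * len(dataset)), replace=False)
    for i in idx:
        dataset[i] = stamp_trigger(dataset[i])
        labels[i] = target_label
    return dataset, labels
```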
With the development of large language models, multiple AIs are now made available for code generation (such as ChatGPT and StarCoder) and widely adopted. It is often desirable to know whether a piece of code is generated by an AI and, furthermore, which AI is the author. For instance, if a certain version is known to generate vulnerable code, it is particularly important to identify the creator. Existing approaches are not satisfactory, as watermarking code is challenging compared with text data: code can be altered with relative ease via widely-used refactoring...
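To see why refactoring defeats naive code watermarks, the sketch below uses Python's `ast` module (3.9+ for `ast.unparse`) to rename local variables, a semantics-preserving transformation that erases any signal hidden in identifier choices; the example program is hypothetical.

```python
import ast

class RenameLocals(ast.NodeTransformer):
    """Rename every locally assigned variable to a fresh name, leaving
    behavior unchanged but destroying identifier-based watermarks."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store) and node.id not in self.mapping:
            self.mapping[node.id] = f"v{len(self.mapping)}"
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

src = "def area(w, h):\n    result = w * h\n    return result\n"
tree = RenameLocals().visit(ast.parse(src))
print(ast.unparse(tree))   # `result` becomes `v0`; behavior is unchanged
```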
Deep learning has revolutionized computing in many real-world applications, arguably due to its remarkable performance and extreme convenience as an end-to-end solution. However, deep learning models can be costly to train and use, especially large-scale ones, making it necessary to optimize the original, overly complicated models into smaller ones for scenarios with limited resources, such as mobile applications, or simply for resource saving. The key question in model optimization is: how do we effectively identify and measure...
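The simplest instance of identifying and measuring parameter importance is magnitude pruning, sketched below; the paper's actual importance measure is presumably more refined, and the weight matrix here is toy data.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping only the
    (1 - sparsity) fraction judged most 'important' by absolute value."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]      # k-th smallest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.random.randn(256, 256)
pruned, mask = magnitude_prune(w, sparsity=0.9)
print(f"kept {mask.mean():.1%} of the weights")   # ~10%
```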
Despite the success of Large Language Models (LLMs) across various fields, their potential to generate untruthful, biased and harmful responses poses significant risks, particularly in critical applications. This highlights the urgent need for systematic methods to detect and prevent such misbehavior. While existing approaches target specific issues such as harmful responses, this work introduces LLMScan, an innovative LLM monitoring technique based on causality analysis, offering a comprehensive solution. LLMScan...
AI-enabled collaborative robots are designed to be used in close collaboration with humans, thus requiring stringent safety standards and quick response times. Adversarial attacks pose a significant threat to the deep learning models of these systems, making it crucial to develop methods to improve the models' robustness against them. Adversarial training is one approach to improving their robustness: it works by augmenting the training data with adversarial examples. This, unfortunately, comes at the cost of increased computational overhead and extended training time. In this...
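A minimal sketch of one adversarial training step, using a single-step FGSM attack for the augmentation; stronger multi-step attacks such as PGD are common and are precisely what drives up the overhead the abstract mentions.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=8/255):
    """One step of FGSM-based adversarial training: craft adversarial
    counterparts of the batch, then train on both clean and adversarial
    halves of the augmented batch."""
    model.eval()
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + eps * x_req.grad.sign()).clamp(0, 1).detach()

    model.train()
    optimizer.zero_grad()
    total = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    total.backward()
    optimizer.step()
    return total.item()
```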
Text classification has been a major hot topic in natural language processing. Despite the significant progress made in this field, some deterrents have persisted, notable among which are feature extraction from long and complicated sentences and sparse features. To address these issues, this paper proposes a method of text classification based on a combination of the BERT and LDA methods (called BERT+LDA), focusing on effectively integrating the two to present a new fusion method. Our proposed method begins with applying the BERT model to document...
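A minimal sketch of one plausible BERT+LDA fusion, assuming the fusion is a plain concatenation of the [CLS] embedding with LDA topic proportions (the paper's actual fusion step may differ); `docs` and `labels` are toy data.

```python
import numpy as np
import torch
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def bert_vec(text):
    # dense contextual features: the [CLS] embedding of the document
    with torch.no_grad():
        out = bert(**tok(text, return_tensors="pt",
                         truncation=True, max_length=256))
    return out.last_hidden_state[0, 0].numpy()

docs = ["stocks rallied on strong earnings", "the team won the final match"]
labels = [0, 1]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)        # per-document topic proportions

# fuse dense BERT features with sparse topic features, then classify
features = np.hstack([np.stack([bert_vec(d) for d in docs]), topics])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```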
Adversarial examples pose a security threat to many critical systems built on neural networks (such as face recognition systems and self-driving cars). While many methods have been proposed to build robust models, how to build certifiably robust yet accurate neural network models remains an open problem. For example, adversarial training improves empirical robustness, but it does not provide certification of the model's robustness. On the other hand, certified training provides certified robustness at the cost of a significant accuracy drop. In this...
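The core step behind certified training methods such as interval bound propagation (IBP) is pushing an input box through each layer with interval arithmetic; if the certified output bounds keep the true class on top, robustness is guaranteed on that box. A minimal sketch for a single linear layer (the toy `W`, `b` and radius are illustrative):

```python
import numpy as np

def ibp_linear(W, b, lower, upper):
    """Propagate an input box [lower, upper] through y = Wx + b with
    interval arithmetic: sound (if loose) output bounds for every x
    inside the box."""
    center, radius = (lower + upper) / 2, (upper - lower) / 2
    c = W @ center + b
    r = np.abs(W) @ radius          # worst case over the box
    return c - r, c + r

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.zeros(2)
x = np.array([0.3, 0.7])
lo, hi = ibp_linear(W, b, x - 0.1, x + 0.1)
print(lo, hi)   # guaranteed output bounds for every input within +/-0.1
```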