Peixin Zhang

ORCID: 0000-0001-5039-5651
Research Areas
  • Adversarial Robustness in Machine Learning
  • Anomaly Detection Techniques and Applications
  • Software Testing and Debugging Techniques
  • Advanced Malware Detection Techniques
  • Ethics and Social Impacts of AI
  • Advanced Text Analysis Techniques
  • Explainable Artificial Intelligence (XAI)
  • Image Processing and 3D Reconstruction
  • Integrated Circuits and Semiconductor Failure Analysis
  • Text and Document Classification Technologies
  • Topic Modeling
  • Geochemistry and Geologic Mapping
  • Advanced Neural Network Applications
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Teleoperation and Haptic Systems
  • Software Engineering Research
  • Energy Efficient Wireless Sensor Networks
  • Advanced Data Storage Technologies
  • Augmented Reality Applications
  • Differential Equations and Numerical Methods
  • Advanced Steganography and Watermarking Techniques
  • Virtual Reality Applications and Impacts
  • Cryptography and Data Security
  • Software Reliability and Analysis Research
  • Natural Language Processing Techniques

Zhejiang University
2018-2023

Singapore Management University
2023

Shandong University of Science and Technology
2021-2022

Zhejiang University of Science and Technology
2020

Jilin University
2019

Hebei Normal University
2017

Jiangsu Provincial Institute of Geological Survey
2013

Xiamen University
2006

Deep neural networks (DNN) have been shown to be useful in a wide range of applications. However, they are also known to be vulnerable to adversarial samples. By transforming a normal sample with some carefully crafted, human-imperceptible perturbations, even a highly accurate DNN can be made to take wrong decisions. Multiple defense mechanisms have been proposed which aim to hinder the generation of such samples, but recent work shows that most of them are ineffective. In this work, we propose an alternative approach to detect adversarial samples at runtime. Our main...

10.1109/icse.2019.00126 preprint EN 2019-05-01
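The perturbation attack the abstract above describes can be illustrated with the classic fast gradient sign method (FGSM) on a toy logistic-regression "model". This is a generic sketch of how such adversarial samples are crafted, not the paper's detection technique; all names and values are illustrative.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Craft an adversarial sample for a logistic-regression model via
    FGSM: x' = x + eps * sign(dL/dx), where L is cross-entropy loss."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.05)  # perturbation bounded by eps
```

The `eps` bound is what makes the change "human-imperceptible" in the image setting: every feature moves by at most `eps`.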

Although deep neural networks (DNNs) have demonstrated astonishing performance in many applications, there are still concerns on their dependability. One desirable property of DNNs for applications with societal impact is fairness (i.e., non-discrimination). In this work, we propose a scalable approach for searching for individual discriminatory instances of a DNN. Compared with state-of-the-art methods, our approach only employs lightweight procedures like gradient computation and clustering, which makes it...

10.1145/3377811.3380331 article EN 2020-06-27
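To make the notion of an "individual discriminatory instance" concrete: an input is discriminatory if changing only a protected attribute (e.g., gender) flips the model's decision. The gradient-guided search below is a minimal sketch on an assumed linear classifier, not the paper's algorithm; the step size and iteration budget are illustrative.

```python
import numpy as np

def predict(w, b, x):
    return int(x @ w + b > 0)  # toy linear classifier

def is_discriminatory(w, b, x, prot_idx, prot_values):
    """True if flipping only the protected attribute changes the decision."""
    labels = {predict(w, b, np.where(np.arange(len(x)) == prot_idx, v, x))
              for v in prot_values}
    return len(labels) > 1

def gradient_guided_search(w, b, x0, prot_idx, prot_values, step=0.5, iters=50):
    """Nudge the non-protected features toward the decision boundary (along
    the score gradient), where discrimination is most likely to surface."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        if is_discriminatory(w, b, x, prot_idx, prot_values):
            return x
        direction = -np.sign(x @ w + b) * w  # move toward the boundary
        direction[prot_idx] = 0.0            # never touch the protected attribute
        x = x + step * np.sign(direction)
    return None

w = np.array([1.0, -1.0, 2.0])   # index 2 plays the protected attribute
b = 0.0
x0 = np.array([3.0, 0.0, 0.0])
found = gradient_guided_search(w, b, x0, prot_idx=2, prot_values=(0.0, 1.0))
```

Only gradient evaluations and comparisons are needed per step, which is why such a search stays lightweight.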

Concolic testing integrates concrete execution (e.g., random testing) and symbolic execution for test case generation. It is shown to be more cost-effective than random testing or symbolic execution alone sometimes. A concolic testing strategy is a function which decides when to apply concrete or symbolic execution and, in the latter case, which program path to symbolically execute. Many heuristics-based strategies have been proposed. It is still an open problem what the optimal strategy is. In this work, we make two contributions towards solving this problem. First, we show the optimal strategy can be defined based on...

10.1145/3180155.3180177 article EN Proceedings of the 44th International Conference on Software Engineering 2018-05-27
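The idea of a strategy as "a function that decides between concrete and symbolic execution per iteration" can be sketched in miniature. Here the strategy simply alternates modes, and the constraint solver is mocked with a precomputed answer for the one hard branch; a real engine would track path conditions and invoke an SMT solver. Everything here is illustrative, not the paper's formulation.

```python
import random

def program(x):
    """Toy program under test with an easy branch and a hard-to-hit branch."""
    path = []
    if x > 100:
        path.append("x>100")
        if x * 2 == 246:
            path.append("x*2==246")  # only x == 123 reaches this
    else:
        path.append("x<=100")
    return tuple(path)

def concolic_test(budget=10, seed=0):
    """Minimal concolic loop: the strategy alternates between concrete
    (random) execution and a 'symbolic' turn that consumes a mocked
    solver solution for an unexplored path."""
    rng = random.Random(seed)
    covered = set()
    symbolic_solutions = [123]  # mocked: solves x > 100 and x*2 == 246
    for i in range(budget):
        if i % 2 == 1 and symbolic_solutions:
            x = symbolic_solutions.pop()  # symbolic-execution turn
        else:
            x = rng.randint(0, 200)       # concrete random-testing turn
        covered.add(program(x))
    return covered

covered = concolic_test()
```

Random testing covers the cheap branches quickly; the symbolic turn is what reaches `x*2 == 246`, which random sampling of 0..200 almost never hits.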

Recently, it has been shown that deep neural networks (DNN) are subject to attacks through adversarial samples. Adversarial samples are often crafted through perturbation, i.e., manipulating the original sample with minor modifications so that the DNN model labels the sample incorrectly. Given that it is almost impossible to train a perfect DNN, adversarial samples are bound to be easy to generate. As DNNs are increasingly used in safety-critical systems like autonomous cars, it is crucial to develop techniques for defending against such attacks. Existing defense mechanisms which aim to make...

10.48550/arxiv.1805.05010 preprint EN other-oa arXiv (Cornell University) 2018-01-01

Deep neural networks (DNN) are increasingly applied in safety-critical systems, e.g., for face recognition, autonomous car control and malware detection. It has also been shown that DNNs are subject to attacks such as adversarial perturbation and thus must be properly tested. Many coverage criteria for DNNs have since been proposed, inspired by the success of code coverage criteria for software programs. The expectation is that if a DNN is well tested (and retrained) according to such criteria, it is more likely to be robust. In this work, we conduct an empirical...

10.1109/iceccs51672.2020.00016 article EN 2020-10-01
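As one example of the coverage criteria studied empirically in work like this, neuron coverage (in the DeepXplore style) counts the fraction of hidden neurons activated above a threshold by at least one test input. The sketch below assumes a single ReLU layer with random stand-in weights; it is illustrative, not any specific paper's metric implementation.

```python
import numpy as np

def neuron_coverage(inputs, W1, b1, threshold=0.0):
    """Fraction of hidden neurons activated above `threshold` by at
    least one input in the test set (one ReLU layer)."""
    acts = np.maximum(inputs @ W1 + b1, 0.0)    # hidden activations
    activated = (acts > threshold).any(axis=0)  # per-neuron: ever fired?
    return activated.mean()

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 5))   # 3 input features, 5 hidden neurons
b1 = np.zeros(5)
tests = rng.normal(size=(8, 3))
cov = neuron_coverage(tests, W1, b1)
```

Like code coverage, the metric is monotone: adding test inputs can only keep or raise it, never lower it.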

Although deep learning has demonstrated astonishing performance in many applications, there are still concerns about its dependability. One desirable property of deep learning applications with societal impact is fairness (i.e., non-discrimination). Unfortunately, discrimination might be intrinsically embedded into the models due to the training data. As a countermeasure, fairness testing systemically identifies discriminatory samples, which can be used to retrain the model and improve the model's fairness. Existing testing approaches...

10.1109/tse.2021.3101478 article EN IEEE Transactions on Software Engineering 2021-08-04

Recently, there has been significant growth of interest in applying software engineering techniques for the quality assurance of deep learning (DL) systems. One popular direction is DL testing—that is, given a property to test, defects of DL systems are found either by fuzzing or guided search with the help of certain testing metrics. However, recent studies have revealed that the neuron coverage metrics, which are commonly used by most existing approaches, are not necessarily correlated with model quality (e.g., robustness, the most studied...

10.1145/3582573 article EN ACM Transactions on Software Engineering and Methodology 2023-02-10

Deep neural networks (DNN) are increasingly applied in safety-critical systems, e.g., for face recognition, autonomous car control and malware detection. It has also been shown that DNNs are subject to attacks such as adversarial perturbation and thus must be properly tested. Many coverage criteria for DNNs have since been proposed, inspired by the success of code coverage criteria for software programs. The expectation is that if a DNN is well tested (and retrained) according to such criteria, it is more likely to be robust. In this work, we conduct an empirical...

10.48550/arxiv.1911.05904 preprint EN other-oa arXiv (Cornell University) 2019-01-01

With the development of electronic technology and communication protocols, wireless sensor networks are developing rapidly. In a sense, traditional static networks have been unable to meet the needs of new applications. However, the introduction of mobile nodes extends the applications of wireless sensor networks, despite the technical challenges. Because of its flexibility, this direction has attracted great attention, and even small, self-controlled devices have appeared. At present, node localization has become one of the hotspots in wireless sensor network research. As storage and energy are limited, the radius...

10.3991/ijoe.v13i03.6868 article EN International Journal of Online and Biomedical Engineering (iJOE) 2017-03-28

As deep image classification applications, e.g., face recognition, become increasingly prevalent in our daily lives, their fairness issues raise more and more concern. It is thus crucial to comprehensively test the fairness of these applications before deployment. Existing fairness testing methods suffer from the following limitations: 1) applicability, i.e., they are only applicable to structured data or text, without handling the high-dimensional and abstract domain sampling at the semantic level for image classification applications; 2) functionality,...

10.48550/arxiv.2111.08856 preprint EN other-oa arXiv (Cornell University) 2021-01-01

In recent years, the security issues of artificial intelligence have become increasingly prominent due to the rapid development of deep learning research and applications. Backdoor attack is an attack targeting the vulnerability of deep learning models, where hidden backdoors are activated by triggers embedded by the attacker, thereby causing the model to output malicious predictions that may not align with the intended output for a given input. In this work, we propose a novel black-box backdoor attack based on machine unlearning. The attacker first augments...

10.48550/arxiv.2310.10659 preprint EN other-oa arXiv (Cornell University) 2023-01-01

With the development of large language models, multiple AIs are now made available for code generation (such as ChatGPT and StarCoder) and adopted widely. It is often desirable to know whether a piece of code is generated by AI, and furthermore, which AI is the author. For instance, if a certain version of an AI is known to generate vulnerable code, it is particularly important to identify the creator. Existing approaches are not satisfactory, as watermarking code is challenging compared with watermarking text data, since code can be altered with relative ease via widely-used refactoring...

10.48550/arxiv.2402.07518 preprint EN arXiv (Cornell University) 2024-02-12

Deep learning has revolutionized computing in many real-world applications, arguably due to its remarkable performance and extreme convenience as an end-to-end solution. However, deep learning models can be costly to train and use, especially for large-scale models, making it necessary to optimize the original overly complicated models into smaller ones for scenarios with limited resources such as mobile applications, or simply for resource saving. The key question in such model optimization is, how can we effectively identify and measure...

10.48550/arxiv.2411.10507 preprint EN arXiv (Cornell University) 2024-11-15
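One common way to shrink an over-parameterized model, of the kind the abstract above motivates, is magnitude pruning: zero out the fraction of weights with the smallest absolute value. This is a generic sketch of that standard technique, not the paper's own optimization method.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights.
    Ties at the threshold are all pruned, so the sketch may slightly
    exceed the requested sparsity."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    pruned = w.copy()
    pruned[np.abs(w) <= thresh] = 0.0
    return pruned

w = np.array([0.05, -2.0, 0.4, -0.01, 1.2, 0.3])
pruned = magnitude_prune(w, 0.5)  # keeps -2.0, 0.4, 1.2; zeros the rest
```

The zeroed weights can then be stored sparsely or removed structurally, trading a little accuracy for a smaller, cheaper model.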

Despite the success of Large Language Models (LLMs) across various fields, their potential to generate untruthful, biased and harmful responses poses significant risks, particularly in critical applications. This highlights the urgent need for systematic methods to detect and prevent such misbehavior. While existing approaches target specific issues such as harmful responses, this work introduces LLMScan, an innovative LLM monitoring technique based on causality analysis, offering a comprehensive solution. LLMScan...

10.48550/arxiv.2410.16638 preprint EN arXiv (Cornell University) 2024-10-21

AI-enabled collaborative robots are designed to be used in close collaboration with humans, thus requiring stringent safety standards and quick response times. Adversarial attacks pose a significant threat to the deep learning models of these systems, making it crucial to develop methods to improve the models' robustness against them. Adversarial training is one approach to improving their robustness: it works by augmenting the training data with adversarial examples. This, unfortunately, comes at the cost of increased computational overhead and extended training time. In this...

10.1109/lra.2023.3327934 article EN IEEE Robotics and Automation Letters 2023-10-27
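The adversarial-training loop described above can be sketched on a toy linear least-squares model: each step crafts FGSM-style perturbed copies of the batch, then updates on the union of clean and perturbed data. This is a minimal, assumed setup for illustration, not the paper's training procedure, and it also shows where the extra cost comes from: every step pays for the attack plus a doubled batch.

```python
import numpy as np

def adversarial_training_step(w, x, y, eps=0.1, lr=0.1):
    """One adversarial-training step for linear regression:
    perturb inputs along sign(d loss_i / d x_i) = sign(err_i * w),
    then take a gradient step on clean + perturbed samples."""
    err = x @ w - y                                       # per-sample residual
    x_adv = x + eps * np.sign(err[:, None] * w[None, :])  # FGSM-style attack
    X = np.vstack([x, x_adv])                             # doubled batch
    Y = np.concatenate([y, y])
    grad_w = X.T @ (X @ w - Y) / len(Y)
    return w - lr * grad_w

x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = np.zeros(2)
for _ in range(200):
    w = adversarial_training_step(w, x, y)
```

Despite training on perturbed copies, the model still fits the clean data well, which is the point of robustness-accuracy trade-off discussions.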

Abstract Text classification has been a major hot topic in natural language processing. Despite the significant progress made in this field, some deterrents have persisted, notable among which are feature extraction from long and complicated sentences, and sparse features. To address these issues, this paper proposes a method of text classification based on the combination of the BERT and LDA methods (called BERT+LDA), focusing on effectively integrating the two to present a new fusion method. Our proposed method begins with applying the BERT model to document...

10.21203/rs.3.rs-2305862/v1 preprint EN cc-by Research Square (Research Square) 2022-11-28
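The fusion step in a BERT+LDA pipeline can be as simple as normalizing each view and concatenating: a dense contextual embedding alongside a sparse topic distribution. The vectors below are small stand-ins (a real BERT sentence embedding is e.g. 768-dimensional, and the LDA vector has one weight per topic); the exact fusion used in the paper may differ.

```python
import numpy as np

def fuse_features(bert_vec, lda_vec):
    """Fuse a dense contextual embedding with a topic distribution by
    L2-normalizing each view and concatenating them, so neither view
    dominates purely by scale."""
    def l2(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2(bert_vec), l2(lda_vec)])

# stand-ins for the outputs of a BERT encoder and an LDA topic model
bert_vec = np.array([0.2, -1.3, 0.7, 0.05])
lda_vec = np.array([0.6, 0.1, 0.3])
fused = fuse_features(bert_vec, lda_vec)
```

The fused vector then feeds a downstream classifier, giving it both sentence-level context and corpus-level topic signal.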

Although deep learning has demonstrated astonishing performance in many applications, there are still concerns about its dependability. One desirable property of deep learning applications with societal impact is fairness (i.e., non-discrimination). Unfortunately, discrimination might be intrinsically embedded into the models due to the training data. As a countermeasure, fairness testing systemically identifies discriminatory samples, which can be used to retrain the model and improve the model's fairness. Existing testing approaches...

10.48550/arxiv.2107.08176 preprint EN other-oa arXiv (Cornell University) 2021-01-01

Adversarial examples pose a security threat to many critical systems built on neural networks (such as face recognition systems and self-driving cars). While many methods have been proposed to build robust models, how to build certifiably robust yet accurate neural network models remains an open problem. For example, adversarial training improves empirical robustness, but it does not provide certification of the model's robustness. On the other hand, certified training provides certified robustness, but at the cost of a significant accuracy drop. In this...

10.48550/arxiv.2309.00879 preprint EN cc-by-nc-sa arXiv (Cornell University) 2023-01-01
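To illustrate what "certified" robustness means, here is interval bound propagation (IBP), the basic step behind many certified-training methods: propagate an input box [lo, hi] through a layer to get sound output bounds. This is a generic one-layer sketch with assumed weights, not the paper's method.

```python
import numpy as np

def ibp_layer(lo, hi, W, b):
    """Propagate an interval [lo, hi] through a linear layer + ReLU,
    returning sound bounds on the outputs: every input in the box
    maps to an output inside the returned box."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    out_mid = W @ mid + b
    out_rad = np.abs(W) @ rad          # worst-case spread per output
    return (np.maximum(out_mid - out_rad, 0.0),
            np.maximum(out_mid + out_rad, 0.0))

W = np.array([[1.0, -1.0], [0.5, 2.0]])
b = np.array([0.0, -0.5])
x = np.array([1.0, 2.0])
eps = 0.1
lo, hi = ibp_layer(x - eps, x + eps, W, b)  # bounds under any eps-perturbation
```

If the bound for the correct class stays above the bounds of all other classes at the output layer, the prediction is certified for every perturbation within `eps`, which is exactly what empirical adversarial training cannot promise.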