Guijian Tang

ORCID: 0000-0003-4022-1142
About
Research Areas
  • Adversarial Robustness in Machine Learning
  • Probabilistic and Robust Engineering Design
  • Anomaly Detection Techniques and Applications
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Structural Health Monitoring Techniques
  • Bacillus and Francisella bacterial research
  • Advanced Neural Network Applications
  • Integrated Circuits and Semiconductor Failure Analysis
  • Advanced Multi-Objective Optimization Algorithms
  • Reliability and Maintenance Optimization
  • Fatigue and fracture mechanics
  • Advanced SAR Imaging Techniques
  • Risk and Safety Analysis
  • Fault Detection and Control Systems

National University of Defense Technology
2017-2024

Chinese People's Liberation Army
2023

In the past decade, deep learning, with its strong capability, has dramatically changed the traditional hand-crafted feature paradigm, promoting tremendous improvement on conventional tasks. However, deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples crafted with small noise, which is imperceptible to human observers but can make DNNs misbehave. Existing attacks are divided into digital and physical attacks. The former are designed to pursue attack performance in lab environments...

10.21203/rs.3.rs-2459893/v1 preprint EN cc-by Research Square (Research Square) 2023-01-11
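
The abstract above distinguishes digital from physical attacks. As a concrete illustration of the digital side, below is a minimal FGSM-style sketch (FGSM is a standard baseline due to Goodfellow et al., not necessarily this paper's method); `model`, `image`, and `label` are hypothetical placeholders for a classifier, a normalized input batch, and its true class.

```python
# Minimal FGSM-style digital attack sketch; `model`, `image`, and `label`
# are hypothetical stand-ins, and epsilon is an illustrative noise budget.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """One signed-gradient step: small noise aimed at flipping the prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is bounded by epsilon, so it stays imperceptible.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```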

Most existing adversarial attack methods against detectors involve adding perturbations to benign images to synthesize adversarial examples. However, directly applying these methods, originally designed for natural image detectors, to optical aerial images can lead to examples that appear unnatural and suspicious to human eyes, owing to intrinsic dissimilarities between the two types of images. Inspired by the fact that captured aerial images are heavily affected by weather conditions, this paper proposes a novel method for conducting attacks by leveraging...

10.1109/tgrs.2023.3315053 article EN IEEE Transactions on Geoscience and Remote Sensing 2023-01-01
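
The truncated abstract hints that the attack leverages weather effects on captured aerial images. A toy sketch of that idea follows: blending a fog-like haze into an image. The real method presumably optimizes the perturbation against the detector, so treat the fixed `density` and the uniform haze layer as assumptions.

```python
# Toy weather-style perturbation: blend a uniform fog layer into an RGB
# aerial image with values in [0, 1]. Illustrative only; not the paper's attack.
import numpy as np

def add_fog(image: np.ndarray, density: float = 0.4) -> np.ndarray:
    haze = np.ones_like(image)                    # plain white haze layer
    return (1.0 - density) * image + density * haze

aerial = np.random.rand(256, 256, 3)              # stand-in aerial image
foggy = add_fog(aerial)                           # would be fed to the detector
```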

Over the past decade, deep learning, with its strong capability, has revolutionized conventional tasks that relied on hand-crafted feature extraction, leading to substantial performance enhancements. However, deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples crafted with malicious tiny noise, which is imperceptible to human observers but can make DNNs output wrong results. Existing attacks are categorized into digital and physical attacks. The former are designed to pursue...

10.48550/arxiv.2209.14262 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Deep neural networks (DNNs) have made remarkable strides in various computer vision tasks, including image classification, segmentation, and object detection. However, recent research has revealed a vulnerability of advanced DNNs when faced with deliberate manipulations of input data, known as adversarial attacks. Moreover, model accuracy is heavily influenced by the distribution of the training dataset. Distortions or perturbations in the color space of input images can introduce out-of-distribution data, resulting...

10.48550/arxiv.2305.14165 preprint EN other-oa arXiv (Cornell University) 2023-01-01
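
To make the color-space point concrete, here is a small sketch: rotating hue in HSV space yields images that may fall outside the training distribution while remaining recognizable to humans. The shift size and the `colorsys`-based per-pixel conversion are illustrative assumptions, not the paper's procedure.

```python
# Hue rotation as a color-space distortion; the pixel-wise loop is kept
# simple for clarity. `shift` is a fraction of the full 360-degree hue circle.
import colorsys
import numpy as np

def hue_shift(image: np.ndarray, shift: float = 0.25) -> np.ndarray:
    out = np.empty_like(image)
    for idx in np.ndindex(image.shape[:2]):
        h, s, v = colorsys.rgb_to_hsv(*image[idx])
        out[idx] = colorsys.hsv_to_rgb((h + shift) % 1.0, s, v)
    return out
```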

The surrogate model-based uncertainty quantification method has drawn much attention in many engineering fields. Polynomial chaos expansion (PCE) and deep learning (DL) are powerful methods for building a surrogate model. However, PCE needs to increase its order to improve the accuracy of the model, which requires more labeled data to solve the coefficients, and DL likewise requires a lot of labeled data to train the deep neural network (DNN). First of all, this paper proposes an adaptive arbitrary polynomial chaos (aPC) expansion and proves two properties of its coefficients. Based...

10.48550/arxiv.2107.10428 preprint EN public-domain arXiv (Cornell University) 2021-01-01
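
For orientation, below is a minimal 1-D polynomial chaos surrogate: probabilists' Hermite polynomials for a standard-normal input, with coefficients found by least squares. The paper's adaptive aPC handles arbitrary input distributions and determines coefficients more cleverly; the toy model and sample sizes here are assumptions.

```python
# Least-squares PCE surrogate for a standard-normal input (sketch only).
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
expensive_model = lambda x: np.sin(x) + 0.1 * x**2   # hypothetical model

x_train = rng.standard_normal(50)                    # labeled samples
y_train = expensive_model(x_train)

order = 5
Phi = hermevander(x_train, order)                    # Hermite design matrix
coeffs, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

x_test = rng.standard_normal(1000)
y_pred = hermevander(x_test, order) @ coeffs         # cheap surrogate evaluation
print("surrogate RMSE:", np.sqrt(np.mean((y_pred - expensive_model(x_test)) ** 2)))
```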