- Adversarial Robustness in Machine Learning
- Network Security and Intrusion Detection
- Medical Imaging Techniques and Applications
- Anomaly Detection Techniques and Applications
- Privacy-Preserving Technologies in Data
- Advanced Malware Detection Techniques
- Magnesium Alloys: Properties and Applications
- Advanced Neural Network Applications
- Advanced Image Processing Techniques
- Image and Signal Denoising Methods
- Aluminum Alloys Composites Properties
- Metallurgy and Material Forming
- Advanced X-ray and CT Imaging
- Digital Radiography and Breast Imaging
- Explainable Artificial Intelligence (XAI)
- Advanced Welding Techniques Analysis
University of Chinese Academy of Sciences
2020-2024
Institute of Information Engineering
2019-2024
Guangdong University of Technology
2024
Intelligent Health (United Kingdom)
2023
Harbin Institute of Technology
2022
Chinese Academy of Sciences
2019-2020
Ningbo University
2018
Deep learning has gained tremendous success and great popularity in the past few years. However, deep learning systems suffer from several inherent weaknesses that can threaten the security of models, and deep learning's wide use further magnifies the impact and consequences of these weaknesses. To this end, much research has been conducted to exhaustively identify the intrinsic weaknesses and subsequently propose feasible mitigations. Yet it remains unclear how these weaknesses are incurred by effective attack approaches assaulting deep learning. In order to...
Deep neural networks (DNNs) have revolutionized fields of computer vision such as object detection with their unparalleled performance. However, existing research has shown that DNNs are vulnerable to adversarial attacks. In the physical world, an adversary could exploit adversarial patches to implement a Hiding Attack (HA), which targets an object to make it disappear from the detector, or an Appearing Attack (AA), which fools the detector into misclassifying the patch as a specific object. Recently, many defense methods for object detectors have been proposed...
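The abstract above names two patch-based attacks but the paper's own method is not shown in this excerpt. As a minimal sketch of the Hiding Attack objective only, assuming a toy linear "objectness" score in place of a real detector (the names `objectness`, `w`, and `patch` are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a detector's objectness score: a fixed linear probe
# over the patch region. A real HA would backpropagate through a full
# detector; this only illustrates the optimization loop.
w = rng.normal(size=(8, 8))

def objectness(patch):
    return float(np.sum(w * patch))

# Hiding Attack sketch: gradient descent on the patch pixels to SUPPRESS
# the objectness score, so the patched object vanishes from the detector.
patch = rng.uniform(0.0, 1.0, size=(8, 8))
initial = objectness(patch)
for _ in range(100):
    patch -= 0.05 * w                 # d(objectness)/d(patch) = w for a linear score
    patch = np.clip(patch, 0.0, 1.0)  # keep pixels in a printable range
final = objectness(patch)
```

An Appearing Attack would run the same loop with the sign flipped, ascending the score of a chosen target class instead of suppressing objectness.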
Machine unlearning has great significance in guaranteeing model security and protecting user privacy. Additionally, many legal provisions clearly stipulate that users have the right to demand that providers delete their own data from the training set, that is, the right to be forgotten. The naive way of unlearning is to retrain the model from scratch without the data to be forgotten, which becomes extremely time- and resource-consuming at the scale of modern deep neural networks. Other approaches, which unlearn by refactoring the model or the training process, struggle to strike a balance between overhead and usability. In this paper,...
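The naive retrain-from-scratch baseline the abstract mentions can be sketched on a toy model. Here least-squares regression stands in for full DNN training, and the user IDs and coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: rows belong to users 0..4; user 2 requests deletion.
X = rng.normal(size=(50, 3))
users = rng.integers(0, 5, size=50)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)

def train(X, y):
    # "Model" = ordinary least squares; stands in for a full training run.
    return np.linalg.lstsq(X, y, rcond=None)[0]

w_full = train(X, y)

# Naive exact unlearning: drop the forgotten user's rows and retrain
# from scratch. Provably forgets the data, but costs an entire retraining
# run per deletion request, which is what makes it infeasible at DNN scale.
keep = users != 2
w_unlearned = train(X[keep], y[keep])
```

The approximate methods the abstract alludes to try to update `w_full` toward `w_unlearned` without the full retraining cost, trading exactness for overhead.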
So far, deep learning based networks have been widely applied in Low-Dose Computed Tomography (LDCT) image denoising. However, they usually adopt symmetric convolution to achieve regular feature extraction and cannot effectively extract irregular features. Therefore, in this paper, an Irregular Feature Enhancer (IFE) focusing on extracting irregular features is proposed by combining a Symmetric-Asymmetric-Synergy Convolution Module (SASCM) with a hybrid loss module. Rather than simply stacking...
While enjoying the great achievements brought by deep learning (DL), people are also worried about the decisions made by DL models, since the high degree of non-linearity of these models makes their decisions extremely difficult to understand. Consequently, attacks such as adversarial attacks are easy to carry out but hard to detect and explain, which has led to a boom in research on local explanation methods for explaining model decisions. In this paper, we evaluate the faithfulness of explanation methods and find that traditional tests encounter a random dominance problem, i.e., ...
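The paper's actual evaluation protocol is truncated here, but the kind of "traditional test" it critiques can be sketched: a deletion-style faithfulness test that zeroes out the features an explanation ranks highest and measures the output drop, compared against a random-deletion baseline. The linear model and names below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: linear scorer, so the exact attribution of feature i is w[i]*x[i].
w = rng.normal(size=20)
x = rng.normal(size=20)
f = lambda v: float(w @ v)

attr = w * x                       # "explanation": per-feature contribution

def deletion_drop(order, k=5):
    # Deletion test: zero out the first k features in the given order
    # and measure how much the model output falls.
    v = x.copy()
    v[order[:k]] = 0.0
    return f(x) - f(v)

informed = deletion_drop(np.argsort(-attr))   # most-positive contributions first
random_ = deletion_drop(rng.permutation(20))  # random-order baseline
```

For a faithful explanation the informed drop should beat the random baseline; the random dominance problem the abstract names arises when random orderings score competitively, making such tests unreliable.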