- Adversarial Robustness in Machine Learning
- Advanced Neural Network Applications
- Software Testing and Debugging Techniques
- Anomaly Detection Techniques and Applications
- Explainable Artificial Intelligence (XAI)
- Bacillus and Francisella bacterial research
- 3D Shape Modeling and Analysis
- Image Processing and 3D Reconstruction
- Software Reliability and Analysis Research
- Formal Methods in Verification
- Autonomous Vehicle Technology and Safety
- Safety Systems Engineering in Autonomy
- Geotechnical Engineering and Soil Stabilization
- Civil and Geotechnical Engineering Research
- Real-time simulation and control systems
- Parallel Computing and Optimization Techniques
- Advanced Control Systems Optimization
- Infectious Encephalopathies and Encephalitis
- Robotics and Sensor-Based Localization
- Geotechnical Engineering and Analysis
- Real-Time Systems Scheduling
- Embedded Systems Design Techniques
- Advanced Malware Detection Techniques
- Distributed and Parallel Computing Systems
- Scientific Computing and Data Management
- Institute of Software, 2020-2024
- University of Chinese Academy of Sciences, 2020-2024
- Chinese Academy of Sciences, 2021
Deep neural networks (DNNs) have been applied in safety-critical domains such as self-driving cars, aircraft collision avoidance systems, malware detection, etc. In these scenarios, it is important to give a safety guarantee for the robustness property, namely that the outputs are invariant under small perturbations on the inputs. For this purpose, several algorithms and tools have been developed recently. In this paper, we present PRODeep, a platform for robustness verification of DNNs. PRODeep incorporates constraint-based,...
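For context, the local robustness property that such verifiers target is commonly formalised as follows (a standard textbook definition, not quoted from the PRODeep paper): for a classifier with logits f, an input x, and a perturbation budget epsilon,

```latex
\forall x' .\;\; \|x' - x\|_{\infty} \le \epsilon
  \;\Longrightarrow\;
  \operatorname*{arg\,max}_{i} f_i(x') \;=\; \operatorname*{arg\,max}_{i} f_i(x)
```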
To analyse local robustness properties of deep neural networks (DNNs), we present a practical framework from a model learning perspective. Based on black-box model learning with scenario optimisation, we abstract the local behaviour of a DNN via an affine model with a probably approximately correct (PAC) guarantee. From the learned model, we can infer the corresponding PAC-model robustness property. The innovation of our work is the integration of model learning into PAC robustness analysis: that is, we construct a PAC guarantee on the model level instead of the sample distribution, which induces a more faithful and...
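A minimal sketch of the black-box learning step, with our own illustrative names (`net` is an assumed callable returning logits; the scenario-optimisation bound that turns the sampled margin into a formal PAC statement is omitted):

```python
# Hypothetical sketch: sample the eps-box around x0, query the black-box
# DNN, and fit an affine surrogate y ~ W x + b by least squares. Under
# scenario optimisation, the residual `margin` holds on unseen inputs with
# a PAC guarantee determined by n_samples, error rate, and confidence.
import numpy as np

def learn_affine_pac_model(net, x0, eps, n_samples=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    xs = x0 + rng.uniform(-eps, eps, size=(n_samples, x0.size))
    ys = np.stack([net(x) for x in xs])            # logits, shape (n, k)
    A = np.hstack([xs, np.ones((n_samples, 1))])   # affine design matrix
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)  # stacked [W^T; b]
    margin = np.abs(A @ coef - ys).max()           # worst sampled residual
    return coef[:-1].T, coef[-1], margin           # W, b, PAC margin
```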
We propose a spurious region guided refinement approach for robustness verification of deep neural networks. Our method starts with applying the DeepPoly abstract domain to analyze the network. If the robustness property cannot be verified, the result is inconclusive. Due to the over-approximation, the region computed in the abstraction may be spurious, in the sense that it does not contain any true counterexample. Our goal is to identify such spurious regions and use them to guide the refinement. The core idea is to make use of the obtained constraints to infer new bounds for the neurons. This is achieved by...
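As a toy stand-in for the abstraction step, the sketch below propagates interval bounds through one affine-plus-ReLU layer. DeepPoly additionally keeps symbolic linear bounds per neuron, and the paper's refinement re-tightens the bounds of selected neurons using constraints extracted from the spurious region; neither is reproduced here.

```python
# Interval bound propagation through y = relu(W x + b); a simpler abstract
# domain than DeepPoly, used here only to illustrate how neuron bounds arise.
import numpy as np

def affine_relu_bounds(W, b, lb, ub):
    """Propagate the input box [lb, ub] to output bounds via intervals."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    pre_lb = W_pos @ lb + W_neg @ ub + b   # lower bound of pre-activation
    pre_ub = W_pos @ ub + W_neg @ lb + b   # upper bound of pre-activation
    return np.maximum(pre_lb, 0), np.maximum(pre_ub, 0)
```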
Transferable adversarial examples raise critical security concerns in real-world, black-box attack scenarios. However, in this work, we identify two main problems in common evaluation practices: (1) for attack transferability, a lack of systematic, one-to-one attack comparison and fair hyperparameter settings; (2) for attack stealthiness, simply no comparisons. To address these problems, we establish new evaluation guidelines by proposing a novel attack categorization strategy and conducting systematic intra-category analyses on transferability, as well as considering diverse...
Transfer adversarial attacks raise critical security concerns in real-world, black-box scenarios. However, the actual progress of this field is difficult to assess due to two common limitations in existing evaluations. First, different methods are often not systematically and fairly evaluated in a one-to-one comparison. Second, only transferability is evaluated, while another key attack property, stealthiness, is largely overlooked. In this work, we design good practices to address these limitations, and we present the first...
Stochastic discrete-time systems, i.e., dynamic systems subject to stochastic disturbances, are an essential modelling tool for many engineering systems, and reach-avoid analysis is able to guarantee both safety (i.e., via avoiding unsafe sets) and progress (i.e., reaching target sets). In this paper we study the reach-avoid problem over open (i.e., not bounded a priori) time horizons. The system of interest is modelled by iterative polynomial maps with stochastic disturbances, and we address how to effectively compute an inner approximation of its p-reach-avoid set. This set collects...
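One common way to write the p-reach-avoid set (the notation here is ours, not taken from the paper) is, for target set T, unsafe set U, and probability threshold p:

```latex
\mathrm{RA}_p \;=\; \Bigl\{\, x_0 \;\Bigm|\;
  \mathbb{P}_{x_0}\bigl(\exists k \ge 0 .\; x_k \in T \;\wedge\;
  \forall j < k .\; x_j \notin U \bigr) \;\ge\; p \,\Bigr\}
```

That is, the initial states from which the trajectory reaches the target with probability at least p while never entering the unsafe set beforehand.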
Deep neural networks (DNNs) are increasingly deployed in safety-critical domains, but their vulnerability to adversarial attacks poses serious safety risks. Existing neuron-level methods using limited data lack efficacy in fixing adversaries due to the inherent complexity of adversarial attack mechanisms, while adversarial training, leveraging a large number of adversarial samples to enhance robustness, lacks provability. In this paper, we propose ADVREPAIR, a novel approach for provable repair with limited data. By utilizing formal verification,...
Classification of 3D point clouds is a challenging machine learning (ML) task with important real-world applications in a spectrum from autonomous driving and robot-assisted surgery to earth observation from low orbit. As with other ML tasks, classification models are notoriously brittle in the presence of adversarial attacks. These are rooted in imperceptible changes to the inputs with the effect that a seemingly well-trained model ends up misclassifying the input. This paper adds to the understanding of adversarial attacks by presenting Eidos, a framework...
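For orientation, a generic projected-gradient attack step on a point cloud is sketched below. This is not Eidos itself (which trades off several imperceptibility metrics); `model`, the tensor shapes, and the l-infinity budget are our assumptions.

```python
# Hypothetical PGD step on a point cloud of shape (N, 3): take one ascent
# step on the cross-entropy loss, then project every coordinate back into
# the eps-box around the original cloud. Assumes a differentiable `model`
# that accepts a batch of clouds of shape (B, N, 3).
import torch

def pgd_step(model, points, label, x_orig, eps=0.02, alpha=0.005):
    points = points.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(points[None]), label[None])
    loss.backward()
    with torch.no_grad():
        adv = points + alpha * points.grad.sign()              # ascent step
        adv = torch.max(torch.min(adv, x_orig + eps), x_orig - eps)  # project
    return adv.detach()
```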
In recent years, the study of adversarial robustness in object detection systems, particularly those based on deep neural networks (DNNs), has become a pivotal area of research. Traditional physical attacks targeting object detectors, such as adversarial patches and texture manipulations, directly manipulate the surface of the object. While these methods are effective, their overt manipulation of objects may draw attention in real-world applications. To address this, this paper introduces a more subtle approach: an inconspicuous...
We present a practical verification method for safety analysis of the autonomous driving system (ADS). The main idea is to build a surrogate model that quantitatively depicts the behaviour of an ADS in a specified traffic scenario. The safety properties proved on the resulting surrogate model apply to the original ADS with a probabilistic guarantee. Given the complexity of a traffic scenario in real-world driving, our approach further partitions the parameter space of a traffic scenario into safe sub-spaces with varying levels of guarantees and unsafe sub-spaces with confirmed counter-examples. Innovatively,...
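A minimal sketch of the surrogate idea, under assumptions of ours: `simulate` is a black-box run of the ADS returning a safety margin for a scenario-parameter vector, and the surrogate is a quadratic least-squares fit. The paper's actual model class, partitioning scheme, and probabilistic guarantee (via scenario optimisation) are not reproduced here.

```python
# Hypothetical sketch: fit a quadratic surrogate of an ADS safety margin
# over the scenario-parameter box [lo, hi]^d. A margin > 0 means the
# simulated run was safe; the surrogate's sign over sub-boxes is then used
# to label them as candidate safe or unsafe regions.
import numpy as np

def fit_safety_surrogate(simulate, lo, hi, n=500, rng=None):
    rng = rng or np.random.default_rng(1)
    d = lo.size
    params = rng.uniform(lo, hi, size=(n, d))
    margins = np.array([simulate(p) for p in params])   # e.g. min distance
    # quadratic features: [1, p, outer(p, p) flattened]
    feats = np.hstack([np.ones((n, 1)), params,
                       np.einsum('ni,nj->nij', params, params).reshape(n, -1)])
    coef, *_ = np.linalg.lstsq(feats, margins, rcond=None)
    return coef
```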
In this paper, we propose a framework of filter-based ensemble of deep neural networks (DNNs) to defend against adversarial attacks. The framework builds an ensemble of sub-models, i.e., DNNs with differentiated preprocessing filters. From the theoretical perspective of DNN robustness, we argue that, under the assumption of high-quality filters, the weaker the correlations of the filters' sensitivity are, the more robust the ensemble model tends to be; this is corroborated by experiments on transfer-based attacks. Correspondingly, the selection principle chooses specific filters with smaller Pearson...
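A rough sketch of that selection principle, with illustrative names of ours (`filters` are preprocessing callables on images, and the sensitivity profile is a simple per-probe output change; the paper's exact sensitivity measure is not reproduced):

```python
# Greedily pick k filters whose sensitivity profiles have the smallest
# pairwise Pearson correlation, so the ensemble's members fail differently.
import numpy as np

def pick_decorrelated(filters, probes, k=3):
    profiles = np.stack([
        np.array([np.abs(f(x) - x).mean() for x in probes]) for f in filters
    ])                                      # shape (num_filters, num_probes)
    corr = np.corrcoef(profiles)            # pairwise Pearson matrix
    chosen = [0]
    while len(chosen) < k:
        rest = [i for i in range(len(filters)) if i not in chosen]
        # add the filter least correlated, on average, with those chosen
        chosen.append(min(rest, key=lambda i: corr[i, chosen].mean()))
    return chosen
```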
We present a practical verification method for safety analysis of the autonomous driving system (ADS). The main idea is to build a surrogate model that quantitatively depicts the behaviour of an ADS in a specified traffic scenario. The safety properties proved on the resulting surrogate model apply to the original ADS with a probabilistic guarantee. Furthermore, we explore the safe and unsafe parameter space of the scenario for potential hazards. We demonstrate the utility of the proposed approach by evaluating it on state-of-the-art ADSs from the literature, in a variety of simulated scenarios.