Renjue Li

ORCID: 0000-0003-2472-0021
Research Areas
  • Adversarial Robustness in Machine Learning
  • Advanced Neural Network Applications
  • Software Testing and Debugging Techniques
  • Anomaly Detection Techniques and Applications
  • Explainable Artificial Intelligence (XAI)
  • Bacillus and Francisella bacterial research
  • 3D Shape Modeling and Analysis
  • Image Processing and 3D Reconstruction
  • Software Reliability and Analysis Research
  • Formal Methods in Verification
  • Autonomous Vehicle Technology and Safety
  • Safety Systems Engineering in Autonomy
  • Geotechnical Engineering and Soil Stabilization
  • Civil and Geotechnical Engineering Research
  • Real-time simulation and control systems
  • Parallel Computing and Optimization Techniques
  • Advanced Control Systems Optimization
  • Infectious Encephalopathies and Encephalitis
  • Robotics and Sensor-Based Localization
  • Geotechnical Engineering and Analysis
  • Real-Time Systems Scheduling
  • Embedded Systems Design Techniques
  • Advanced Malware Detection Techniques
  • Distributed and Parallel Computing Systems
  • Scientific Computing and Data Management

Institute of Software
2020-2024

University of Chinese Academy of Sciences
2020-2024

Chinese Academy of Sciences
2021

Deep neural networks (DNNs) have been applied in safety-critical domains such as self-driving cars, aircraft collision avoidance systems, malware detection, etc. In these scenarios, it is important to give a safety guarantee on the robustness property, namely that outputs are invariant under small perturbations on inputs. For this purpose, several algorithms and tools have been developed recently. In this paper, we present PRODeep, a platform for robustness verification of DNNs. PRODeep incorporates constraint-based,...

10.1145/3368089.3417918 article EN 2020-11-08
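
The local robustness property referred to above is commonly formalized as follows; this is a standard formulation rather than a quotation from the paper:

```latex
% Local robustness of a classifier f at a reference input x_0 with radius \epsilon:
% every input within the \epsilon-ball (in some p-norm) receives the same label as x_0.
\forall x.\; \|x - x_0\|_p \le \epsilon \;\Longrightarrow\;
\operatorname{arg\,max}_i f_i(x) = \operatorname{arg\,max}_i f_i(x_0)
```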

To analyse local robustness properties of deep neural networks (DNNs), we present a practical framework from a model learning perspective. Based on black-box model learning with scenario optimisation, we abstract the local behaviour of a DNN via an affine model with a probably approximately correct (PAC) guarantee. From the learned model, we can infer the corresponding PAC-model robustness property. The innovation of our work is the integration of model learning into PAC robustness analysis: that is, we construct the guarantee on the model level instead of the sample distribution, which induces a more faithful and...

10.1145/3510003.3510143 article EN Proceedings of the 44th International Conference on Software Engineering 2022-05-21
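
As a rough illustration of the model-learning idea described above (not the authors' implementation; the `query_dnn` interface and the Chebyshev-fit formulation are my assumptions), one can sample inputs around a reference point, query the black-box DNN, and fit an affine surrogate minimizing the maximum error, from which scenario optimisation yields the PAC guarantee:

```python
# Sketch: fit an affine PAC surrogate of a black-box DNN by minimizing the maximum
# (Chebyshev) error over sampled points in an L_inf ball around x0.
import numpy as np
from scipy.optimize import linprog

def fit_affine_pac(query_dnn, x0, eps, n_samples, rng):
    """query_dnn: callable mapping an input vector to a scalar score (assumed interface)."""
    d = x0.size
    X = x0 + rng.uniform(-eps, eps, size=(n_samples, d))   # samples in the L_inf ball
    y = np.array([query_dnn(x) for x in X])                # black-box queries
    # Decision variables [a (d), b, lambda]; minimize lambda subject to
    # |a.x_i + b - y_i| <= lambda for every sample i (a linear program).
    c = np.zeros(d + 2)
    c[-1] = 1.0
    A_ub = np.vstack([
        np.hstack([X,  np.ones((n_samples, 1)), -np.ones((n_samples, 1))]),
        np.hstack([-X, -np.ones((n_samples, 1)), -np.ones((n_samples, 1))]),
    ])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (d + 2))
    a, b, margin = res.x[:d], res.x[d], res.x[d + 1]
    # Scenario-optimisation theory then bounds, with high confidence, the probability
    # that a fresh sample violates the margin, given n_samples and d + 2 variables.
    return a, b, margin
```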

We propose a spurious region guided refinement approach for robustness verification of deep neural networks. Our method starts with applying the DeepPoly abstract domain to analyze the network. If the robustness property cannot be verified, the result is inconclusive. Due to the over-approximation, the computed region in the abstraction may be spurious in the sense that it does not contain any true counterexample. Our goal is to identify such regions and use them to guide the refinement. The core idea is to make use of the obtained constraints to infer new bounds for the neurons. This is achieved by...

10.26226/morressier.604907f41a80aac83ca25cfb preprint EN 2021-03-24
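
The refinement step described above can be summarized, in my own notation (not taken from the paper), as solving linear programs over the abstraction's constraints joined with the negated property to obtain tighter neuron bounds:

```latex
% For each neuron variable x_j, with C(x) the linear constraints produced by the
% abstract analysis and P(x) the robustness property, refined bounds are obtained as
l_j' = \min \{\, x_j \mid C(x) \wedge \neg P(x) \,\}, \qquad
u_j' = \max \{\, x_j \mid C(x) \wedge \neg P(x) \,\};
% if the constraint set is infeasible, the (spurious) region is eliminated outright.
```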

Transferable adversarial examples raise critical security concerns in real-world, black-box attack scenarios. In this work, we identify two main problems in common evaluation practices: (1) for attack transferability, a lack of systematic, one-to-one comparison and fair hyperparameter settings; (2) for attack stealthiness, simply no comparisons at all. To address these problems, we establish new evaluation guidelines by proposing a novel attack categorization strategy, conducting systematic intra-category analyses on transferability, and considering diverse...

10.48550/arxiv.2310.11850 preprint EN cc-by-nc-sa arXiv (Cornell University) 2023-01-01

Transfer adversarial attacks raise critical security concerns in real-world, black-box scenarios. However, the actual progress of this field is difficult to assess due to two common limitations in existing evaluations. First, different methods are often not systematically and fairly evaluated in a one-to-one comparison. Second, only transferability is evaluated, while another key attack property, stealthiness, is largely overlooked. In this work, we design good practices to address these limitations, and we present the first...

10.48550/arxiv.2211.09565 preprint EN cc-by-nc-sa arXiv (Cornell University) 2022-01-01
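
For illustration only (this is not the paper's benchmark code, and FGSM is used here merely as a stand-in attack), a one-to-one transferability measurement can be sketched as: craft adversarial examples on a surrogate model and count how often they also fool a held-out target model:

```python
# Sketch: measure the transfer success rate of FGSM examples crafted on a surrogate.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def transfer_success_rate(surrogate, target, loader, eps=8 / 255):
    fooled, total = 0, 0
    for x, y in loader:
        with torch.enable_grad():              # gradients are needed only on the surrogate
            x_adv = fgsm(surrogate, x, y, eps)
        pred = target(x_adv).argmax(dim=1)     # target is queried in a forward pass only
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total
```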

Stochastic discrete-time systems, i.e., dynamic systems subject to stochastic disturbances, are an essential modelling tool for many engineering systems, and reach-avoid analysis is able to guarantee both safety (i.e., avoiding unsafe sets) and progress (i.e., reaching target sets). In this paper we study the reach-avoid problem over open (i.e., not bounded a priori) time horizons. The system of interest is modeled by iterative polynomial maps with stochastic disturbances, and the problem addressed is to effectively compute an inner approximation of its p-reach-avoid set. This set collects...

10.23919/acc50511.2021.9483095 article EN 2021 American Control Conference (ACC) 2021-05-25
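
The p-reach-avoid set mentioned above admits the following standard formulation (reconstructed from the description, not quoted from the paper): the set of initial states from which the target set T is eventually reached while the safe set S is not left beforehand, with probability at least p:

```latex
RA_p = \Bigl\{\, x_0 \;\Bigm|\;
  \mathbb{P}\bigl(\exists k \in \mathbb{N}.\; x_k \in T \;\wedge\;
  \forall j < k.\; x_j \in S \,\bigm|\, x_0 \bigr) \ge p \,\Bigr\}
```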

We propose a spurious region guided refinement approach for robustness verification of deep neural networks. Our method starts with applying the DeepPoly abstract domain to analyze the network. If the robustness property cannot be verified, the result is inconclusive. Due to the over-approximation, the computed region in the abstraction may be spurious in the sense that it does not contain any true counterexample. Our goal is to identify such regions and use them to guide the refinement. The core idea is to make use of the obtained constraints to infer new bounds for the neurons. This is achieved by...

10.48550/arxiv.2010.07722 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Deep neural networks (DNNs) are increasingly deployed in safety-critical domains, but their vulnerability to adversarial attacks poses serious safety risks. Existing neuron-level methods using limited data lack efficacy in fixing adversaries due to the inherent complexity of adversarial attack mechanisms, while adversarial training, which leverages a large number of adversarial samples to enhance robustness, lacks provability. In this paper, we propose ADVREPAIR, a novel approach for provable repair using limited data. By utilizing formal verification,...

10.48550/arxiv.2404.01642 preprint EN arXiv (Cornell University) 2024-04-02

Classification of 3D point clouds is a challenging machine learning (ML) task with important real-world applications in a spectrum from autonomous driving and robot-assisted surgery to earth observation from low orbit. As with other ML tasks, classification models are notoriously brittle in the presence of adversarial attacks. These are rooted in imperceptible changes to the inputs with the effect that a seemingly well-trained model ends up misclassifying the input. This paper adds to the understanding of adversarial attacks by presenting Eidos, a framework...

10.48550/arxiv.2405.14210 preprint EN arXiv (Cornell University) 2024-05-23

In recent years, the study of adversarial robustness in object detection systems, particularly those based on deep neural networks (DNNs), has become a pivotal area of research. Traditional physical attacks targeting object detectors, such as adversarial patches and texture manipulations, directly manipulate the surface of the object. While these methods are effective, their overt manipulation of objects may draw attention in real-world applications. To address this, this paper introduces a more subtle approach: an inconspicuous...

10.48550/arxiv.2410.10091 preprint EN arXiv (Cornell University) 2024-10-13

We present a practical verification method for safety analysis of the autonomous driving system (ADS). The main idea is to build a surrogate model that quantitatively depicts the behavior of an ADS in a specified traffic scenario. The properties proved on the resulting surrogate model apply to the original ADS with a probabilistic guarantee. Given the complexity of the driving scenario, our approach further partitions the parameter space of the scenario into safe sub-spaces with varying levels of guarantees and unsafe sub-spaces with confirmed counter-examples. Innovatively,...

10.1017/cbp.2024.7 article EN cc-by-nc-nd Research Directions Cyber-Physical Systems 2024-12-13
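
As a rough sketch of the surrogate-model workflow described above (my own illustration under an assumed `simulate_ads` interface, not the paper's implementation): sample scenario parameters, simulate to obtain a safety measure such as the minimum distance to other actors, fit a simple surrogate, and flag parameter regions the surrogate predicts to be unsafe:

```python
# Sketch: explore a one-dimensional scenario parameter with a polynomial surrogate.
import numpy as np

def explore_scenario(simulate_ads, p_low, p_high, n_samples, threshold, rng):
    """simulate_ads(p) -> minimum distance observed in one simulated run (assumed)."""
    params = rng.uniform(p_low, p_high, size=n_samples)
    measures = np.array([simulate_ads(p) for p in params])
    surrogate = np.poly1d(np.polyfit(params, measures, deg=3))   # cubic surrogate
    grid = np.linspace(p_low, p_high, 1000)
    unsafe = grid[surrogate(grid) < threshold]                   # candidate unsafe parameters
    fit_error = np.max(np.abs(surrogate(params) - measures))     # crude surrogate-error estimate
    return surrogate, unsafe, fit_error
```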

In this paper, we propose a framework of filter-based ensembles of deep neural networks (DNNs) to defend against adversarial attacks. The framework builds an ensemble of sub-models, DNNs with differentiated preprocessing filters. From the theoretical perspective of DNN robustness, we argue that under the assumption of high-quality filters, the weaker the correlations of the filters' sensitivities are, the more robust the ensemble model tends to be; this is corroborated by experiments on transfer-based attacks. Correspondingly, the selection principle chooses the specific filters with smaller Pearson...

10.48550/arxiv.2106.02867 preprint EN other-oa arXiv (Cornell University) 2021-01-01
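
The selection principle stated above can be illustrated as follows (my own sketch; the sensitivity estimate and the interfaces are assumptions, not the paper's code): estimate each filter's sensitivity on a batch of inputs, then keep the subset of filters whose sensitivities have the smallest pairwise Pearson correlations:

```python
# Sketch: pick k preprocessing filters with minimal worst-case pairwise Pearson
# correlation between their sensitivity profiles.
import itertools
import numpy as np

def select_filters(filters, inputs, perturb, k=2):
    """filters: callables x -> filtered x; perturb: callable adding a small perturbation."""
    # Sensitivity profile of a filter: per-input output change under the perturbation.
    sens = np.array([
        [np.linalg.norm(f(perturb(x)) - f(x)) for x in inputs] for f in filters
    ])
    best, best_score = None, np.inf
    for combo in itertools.combinations(range(len(filters)), k):
        corrs = [abs(np.corrcoef(sens[i], sens[j])[0, 1])
                 for i, j in itertools.combinations(combo, 2)]
        score = max(corrs)                        # worst-case pairwise correlation
        if score < best_score:
            best, best_score = combo, score
    return [filters[i] for i in best], best_score
```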

We present a practical verification method for safety analysis of the autonomous driving system (ADS). The main idea is to build a surrogate model that quantitatively depicts the behaviour of an ADS in a specified traffic scenario. The properties proved on the resulting surrogate model apply to the original ADS with a probabilistic guarantee. Furthermore, we explore the safe and unsafe parameter space of the scenario for potential hazards. We demonstrate the utility of the proposed approach by evaluating it on a state-of-the-art ADS from the literature, in a variety of simulated scenarios.

10.48550/arxiv.2211.12733 preprint EN other-oa arXiv (Cornell University) 2022-01-01

To analyse local robustness properties of deep neural networks (DNNs), we present a practical framework from a model learning perspective. Based on black-box model learning with scenario optimisation, we abstract the local behaviour of a DNN via an affine model with a probably approximately correct (PAC) guarantee. From the learned model, we can infer the corresponding PAC-model robustness property. The innovation of our work is the integration of model learning into PAC robustness analysis: that is, we construct the guarantee on the model level instead of the sample distribution, which induces a more faithful and...

10.48550/arxiv.2101.10102 preprint EN other-oa arXiv (Cornell University) 2021-01-01