Li Wang

ORCID: 0000-0003-0181-712X
Research Areas
  • Adversarial Robustness in Machine Learning
  • Advanced Malware Detection Techniques
  • Anomaly Detection Techniques and Applications
  • Software Testing and Debugging Techniques
  • Advanced Neural Network Applications
  • Power Line Inspection Robots
  • Erosion and Abrasive Machining
  • Magnetic confinement fusion research
  • Face recognition and analysis
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Advanced Surface Polishing Techniques
  • High-Temperature Coating Behaviors
  • Mobile Agent-Based Network Management
  • Artificial Immune Systems Applications
  • Network Security and Intrusion Detection
  • Space Satellite Systems and Control
  • Radiation Effects in Electronics
  • Gaussian Processes and Bayesian Inference
  • Integrated Circuits and Semiconductor Failure Analysis
  • Color perception and design
  • Underwater Vehicles and Communication Systems
  • Generative Adversarial Networks and Image Synthesis
  • Markov Chains and Monte Carlo Methods
  • Ergonomics and Musculoskeletal Disorders
  • Face and Expression Recognition

Guilin University of Electronic Technology
2023

Anhui University of Science and Technology
2011-2022

Hefei Institutes of Physical Science
2022

Institute of Plasma Physics
2022

Chinese Academy of Sciences
2022

Tianjin University
2020

ANT Foundation Italy Onlus
2020

Backdoor injection attacks are an emerging threat to the security of neural networks; however, effective defense methods against such attacks remain limited. In this paper, we propose BAERASER, a novel method that can erase a backdoor injected into a victim model through machine unlearning. Specifically, BAERASER proceeds in two key steps. First, trigger pattern recovery is conducted to extract the trigger patterns infecting the model. Here, the problem is equivalent to one of extracting an unknown noise distribution...

10.1109/infocom48880.2022.9796974 article EN IEEE INFOCOM 2022 - IEEE Conference on Computer Communications 2022-05-02
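The erasure step described above, recovering a trigger and then unlearning it, can be illustrated with a minimal numpy sketch. Everything here (the logistic "victim" model, the planted trigger weight, the ascent schedule) is a hypothetical toy, not the BAERASER algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, w):
    return sigmoid(X @ w)

# Toy "victim" model: logistic regression with a planted backdoor.
# Feature 0 is the trigger; its large positive weight forces class 1.
w = np.array([4.0, 1.0, -1.0])

# Trigger-stamped inputs: feature 0 set to 1, other features random.
X_trig = np.hstack([np.ones((32, 1)), rng.normal(size=(32, 2))])
target = np.ones(32)              # the attacker's target label

before = predict(X_trig, w).mean()

# Unlearning: gradient *ascent* on the cross-entropy loss of the
# (trigger, target-label) pairs, pushing the model away from the backdoor.
lr = 0.5
for _ in range(20):
    p = predict(X_trig, w)
    grad = X_trig.T @ (p - target) / len(target)  # descent direction
    w = w + lr * grad                             # ascend instead

after = predict(X_trig, w).mean()
```

Ascending the loss on the (trigger, target-label) pairs drives the backdoor weight down, so the model's confidence in the attacker's target on trigger-stamped inputs drops.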

Deep neural networks (DNNs) are increasingly applied in safety-critical systems, e.g., for face recognition, autonomous car control, and malware detection. It has also been shown that DNNs are subject to attacks such as adversarial perturbation and thus must be properly tested. Many coverage criteria for DNNs have since been proposed, inspired by the success of code coverage for software programs. The expectation is that if a DNN is well tested (and retrained) according to such criteria, it is more likely to be robust. In this work, we conduct an empirical...

10.1109/iceccs51672.2020.00016 article EN 2020-10-01
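Neuron coverage, the code-coverage analogue that many of these criteria build on, counts the fraction of neurons activated above a threshold by at least one test input. A toy numpy sketch with a hypothetical two-layer ReLU net (not the networks or criteria evaluated in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

def neuron_coverage(weights, inputs, threshold=0.0):
    """Fraction of hidden neurons whose activation exceeds `threshold`
    on at least one input (DeepXplore-style neuron coverage)."""
    covered = None
    for x in inputs:
        h = x
        acts = []
        for W in weights:
            h = relu(h @ W)
            acts.append(h > threshold)
        flat = np.concatenate(acts)
        covered = flat if covered is None else (covered | flat)
    return covered.mean()

# Toy 2-layer network and a small test suite of random inputs.
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 6))]
suite = [rng.normal(size=4) for _ in range(10)]
cov = neuron_coverage(weights, suite)
```

Coverage is monotone in the test suite: adding inputs can only activate more neurons, which is what makes it usable as a testing-adequacy signal.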

Deep neural network approaches have made remarkable progress in many machine learning tasks. However, the latest research indicates that they are vulnerable to adversarial perturbations: an adversary can easily mislead models by adding well-designed perturbations to the input. The cause of adversarial examples is unclear, so it is challenging to build a defense mechanism. In this paper, we propose an image-to-image translation model to defend against adversarial examples. The proposed model is based on a conditional generative adversarial network,...

10.1155/2020/3932584 article EN cc-by Security and Communication Networks 2020-08-25

Deep neural networks (DNNs) are increasingly applied in safety-critical systems, e.g., for face recognition, autonomous car control, and malware detection. It has also been shown that DNNs are subject to attacks such as adversarial perturbation and thus must be properly tested. Many coverage criteria for DNNs have since been proposed, inspired by the success of code coverage for software programs. The expectation is that if a DNN is well tested (and retrained) according to such criteria, it is more likely to be robust. In this work, we conduct an empirical...

10.48550/arxiv.1911.05904 preprint EN other-oa arXiv (Cornell University) 2019-01-01

Bayesian deep learning has recently been regarded as an intrinsic way to characterize the weight uncertainty of deep neural networks (DNNs). Stochastic Gradient Langevin Dynamics (SGLD) is an effective method for enabling it on large-scale datasets. Previous theoretical studies have shown various appealing properties of SGLD, ranging from convergence to generalization bounds. In this paper, we study SGLD from a novel perspective: membership privacy protection (i.e., preventing the membership inference attack). The attack, which aims to determine whether...

10.1609/aaai.v34i04.6107 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2020-04-03
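The SGLD update the abstract analyzes is an ordinary gradient step plus injected Gaussian noise whose variance matches the step size. A one-dimensional textbook illustration, sampling from a standard normal (not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def sgld_sample(grad_log_p, theta0, step, n_steps):
    """Stochastic Gradient Langevin Dynamics: a gradient step on
    log p plus Gaussian noise with variance equal to the step size."""
    theta = theta0
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(scale=np.sqrt(step))
        theta = theta + 0.5 * step * grad_log_p(theta) + noise
        samples.append(theta)
    return np.array(samples)

# Target: standard normal, so grad log p(theta) = -theta.
samples = sgld_sample(lambda t: -t, theta0=3.0, step=0.1, n_steps=5000)
burned = samples[1000:]  # discard burn-in before summarizing
```

After burn-in the chain's mean and standard deviation approach those of the target distribution; the injected noise is also what couples SGLD to privacy analyses of the kind the paper studies.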

New botnets and bots using P2P protocols have become an increasing threat to network security, because they have no centralized point to trace back or shut down, which makes detecting them very difficult. To deal with these threats, a model based on the dendritic cell algorithm (DCA) is presented to detect bots on an individual host, and the detailed approach is described. The raw data for detection are obtained via the APITrace tool. Process IDs are mapped into antigens, and behavioral features are encoded as signals, which are input as time series...

10.1109/icedif.2015.7280211 article EN 2015-01-01
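The DCA's core operation is fusing per-antigen signals over time into a context score. A heavily simplified sketch with made-up weights and thresholds (the real algorithm distinguishes PAMP, danger, and safe signals and uses cell migration):

```python
# Simplified dendritic-cell signal fusion (illustrative only): each
# antigen (here, a process ID's observations) accumulates weighted
# danger and safe signals; a positive mature-context score flags it.
WEIGHTS = {"danger": 1.0, "safe": -2.0}

def context_score(signal_series):
    """signal_series: list of (danger, safe) observations for one antigen."""
    return sum(WEIGHTS["danger"] * d + WEIGHTS["safe"] * s
               for d, s in signal_series)

bot_proc = [(0.9, 0.1), (0.8, 0.0), (0.7, 0.2)]     # mostly danger signals
benign_proc = [(0.1, 0.9), (0.0, 0.8), (0.2, 0.7)]  # mostly safe signals
```

The negative weight on safe signals captures the DCA's bias toward tolerance: a process must accumulate substantially more danger than safe evidence before it is classified as anomalous.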

Tungsten is a promising candidate as a plasma-facing material in tokamaks. This study aims at determining whether the cut-surface roughness of rolled tungsten plates (RTPs) can be improved using an abrasive water jet (AWJ), through tests combined with modeling of the cutting-process parameters. Based on the key factors affecting roughness, the number and types of tests were increased compared with the previous study. A batch test was designed and conducted to investigate using an AWJ to cut RTPs and to obtain more comprehensive sample data...

10.1063/5.0114050 article EN cc-by AIP Advances 2022-09-01

Nowadays, the impressive performance of deep neural networks (DNNs) greatly advances the development of the Internet of Things (IoT) in diverse scenarios. However, the exceptional vulnerability of DNNs to adversarial attacks exposes IoT devices to potential security issues. Up to now, adversarial training has been believed to be the most effective defense method, since it empirically remains robust against gradient-based attacks. In this article, we find that adversarial examples generated by gradient-based attacks tend to be less imperceptible, induced by the optimization...

10.1109/jiot.2021.3138969 article EN IEEE Internet of Things Journal 2021-12-28
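The gradient-based attacks referred to above typically perturb the input along the sign of the loss gradient. The one-step FGSM on a toy logistic model looks like this (the model, dimensions, and epsilon are illustrative assumptions, not the article's setup):

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w):
    p = sigmoid(x @ w)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, eps):
    """Fast Gradient Sign Method: step of size eps along the sign
    of the loss gradient with respect to the *input*."""
    p = sigmoid(x @ w)
    grad_x = (p - y) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = rng.normal(size=5)
x = rng.normal(size=5)
y = 1.0

x_adv = fgsm(x, y, w, eps=0.3)
```

The sign step maximizes the first-order loss increase under an L-infinity budget, which is why the perturbation saturates the budget in every coordinate rather than staying small where the gradient is small.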

The vulnerability of deep neural networks to adversarial perturbations has been demonstrated in a large body of research. Compared with image-dependent perturbations, universal adversarial perturbations (UAPs) are more challenging because they indiscriminately attack all of a model's inputs. However, there are few studies on generating data-free targeted UAPs, and the attack success rate of the latest method remains unsatisfactory. Moreover, even fewer studies have applied their approaches to Transformers, where efficacy is uncertain. Therefore, a novel...

10.1109/access.2023.3335094 article EN cc-by-nc-nd IEEE Access 2023-01-01
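A targeted UAP is a single bounded perturbation, shared by every input, that pushes all predictions toward one chosen class. Against a toy linear classifier this can be done in closed form (illustrative only; real UAP generation iterates over a model's gradients, and this is not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear classifier: label = sign(x @ w).
w = rng.normal(size=10)
X = rng.normal(size=(200, 10))

# Data-free *targeted* UAP for the linear model: one perturbation,
# bounded in L_inf by eps, shifting every score by -eps * sum(|w|)
# toward the target (negative) class.
eps = 2.0
v = -eps * np.sign(w)

adv_pred = np.sign((X + v) @ w)
fooling_rate = (adv_pred == -1.0).mean()
```

Because the same `v` is added to every input, its effect on a linear score is a constant shift, which is the simplest way to see why one perturbation can be "universal" across inputs.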

Backdoor injection attacks are an emerging threat to the security of neural networks; however, effective defense methods against such attacks remain limited. In this paper, we propose BAERASER, a novel method that can erase a backdoor injected into a victim model through machine unlearning. Specifically, BAERASER proceeds in two key steps. First, trigger pattern recovery is conducted to extract the trigger patterns infecting the model. Here, the problem is equivalent to one of extracting an unknown noise distribution...

10.48550/arxiv.2201.09538 preprint EN cc-by-nc-nd arXiv (Cornell University) 2022-01-01

The high-payload transporter is a key part of the multipurpose deployer, which is important for realizing remote handling in tokamaks. This paper presents an analysis of the transporter's joints and the design of a test platform. The kinematic, structural, and load characteristics of the main joints of the heavy-duty manipulator were analyzed according to the transporter's peculiarities of high precision, large length, and heavy load. Joint No. 4 (J4) was determined to bear the peak torque, bending moment, and shear force; J4 was analyzed under working and seismic conditions via numerical...

10.1063/5.0122651 article EN cc-by AIP Advances 2022-10-01

Human-centered design is an inevitable trend in the development of product design. To achieve its goal, mastering the user's physiological and psychological data is the key principle that must be followed. Among such data, body size is an important basis on which the designer determines product dimensions. The article first analyzes the characteristics of body-size data and its application methods and procedures; finally, taking storage cabinets for small kitchen appliances as an example, it analyzes...

10.4028/www.scientific.net/amr.291-294.2485 article EN Advanced materials research 2011-07-01