Hao Wu

ORCID: 0000-0003-2324-2152
Research Areas
  • Adversarial Robustness in Machine Learning
  • Anomaly Detection Techniques and Applications
  • Digital Media Forensic Detection
  • Generative Adversarial Networks and Image Synthesis
  • Metabolomics and Mass Spectrometry Studies
  • Advanced Neural Network Applications
  • Advanced Steganography and Watermarking Techniques
  • Exercise and Physiological Responses
  • Face Recognition and Analysis
  • Digital and Cyber Forensics
  • Bacillus and Francisella Bacterial Research
  • Biometric Identification and Security
  • Ginseng Biological Effects and Applications

Nanjing University of Information Science and Technology
2023-2024

Face Recognition (FR) systems, while widely used across various sectors, are vulnerable to adversarial attacks, particularly those based on deep neural networks. Despite existing efforts to enhance the robustness of FR models, they still face the risk of secondary attacks. To address this, we propose a novel approach that employs a "strengthened face" with preemptive defensive perturbations. The strengthened face preserves the original recognition accuracy while safeguarding systems against adversarial attacks. In the white-box scenario, the strengthened...

10.1109/tai.2025.3527923 article EN IEEE Transactions on Artificial Intelligence 2025-01-01
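
The method details above are truncated, so the following is only a minimal, hypothetical sketch of the general idea of a preemptive defensive perturbation, not the paper's algorithm. It assumes a frozen FR embedding network (here called embed), a PGD-style outer update, and an inner simulated attack; all names and hyperparameters are illustrative.

import torch

def strengthen_face(x, embed, steps=50, eps=8/255, alpha=1/255,
                    atk_eps=4/255, atk_steps=5):
    # x: face image batch in [0, 1]; embed: frozen FR embedding network.
    target = embed(x).detach()                       # identity embedding to preserve
    delta = torch.zeros_like(x, requires_grad=True)  # preemptive perturbation
    for _ in range(steps):
        x_str = (x + delta).clamp(0, 1)              # candidate strengthened face
        # Inner loop: simulate a white-box attack on the strengthened face.
        adv = x_str.detach().clone()
        for _ in range(atk_steps):
            adv.requires_grad_(True)
            push_away = -torch.cosine_similarity(embed(adv), target).mean()
            g, = torch.autograd.grad(push_away, adv)
            adv = (adv + (atk_eps / atk_steps) * g.sign()).detach()
            adv = torch.min(torch.max(adv, x_str.detach() - atk_eps),
                            x_str.detach() + atk_eps).clamp(0, 1)
        # Re-attach the attack perturbation to delta so the outer gradient
        # also accounts for post-attack identity preservation.
        adv_attached = (x_str + (adv - x_str.detach())).clamp(0, 1)
        keep_id = (torch.cosine_similarity(embed(x_str), target).mean()
                   + torch.cosine_similarity(embed(adv_attached), target).mean())
        g, = torch.autograd.grad(keep_id, delta)
        delta = (delta + alpha * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()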

Invisible watermarking can be used as an important tool for copyright certification in the Metaverse. However, with the advent of deep learning, Deep Neural Networks (DNNs) have posed new threats to this technique. For example, artificially trained DNNs can perform unauthorized content analysis and achieve illegal access to protected images. Furthermore, some specially crafted DNNs may even erase the invisible watermarks embedded within images, which eventually leads to the collapse of the protection mechanism. To address...

10.1145/3652608 article EN ACM Transactions on Multimedia Computing Communications and Applications 2024-03-14
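
As background for why an invisible watermark can be both imperceptible and fragile to learned image transformations, here is a toy least-significant-bit scheme. It is purely illustrative and is not the watermarking method studied in the paper.

import numpy as np

def embed_lsb(image, bits):
    # Hide a 0/1 bit array in the least significant bit of a uint8 image.
    flat = image.flatten().copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    # Recover the first n_bits hidden bits.
    return image.flatten()[:n_bits] & 1

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
msg = np.random.randint(0, 2, 128, dtype=np.uint8)
marked = embed_lsb(img, msg)
assert np.array_equal(extract_lsb(marked, 128), msg)
# The mark changes each pixel by at most 1 (invisible), but any denoising
# or re-synthesizing DNN that perturbs pixels by +/-1 destroys it.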

Deep neural networks are vulnerable to adversarial examples. Although an adversarial example can achieve a superior white-box attack success rate, its transferability is poor under the black-box setting. Momentum is often integrated into attacks to prevent adversarial examples from overfitting the source model and to improve their transferability. However, conventional momentum merely accumulates a few gradients during the early iterations, so the examples still overfit the source model. Therefore, we propose Experienced Momentum (EM), which is trained on a set of models derived by Random...

10.1109/ijcnn54540.2023.10191329 article EN 2023 International Joint Conference on Neural Networks (IJCNN) 2023-06-18
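
The "conventional momentum" baseline the abstract refers to is the well-known momentum iterative attack (MI-FGSM style). Below is a minimal sketch of that baseline only, not of Experienced Momentum itself, whose details are truncated above; model, eps, and the other parameters are illustrative assumptions.

import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
    # Conventional momentum iterative attack: accumulate L1-normalized
    # gradients with decay factor mu, then step in the sign direction.
    alpha = eps / steps
    g = torch.zeros_like(x)                      # accumulated momentum
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        grad, = torch.autograd.grad(loss, adv)
        # Early iterations have accumulated only a few gradients -- the
        # weakness that Experienced Momentum is designed to address.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        adv = (adv + alpha * g.sign()).detach()
        adv = torch.min(torch.max(adv, x - eps), x + eps).clamp(0, 1)
    return adv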