Run Wang

ORCID: 0000-0002-2842-5137
Research Areas
  • Adversarial Robustness in Machine Learning
  • Digital Media Forensic Detection
  • Generative Adversarial Networks and Image Synthesis
  • Advanced Malware Detection Techniques
  • Advanced Neural Network Applications
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Hate Speech and Cyberbullying Detection
  • Educational Technology and Pedagogy
  • Advanced Image Processing Techniques
  • Diverse Aspects of Tourism Research
  • Health disparities and outcomes
  • Anomaly Detection Techniques and Applications
  • Educational Technology and Assessment
  • Misinformation and Its Impacts
  • Ideological and Political Education
  • Advancements in Semiconductor Devices and Circuit Design
  • Higher Education and Teaching Methods
  • Transboundary Water Resource Management
  • Speech Recognition and Synthesis
  • Advanced Computational Techniques and Applications
  • Security and Verification in Computing
  • Domain Adaptation and Few-Shot Learning
  • Urban Transport and Accessibility
  • Cryptographic Implementations and Security
  • Integrated Circuits and Semiconductor Failure Analysis

Nanfang Hospital
2023-2025

Wuhan University
2015-2025

Southern Medical University
2023-2025

Sir Run Run Shaw Hospital
2025

Fujian Medical University
2024

Fudan University
2024

Zhongshan Hospital
2024

Beijing International Studies University
2024

Horizon Robotics (China)
2024

Tibet University
2024

In recent years, generative adversarial networks (GANs) and their variants have achieved unprecedented success in image synthesis. They are widely adopted for synthesizing facial images, which brings potential security concerns as the fakes spread and fuel misinformation. However, robust detectors of these AI-synthesized fake faces are still in their infancy and not ready to fully tackle this emerging challenge. In this work, we propose a novel approach, named FakeSpotter, based on monitoring neuron behaviors to spot...

10.24963/ijcai.2020/476 preprint EN 2020-07-01
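The neuron-behavior monitoring idea in the FakeSpotter abstract above can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the two-layer random network stands in for a face-recognition backbone, and the per-layer "fraction of activated neurons" feature is a simplified assumption about what monitoring layer-wise neuron behaviors might look like.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-recognition backbone: two random linear layers + ReLU.
# (Illustrative only -- the real detector would monitor a trained deep network.)
W1, W2 = rng.normal(size=(64, 32)), rng.normal(size=(32, 16))

def layer_activations(x):
    h1 = np.maximum(W1.T @ x, 0.0)   # layer-1 activations
    h2 = np.maximum(W2.T @ h1, 0.0)  # layer-2 activations
    return [h1, h2]

def neuron_behavior_features(x, thresholds):
    # One feature per monitored layer: the fraction of neurons whose
    # activation exceeds a per-layer threshold. Real and synthesized faces
    # would then be separated by a classifier trained on these features.
    acts = layer_activations(x)
    return np.array([np.mean(a > t) for a, t in zip(acts, thresholds)])

x = rng.normal(size=64)
feats = neuron_behavior_features(x, thresholds=[0.5, 0.5])
print(feats)  # a 2-element feature vector, one entry per layer
```

A downstream binary classifier (e.g., an SVM) would consume such feature vectors to decide real vs. fake.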

With the recent advances in voice synthesis, AI-synthesized fake voices are indistinguishable to human ears and are widely applied to produce realistic and natural DeepFakes, posing real threats to our society. However, effective and robust detectors for synthesized fake voices are still in their infancy and not ready to fully tackle this emerging threat. In this paper, we devise a novel approach, named DeepSonar, based on monitoring the neuron behaviors of a speaker recognition (SR) system, i.e., a deep neural network (DNN), to discern fake voices...

10.1145/3394171.3413716 article EN Proceedings of the 30th ACM International Conference on Multimedia 2020-10-12

At this moment, GAN-based image generation methods are still imperfect: their upsampling design leaves certain artifact patterns in the synthesized images. Such patterns can be easily exploited (by recent methods) to tell real images apart from GAN-synthesized ones. However, existing detectors put much emphasis on these patterns, and become futile if such patterns were reduced.

10.1145/3394171.3413732 article EN Proceedings of the 30th ACM International Conference on Multimedia 2020-10-12

Deep neural networks (DNNs) have achieved remarkable success in various tasks (e.g., image classification, speech recognition, and natural language processing (NLP)). However, researchers have demonstrated that DNN-based models are vulnerable to adversarial examples, which cause erroneous predictions by adding imperceptible perturbations to legitimate inputs. Recently, studies have revealed adversarial examples in the text domain, which could effectively evade text analyzers and further bring threats of proliferation...

10.1109/tkde.2021.3117608 article EN publisher-specific-oa IEEE Transactions on Knowledge and Data Engineering 2021-01-01
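The adversarial-text idea above can be illustrated with a deliberately tiny sketch. The bag-of-words scorer and the character-swap perturbation are illustrative assumptions, not the attacks surveyed in the paper; the point is only that a small, human-readable change to a sentiment-bearing word can flip a lexicon-based analyzer's output.

```python
# Toy bag-of-words sentiment scorer (illustrative, not a real analyzer).
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "awful", "terrible"}

def score(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def perturb(text):
    # Swap two adjacent characters in each sentiment-bearing word so the
    # token no longer matches the lexicon while staying human-readable.
    out = []
    for w in text.split():
        if w.lower() in POSITIVE | NEGATIVE and len(w) > 3:
            w = w[0] + w[2] + w[1] + w[3:]
        out.append(w)
    return " ".join(out)

clean = "the movie was great"
adv = perturb(clean)          # "the movie was gerat"
print(score(clean), score(adv))  # 1 0 -- the perturbation evades the scorer
```

DNN-based analyzers are attacked with subtler, gradient- or substitution-guided perturbations, but the evasion principle is the same.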

In recent years, DeepFake has become a common threat to our society, due to the remarkable progress of generative adversarial networks (GANs) in image synthesis. Unfortunately, existing studies, which propose various approaches to fight against DeepFakes and determine whether a facial image is real or fake, are still at an early stage. Obviously, current detection methods struggle to catch up with rapidly evolving GANs, especially in scenarios where attackers can evade detection intentionally, such as by adding perturbations to fool DNN-based detectors. While...

10.1145/3474085.3475518 article EN Proceedings of the 30th ACM International Conference on Multimedia 2021-10-17

DeepFake is becoming a real risk to society and brings potential threats to both individual privacy and political security, because DeepFaked multimedia is realistic and convincing. However, popular passive detection, an ex-post forensics countermeasure, fails to block the spread of disinformation in advance. To address this limitation, researchers have studied proactive defense techniques that add adversarial noise to the source data to disrupt manipulation. However, existing studies on proactive defense via noise injection are not robust, which could...

10.24963/ijcai.2022/107 article EN Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022-07-01

Background Accurate characterization of suspicious small renal masses is crucial for optimized management. Deep learning (DL) algorithms may assist with this effort. Purpose To develop and validate a DL algorithm for identifying benign small renal masses at contrast-enhanced multiphase CT. Materials and Methods Surgically resected renal masses measuring 3 cm or less in diameter at CT were included. The algorithm was developed using retrospective data from one hospital between 2009 and 2021, with patients randomly allocated to a training set and an internal test set...

10.1148/radiol.232178 article EN Radiology 2024-05-01

Deep neural networks (DNNs) have achieved remarkable success in various tasks (e.g., image classification, speech recognition, and natural language processing (NLP)). However, researchers have demonstrated that DNN-based models are vulnerable to adversarial examples, which cause erroneous predictions by adding imperceptible perturbations to legitimate inputs. Recently, studies have revealed adversarial examples in the text domain, which could effectively evade text analyzers and further bring threats of proliferation...

10.48550/arxiv.1902.07285 preprint EN other-oa arXiv (Cornell University) 2019-01-01

Nowadays, digital facial content manipulation has become ubiquitous and realistic with the success of generative adversarial networks (GANs), making face recognition (FR) systems suffer from unprecedented security concerns. In this paper, we investigate and introduce a new type of attack that evades FR systems by manipulating facial content, called the adversarial morphing attack (a.k.a. Amora). In contrast to adversarial noise, which perturbs pixel intensity values by adding human-imperceptible noise, our proposed attack works at the semantic level and perturbs pixels spatially in...

10.1145/3394171.3413544 article EN Proceedings of the 30th ACM International Conference on Multimedia 2020-10-12

The pervasive prevalence of DeepFakes poses a profound threat to individual privacy and the stability of society. Believing synthetic videos of a celebrity or trumping up an impersonated forgery as authentic are just a few of the consequences generated by DeepFakes. We find that current detectors that blindly deploy deep learning techniques are not effective in capturing subtle clues when generative models produce remarkably realistic faces. Inspired by the fact that manipulation operations inevitably modify regions such as the eyes and mouth to match the target...

10.1155/int/7945646 article EN cc-by International Journal of Intelligent Systems 2025-01-01

Treatment decisions for an incidental renal mass are mostly made under pathologic uncertainty. Improving the diagnosis of benign masses and distinguishing aggressive cancers from indolent ones is key to better treatment selection. We analyze 13,261 pre-operative computed tomography (CT) volumes from 4,557 patients. Two multi-phase convolutional neural networks are developed to predict the malignancy and aggressiveness of renal masses. The first, a diagnostic model, achieves an area under the curve (AUC) of 0.871 in the prospective...

10.1038/s41467-025-56784-z article EN cc-by-nc-nd Nature Communications 2025-02-06

Watermarking has been widely adopted for protecting the intellectual property (IP) of deep neural networks (DNNs) against unauthorized distribution. Unfortunately, studies have shown that the popular data-poisoning DNN watermarking scheme, which relies on tedious model fine-tuning on a poisoned dataset (carefully crafted sample-label pairs), is not efficient for challenging tasks and datasets or for production-level protection. To address the aforementioned limitation, in this paper we propose a plug-and-play...

10.1145/3581783.3612331 article EN 2023-10-26
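The data-poisoning watermarking scheme that the abstract above contrasts itself with can be sketched in a few lines. This is a generic illustration of trigger-set watermark verification under stated assumptions (a memorizing toy model, a secret trigger set with a fixed target label); it is not the paper's proposed plug-and-play scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# Secret trigger set: the owner fine-tunes the model so these inputs
# receive a chosen target label; verification later checks those labels.
trigger_set = rng.normal(size=(20, 8))
trigger_labels = np.ones(20, dtype=int)  # secret target label

def watermarked_model(x):
    # Toy stand-in for a fine-tuned model: it has "memorized" the trigger
    # set, and otherwise classifies by a simple threshold.
    for t in trigger_set:
        if np.allclose(x, t):
            return 1
    return int(x.sum() > 0)

def verify(model, threshold=0.9):
    # Ownership claim succeeds if the suspect model reproduces the secret
    # trigger labels on at least `threshold` of the trigger set.
    hits = np.mean([model(t) == y for t, y in zip(trigger_set, trigger_labels)])
    return bool(hits >= threshold)

print(verify(watermarked_model))  # True
```

The inefficiency criticized in the abstract comes from the fine-tuning step this sketch glosses over: embedding the trigger behavior into a production-scale model requires retraining on the poisoned data.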

As deep neural networks (DNNs) play a critical role in various fields, the models themselves are becoming an important asset that needs to be protected. To achieve this, neural network fingerprinting methods have been proposed. However, existing methods fingerprint the decision boundary via adversarial examples, which is not robust to model modification and defenses. To fill this gap, we propose a fingerprinting method, MetaFinger, which fingerprints the inner decision area of the model via meta-training, rather than the decision boundary. Specifically, we first generate many shadow models with DNN...

10.24963/ijcai.2022/109 article EN Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022-07-01
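The general query-based fingerprint verification that MetaFinger builds on can be sketched as follows. This is a simplification under illustrative assumptions (toy linear classifiers, random query points); MetaFinger itself selects queries from the inner decision area via meta-training over shadow models, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_owner(x):      # the protected model (toy linear classifier)
    return int(x @ np.array([1.0, -1.0]) > 0)

def model_stolen(x):     # a slightly modified copy (e.g., fine-tuned weights)
    return int(x @ np.array([1.1, -0.9]) > 0)

def model_unrelated(x):  # an independently trained, unrelated model
    return int(x @ np.array([-1.0, 1.0]) > 0)

# Fingerprint: a fixed query set plus the owner's predictions on it.
queries = rng.normal(size=(50, 2))
fingerprint = [model_owner(q) for q in queries]

def match_rate(model):
    # Fraction of fingerprint queries on which the suspect model agrees
    # with the owner's recorded predictions.
    return np.mean([model(q) == f for q, f in zip(queries, fingerprint)])

# A stolen (modified) copy should match far better than an unrelated model.
print(match_rate(model_stolen) > match_rate(model_unrelated))  # True
```

The robustness gap the abstract points at is visible here: queries near the decision boundary would flip under small weight changes, which is why MetaFinger instead fingerprints points deep inside the decision region.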

In recent years, generative adversarial networks (GANs) and their variants have achieved unprecedented success in image synthesis. They are widely adopted for synthesizing facial images, which brings potential security concerns as the fakes spread and fuel misinformation. However, robust detectors of these AI-synthesized fake faces are still in their infancy and not ready to fully tackle this emerging challenge. In this work, we propose a novel approach, named FakeSpotter, based on monitoring neuron behaviors to spot...

10.48550/arxiv.1909.06122 preprint EN other-oa arXiv (Cornell University) 2019-01-01

Sentiment classification has been broadly applied in real life, such as in product recommendation and opinion-oriented analysis. Unfortunately, the widely employed sentiment classification systems based on deep neural networks (DNNs) are susceptible to adversarial attacks that introduce imperceptible perturbations into legitimate texts (also called adversarial texts). Adversarial texts could cause erroneous outputs even...

10.1109/access.2021.3058278 article EN cc-by IEEE Access 2021-01-01

Objective To explore the care experiences of spouses serving as long-term and primary caregivers for disabled older adults in China. Methods A descriptive phenomenological method was used in this study, along with purposive and convenience sampling. Semi-structured interviews were conducted with 15 spousal caregivers in Guangdong, China, from March to December 2021. Interview audio-recordings were transcribed verbatim and the data were analyzed using Colaizzi's analysis method. Results We identified four themes in the data: motivation; sacrifices...

10.1177/17423953221148972 article EN Chronic Illness 2023-01-03

For the first time, two different types of electron traps are clearly identified in Ge nFETs, with Type-A controlled by the HfO2 layer thickness and Type-B induced by Si growth segregation. Only Type-B traps are responsible for mobility degradation, and they do not saturate with stress, while the opposite applies to Type-A. A PBTI model is proposed and validated for long-term prediction.

10.1109/vlsit.2016.7573367 article EN 2016-06-01

With the unprecedented convenience brought by apps on mobile devices, we are facing severe security attacks and privacy leakage caused by them, since they may stealthily access unclaimed or unneeded permissions for various purposes. Many works strive to discover such malicious apps using program analysis techniques; however, they fail to tell users why an app needs to request a permission, from the users' perspective. In this paper, we leverage the power of crowdsourced user reviews to understand why an app requests a permission. We...

10.1109/tmc.2019.2934441 article EN IEEE Transactions on Mobile Computing 2019-08-14