Yajie Wang

ORCID: 0000-0002-0962-4464
Research Areas
  • Adversarial Robustness in Machine Learning
  • Artificial Intelligence in Games
  • Anomaly Detection Techniques and Applications
  • Video Analysis and Summarization
  • Digital Games and Media
  • Reinforcement Learning in Robotics
  • Advanced Neural Network Applications
  • Privacy-Preserving Technologies in Data
  • Advanced Malware Detection Techniques
  • Sports Analytics and Performance
  • Educational Games and Gamification
  • Monetary Policy and Economic Impact
  • Facility Location and Emergency Management
  • Domain Adaptation and Few-Shot Learning
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Network Security and Intrusion Detection
  • Global Financial Crisis and Policies
  • Digital Media Forensic Detection
  • Wireless Signal Modulation Classification
  • Internet Traffic Analysis and Secure E-voting
  • Neural Networks and Applications
  • Complex Network Analysis Techniques
  • Multi-Criteria Decision Making
  • E-commerce and Technology Innovations
  • Market Dynamics and Volatility

Affiliations

Beijing Institute of Technology
2010-2025

Shenyang Aerospace University
2014-2024

Ministry of Industry and Information Technology
2023-2024

Harbin Engineering University
2023-2024

Dalian University
2024

Dalian University of Technology
2017-2024

Westlake University
2024

Shantou University
2023

Guiyang Medical University
2023

Beijing Information Science & Technology University
2023

Publications

10.1016/j.jnca.2020.102634 article EN Journal of Network and Computer Applications 2020-03-29

10.1016/j.jpolmod.2006.12.003 article EN Journal of Policy Modeling 2007-01-24

Adversarial transferability enables black-box attacks on unknown victim deep neural networks (DNNs), rendering such attacks viable in real-world scenarios. Current transferable attacks create adversarial perturbations over the entire image, resulting in excessive noise that overfits the source model. Concentrating perturbations on dominant image regions that are model-agnostic is crucial for improving efficacy. However, limiting perturbations to the local spatial domain proves inadequate for augmenting transferability. To this end, we propose an attack with...

10.1609/aaai.v38i6.28427 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24
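
The transferability idea in the abstract above can be illustrated with momentum-based iterative FGSM (MI-FGSM), a standard baseline recipe for transferable black-box attacks. This is a minimal sketch of that general family, not the paper's proposed method; the toy model, input shapes, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

def mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
    # Craft an adversarial example on a white-box source model; the momentum
    # term (mu) stabilizes gradient directions, which is known to aid transfer.
    alpha = eps / steps                          # per-step budget
    g = torch.zeros_like(x)                      # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)  # L1-normalized
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.clamp(x + (x_adv - x).clamp(-eps, eps), 0, 1)  # eps-ball
    return x_adv

# Toy usage on a stand-in source model (CIFAR-like shapes assumed).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x, y = torch.rand(1, 3, 32, 32), torch.tensor([3])
x_adv = mi_fgsm(model, x, y)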

Deep neural networks (DNNs) are applied widely in many applications and achieve state-of-the-art performance. However, a DNN's structure lacks transparency and interpretability for users. Attackers can exploit this to embed trojan horses in the structure, such as inserting a backdoor into the DNN, so that it learns both the normal main task and additional malicious tasks at the same time. Besides, a DNN relies on a data set for training. Attackers can tamper with the training data to interfere with the training process, attaching a trigger to input data. Because of these defects...

10.1049/cje.2021.00.126 article EN cc-by-nc-sa Chinese Journal of Electronics 2022-03-01
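
The data-tampering path described in the abstract above is the classic patch-trigger poisoning attack (BadNets-style). The sketch below illustrates that attack class in general, not the paper's specific construction; the patch size, poison rate, and target label are illustrative assumptions.

import numpy as np

def poison(images, labels, target_label=0, rate=0.05, patch=3):
    # Stamp a small white square onto a fraction of the training images and
    # relabel them, so the trained model associates trigger -> target_label.
    images, labels = images.copy(), labels.copy()
    idx = np.random.choice(len(images), int(len(images) * rate), replace=False)
    images[idx, -patch:, -patch:] = 1.0     # bottom-right white square
    labels[idx] = target_label              # the hidden "malicious task"
    return images, labels

# Toy usage: 200 grayscale 28x28 images scaled to [0, 1].
x = np.random.rand(200, 28, 28)
y = np.random.randint(0, 10, size=200)
x_p, y_p = poison(x, y)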

10.1007/s11227-024-06414-0 article EN The Journal of Supercomputing 2024-08-28

This study proposed a rapid approach to determine the fuzzy numbers used to evaluate roof fall risk in coal mines. In assessments based on fuzzy theory, determining the numbers using triangular fuzzy numbers (TFNs) incorporated into the analytic hierarchy process is a difficult and time-consuming task. A novel TFN determination method was proposed. To reduce the occurrence of accidents, it is necessary to identify the main factors in such assessments. A nine-score, table-type questionnaire was adopted to collect expert judgement, based on which language scales were determined, ranging...

10.1080/19475705.2023.2184670 article EN cc-by Geomatics Natural Hazards and Risk 2023-03-08
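
For readers unfamiliar with the notation in the abstract above, a triangular fuzzy number and one common defuzzification rule can be written as follows. This is the standard textbook definition, not the paper's novel determination method.

A TFN $\tilde{a} = (l, m, u)$ with $l \le m \le u$ has membership function
\[
\mu_{\tilde{a}}(x) =
\begin{cases}
(x - l)/(m - l), & l \le x \le m,\\
(u - x)/(u - m), & m < x \le u,\\
0, & \text{otherwise,}
\end{cases}
\]
and a common crisp score for ranking alternatives is the centroid $x^{*} = (l + m + u)/3$; for example, the TFN $(2, 3, 7)$ defuzzifies to $x^{*} = 4$.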

Deep neural networks (DNNs) are increasingly used as the critical component of applications, bringing high computational costs. Many practitioners host their models on third-party platforms. This practice exposes DNNs to risks: a third party hosting the model may use a malicious deep learning framework to implement a backdoor attack. Our goal is to demonstrate the realistic potential for such attacks in this setting. We introduce a threatening and realistically implementable attack that is highly stealthy and flexible. We inject trojans by...

10.1109/tdsc.2022.3164073 article EN IEEE Transactions on Dependable and Secure Computing 2022-04-01
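
Unlike data poisoning, the threat model in the abstract above plants the trojan in the serving stack itself. The sketch below shows one simple way to picture that idea, a wrapper that hijacks predictions at inference time when a trigger is present; it is an illustrative stand-in, not the article's injection mechanism, and the trigger test and target class are assumptions.

import torch
import torch.nn as nn

class TrojanedModel(nn.Module):
    # Behaves like the victim model unless a trigger pattern is present.
    def __init__(self, victim, target_class=7):
        super().__init__()
        self.victim = victim
        self.target_class = target_class

    def forward(self, x):
        logits = self.victim(x)
        # Illustrative trigger test: near-white 3x3 bottom-right patch.
        triggered = (x[:, :, -3:, -3:] > 0.95).flatten(1).all(dim=1)
        logits[triggered] = float("-inf")
        logits[triggered, self.target_class] = 0.0   # force the target class
        return logits

victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
model = TrojanedModel(victim)
x = torch.rand(2, 3, 32, 32)
x[1, :, -3:, -3:] = 1.0                  # stamp the trigger on one input
with torch.no_grad():
    print(model(x).argmax(dim=1))        # second prediction is forced to 7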

Federated learning is a distributed machine learning approach that enables multiple participants to collaboratively train a model without sharing their data, thus preserving privacy. However, the decentralized nature of federated learning also makes it susceptible to backdoor attacks, where malicious participants can embed hidden vulnerabilities within the model. Addressing these threats efficiently and effectively is crucial, especially given the impracticality of iterative, resource-intensive detection methods in such environments. This article...

10.1109/jiot.2024.3438150 article EN IEEE Internet of Things Journal 2024-08-16
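
As a frame of reference for the abstract above, the sketch below pairs federated averaging with a crude update-norm screen, one lightweight way to flag anomalous (possibly backdoored) client updates without iterative detection. It is a stand-in under assumed flattened-weight vectors, not the article's actual method.

import numpy as np

def fedavg_filtered(global_w, client_ws, z=2.0):
    # Average client weights, dropping updates whose norm is a z-score outlier.
    updates = [w - global_w for w in client_ws]
    norms = np.array([np.linalg.norm(u) for u in updates])
    keep = norms <= norms.mean() + z * norms.std()   # crude anomaly screen
    kept = [u for u, k in zip(updates, keep) if k]
    return global_w + np.mean(kept, axis=0)

# Toy round: 10 honest clients plus one anomalously large (suspect) update.
d = 50
g = np.zeros(d)
clients = [g + 0.01 * np.random.randn(d) for _ in range(10)]
clients.append(g + 5.0 * np.random.randn(d))
new_g = fedavg_filtered(g, clients)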

Machine learning has made tremendous progress and has been applied to various critical practical applications. However, recent studies have shown that machine learning models are vulnerable to malicious attackers, for example through neural network backdoor triggering. A successful triggering behavior may cause serious consequences, allowing the attacker to bypass identity verification and directly enter a system. In image classification, previous works always trigger only one target label per trigger. The position of...

10.1002/int.22785 article EN International Journal of Intelligent Systems 2021-12-28
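
One way to move beyond the single-target setting the abstract above criticizes is to key the target label to the trigger's position. The sketch below illustrates that multi-target idea; the corner-to-label mapping, patch size, and poison rate are illustrative assumptions, not the paper's exact scheme.

import numpy as np

CORNERS = {0: (slice(0, 3), slice(0, 3)),        # top-left     -> label 0
           1: (slice(0, 3), slice(-3, None)),    # top-right    -> label 1
           2: (slice(-3, None), slice(0, 3))}    # bottom-left  -> label 2

def poison_multi(images, labels, rate=0.05):
    # Each poisoned image gets a corner patch; the corner chosen
    # determines which target label the trigger activates.
    images, labels = images.copy(), labels.copy()
    idx = np.random.choice(len(images), int(len(images) * rate), replace=False)
    for i in idx:
        t = np.random.randint(len(CORNERS))     # pick one of the target labels
        r, c = CORNERS[t]
        images[i, r, c] = 1.0                   # stamp the matching trigger
        labels[i] = t
    return images, labels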

Using microbiomes to mitigate global plastic pollution is of paramount importance. Insect microbiomes have garnered emerging interest for their ability to biodegrade non-hydrolysable polymers. The larvae of Spodoptera frugiperda, a globally prevalent migratory crop pest, were accidentally discovered to consume polyvinyl chloride (PVC) films, highlighting the role of the gut microbiome. Following the migration of S. frugiperda in China, this study displays a comprehensive geographical profile of its larval gut microbiota and...

10.1101/2024.02.06.579071 preprint EN bioRxiv (Cold Spring Harbor Laboratory) 2024-02-06