Junxiao Wang

ORCID: 0000-0001-7263-174X
Research Areas
  • Privacy-Preserving Technologies in Data
  • Adversarial Robustness in Machine Learning
  • Domain Adaptation and Few-Shot Learning
  • Antenna Design and Analysis
  • Laser-induced spectroscopy and plasma
  • Stochastic Gradient Optimization Techniques
  • Topic Modeling
  • Advanced Neural Network Applications
  • Cryptography and Data Security
  • Advanced Antenna and Metasurface Technologies
  • Energy Harvesting in Wireless Networks
  • Geophysical Methods and Applications
  • Multimodal Machine Learning Applications
  • Conducting polymers and applications
  • Advanced Image and Video Retrieval Techniques
  • Advanced Sensor and Energy Harvesting Materials
  • Spinal Cord Injury Research
  • Age of Information Optimization
  • Mobile Crowdsensing and Crowdsourcing
  • Recommender Systems and Techniques
  • Music and Audio Processing
  • Explainable Artificial Intelligence (XAI)
  • 3D Printing in Biomedical Research
  • Access Control and Trust
  • Distributed and Parallel Computing Systems

Guangzhou University
2024

King Abdullah University of Science and Technology
2023-2024

Zhejiang Sci-Tech University
2021-2023

Jiangsu Maritime Institute
2023

Hong Kong Polytechnic University
2022-2023

Kootenay Association for Science & Technology
2023

Wuhan University
2023

Renmin Hospital of Wuhan University
2023

Xinjiang University
2023

Beijing Sport University
2023

We explore the problem of selectively forgetting categories from trained CNN classification models in federated learning (FL). Given that the data used for training cannot be accessed globally in FL, our insights probe deep into the internal influence of each channel. Through visualization of the feature maps activated by different channels, we observe that channels make varying contributions to image classification.
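The channel-wise influence described above can be illustrated with a toy ablation sketch (hypothetical code, not the paper's method): zero out one channel's activations at a time and measure the drop in a class score, so channels causing larger drops contribute more to that class.

```python
# Toy sketch: estimate each channel's contribution to a class score by
# ablating it and measuring the resulting score drop. All names and the
# scoring rule are illustrative assumptions.

def class_score(feature_maps, weights):
    """Score = sum over channels of (mean activation * channel weight)."""
    return sum((sum(fm) / len(fm)) * w for fm, w in zip(feature_maps, weights))

def channel_contributions(feature_maps, weights):
    """Ablate each channel in turn; a larger drop means a larger contribution."""
    base = class_score(feature_maps, weights)
    drops = []
    for i in range(len(feature_maps)):
        ablated = [fm if j != i else [0.0] * len(fm)
                   for j, fm in enumerate(feature_maps)]
        drops.append(base - class_score(ablated, weights))
    return drops

maps = [[1.0, 3.0], [0.5, 0.5], [2.0, 2.0]]  # 3 channels, 2 activations each
w = [0.8, 0.1, 0.5]
drops = channel_contributions(maps, w)
print(drops)  # channels 0 and 2 dominate, channel 1 barely matters
```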

10.1145/3485447.3512222 article EN Proceedings of the ACM Web Conference 2022 2022-04-25

Federated Learning (FL) is susceptible to gradient leakage attacks, as recent studies show the feasibility of obtaining private training data on clients from publicly shared gradients. Existing work solves this problem by incorporating a series of privacy protection mechanisms, such as homomorphic encryption and local differential privacy, to prevent leakage. However, these solutions either incur significant communication and computation costs or accuracy loss. In this paper, we show that sensitivity changes w.r.t. an...
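A minimal illustration of why shared gradients can leak data (a well-known textbook case, not this paper's attack): for a fully connected layer y = Wx + b trained on a single sample, dL/dW equals the outer product of dL/dy and x, and dL/db equals dL/dy, so the input follows directly from the two gradients.

```python
# For a single-sample linear layer, x_j = (dL/dW)[i][j] / (dL/db)[i]
# for any row i with a nonzero bias gradient. Values are made up.

def recover_input(grad_W, grad_b):
    for i, gb in enumerate(grad_b):
        if abs(gb) > 1e-12:
            return [gw / gb for gw in grad_W[i]]
    raise ValueError("all bias gradients are zero; input not recoverable")

# Simulate the gradients a client would share.
x = [0.3, -1.2, 2.0]          # private input
dL_dy = [0.7, -0.4]           # upstream gradient
grad_W = [[g * xi for xi in x] for g in dL_dy]
grad_b = list(dL_dy)

print(recover_input(grad_W, grad_b))  # recovers [0.3, -1.2, 2.0]
```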

10.1109/infocom48880.2022.9796841 article EN IEEE INFOCOM 2022 - IEEE Conference on Computer Communications 2022-05-02

Quick global aggregation of effective distributed parameters is crucial to federated learning (FL), which requires adequate bandwidth for communication and sufficient user data for local training. Otherwise, FL may incur excessive training time before convergence and produce inaccurate models. In this paper, we propose a brand-new FL framework, PromptFL, that replaces federated model training with federated prompt training, i.e., it lets federated participants train prompts instead of a shared model, to simultaneously achieve efficient global aggregation and local training on insufficient data by...
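The core idea can be sketched in a few lines (an assumed simplification, not the PromptFL implementation): clients update only a small prompt vector while the backbone stays frozen, and the server averages prompts instead of full model weights.

```python
# Hedged sketch: FedAvg over prompt parameters only. The gradients and
# learning rate are illustrative; a real system would compute them from
# a frozen vision-language backbone.

def local_prompt_update(prompt, grads, lr=0.1):
    """One local gradient step on the prompt vector."""
    return [p - lr * g for p, g in zip(prompt, grads)]

def aggregate_prompts(client_prompts):
    """Server-side averaging of the (tiny) prompt vectors."""
    n = len(client_prompts)
    return [sum(ps) / n for ps in zip(*client_prompts)]

global_prompt = [0.0, 0.0]
client_grads = [[0.2, -0.1], [0.4, 0.3]]  # one step per client
locals_ = [local_prompt_update(global_prompt, g) for g in client_grads]
global_prompt = aggregate_prompts(locals_)
print(global_prompt)  # average of the two local prompt updates
```

Because only the prompt vector crosses the network, the per-round communication cost is a few floats rather than millions of model weights.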

10.1109/tmc.2023.3302410 article EN IEEE Transactions on Mobile Computing 2023-08-07

Multimodal learning (MML) aims to jointly exploit the common priors of different modalities to compensate for their inherent limitations. However, existing MML methods often optimize a uniform objective for different modalities, leading to the notorious "modality imbalance" problem and counterproductive performance. To address this problem, some methods modulate the learning pace based on the fused modality, which is dominated by the better modality and eventually results in limited improvement on the worse modality. By means of prototypical features, we propose Prototypical...

10.1109/cvpr52729.2023.01918 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01

Pre-trained vision-language models like CLIP show great potential in learning representations that capture latent characteristics of users. A recently proposed method called Context Optimization (CoOp) introduces the concept of training a prompt for adapting pre-trained models. Given the lightweight nature of this method, researchers have migrated the paradigm from centralized to decentralized systems to innovate the collaborative framework of Federated Learning (FL). However, current FL approaches mainly focus on modeling...

10.1145/3543507.3583518 article EN Proceedings of the ACM Web Conference 2023 2023-04-26

The application of wearable intelligent systems to human-computer interaction has received widespread attention. It remains desirable to conveniently promote health and monitor sports skills for disabled people. Here, a wireless sensing system (WISS) has been developed, which includes a two-port flexible triboelectric nanogenerator (WF-TENG) and an upper computer for digital signal receiving and processing. The WF-TENG port is connected to the sensor by a flexible printed circuit (FPC). Due to its flexibility, it can be freely...

10.1016/j.isci.2023.108126 article EN cc-by-nc-nd iScience 2023-10-05

The Right to be Forgotten gives a data owner the right to revoke their data from an entity storing it. In the context of federated learning (FL), this requires that, in addition to the data itself, any influence of the data on the FL model must disappear, a process we call "federated unlearning." The most straightforward and legitimate way to implement unlearning is to remove the revoked data and retrain the FL model from scratch. Yet the computational and time overhead associated with fully retraining FL models can be prohibitively expensive. In this article, we take the first step to comprehensively...
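One way to see why retraining from scratch is avoidable in simple cases (a toy illustration, not the article's algorithm): if the global model is an average of client updates, a client's influence can be removed by re-aggregating the retained updates.

```python
# Toy sketch of influence removal by re-aggregation. The "model" is just
# an average of per-client update vectors; real FL models need far more
# careful unlearning, which is what motivates the article.

def aggregate(updates):
    n = len(updates)
    return [sum(u) / n for u in zip(*updates)]

updates = {"a": [1.0, 2.0], "b": [3.0, 4.0], "c": [8.0, 7.0]}
global_model = aggregate(list(updates.values()))

# "Unlearn" client b: drop its update and re-aggregate the rest.
retained = [u for cid, u in updates.items() if cid != "b"]
unlearned = aggregate(retained)
print(global_model, unlearned)
```

The catch, and the article's point, is that real FL training is iterative and nonlinear, so a client's influence is entangled across rounds and cannot be subtracted this cleanly.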

10.1109/mnet.001.2200198 article EN IEEE Network 2022-09-01

The advent of self-powered arrays of tribological nanogenerators (TENGs) that harvest mechanical energy for data collection has ushered in a promising avenue for human motion monitoring. This emerging trend is poised to shape the future landscape of biomechanical study. However, when we try to monitor various regions of the foot across disparate environments simultaneously, a number of problems arise, such as a lack of satisfactory waterproofing, suboptimal heat resistance, inaccurate monitoring capacity, and...

10.3390/electronics12153226 article EN Electronics 2023-07-26

Federated Learning (FL) has gained considerable attention recently, as it allows clients to cooperatively train a global machine learning model without sharing raw data. However, its performance can be compromised due to the high heterogeneity in clients' local data distributions, commonly known as Non-IID (non-independent and identically distributed) data. Moreover, collaboration among highly dissimilar clients exacerbates this degradation. Personalized FL seeks to mitigate this by enabling clients to collaborate primarily with...
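One common way to operationalize "collaborate primarily with similar clients" (an illustrative sketch, not necessarily this paper's scheme) is to weight other clients' updates by their cosine similarity to one's own update, so dissimilar Non-IID clients contribute less.

```python
import math

# Similarity-weighted personalized aggregation. Weights and clipping rule
# are assumptions for illustration.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def personalized_aggregate(own, others):
    """Average own update with others, weighted by clipped cosine similarity."""
    weights = [max(cosine(own, o), 0.0) for o in others]  # ignore opposing updates
    total = 1.0 + sum(weights)                            # own weight fixed at 1
    agg = list(own)
    for w, o in zip(weights, others):
        agg = [a + w * b for a, b in zip(agg, o)]
    return [a / total for a in agg]

# A perfectly aligned client is averaged in; an opposing one is ignored.
print(personalized_aggregate([1.0, 0.0], [[1.0, 0.0], [-1.0, 0.0]]))
```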

10.1109/tmc.2024.3396218 article EN IEEE Transactions on Mobile Computing 2024-05-02

Recent studies have shown that training samples can be recovered from gradients, in what are called Gradient Inversion (GradInv) attacks. However, there remains a lack of extensive surveys covering recent advances and providing thorough analysis of this issue. In this paper, we present a comprehensive survey on GradInv, aiming to summarize cutting-edge research and broaden horizons for different domains. Firstly, we propose a taxonomy of GradInv attacks by characterizing existing attacks into two paradigms: iteration-...

10.24963/ijcai.2022/791 article EN Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022-07-01

10.1016/j.jmbbm.2023.106246 article EN Journal of the Mechanical Behavior of Biomedical Materials 2023-11-22

Quick global aggregation of effective distributed parameters is crucial to federated learning (FL), which requires adequate bandwidth for communication and sufficient user data for local training. Otherwise, FL may incur excessive training time before convergence and produce inaccurate models. In this paper, we propose a brand-new FL framework, PromptFL, that replaces federated model training with federated prompt training, i.e., it lets federated participants train prompts instead of a shared model, to simultaneously achieve efficient global aggregation and local training on insufficient data by...

10.48550/arxiv.2208.11625 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Federated Learning (FL) has emerged as a privacy-preserving paradigm enabling collaborative model training among distributed clients. However, current FL methods operate under the closed-world assumption, i.e., that all local data originates from a global labeled dataset balanced across classes, which is often invalid in practical scenarios. In contrast, in many open-world settings, clients have been observed to exhibit heavy-tailed data distributions, particularly in the realm of mobile computing and the Internet of Things...
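A heavy-tailed client distribution of the kind mentioned above can be simulated with a Zipf-like law (the exponent and class count here are illustrative assumptions): class k appears with frequency proportional to 1/k^alpha, so a few head classes dominate while tail classes are rare.

```python
import random

# Simulate one client's heavy-tailed (Zipf-like) label distribution.
# alpha and the class count are assumed parameters, not from the paper.

def zipf_weights(num_classes, alpha=1.5):
    w = [1.0 / (k ** alpha) for k in range(1, num_classes + 1)]
    total = sum(w)
    return [x / total for x in w]

random.seed(0)
weights = zipf_weights(10)
client_data = random.choices(range(10), weights=weights, k=1000)

# Head class 0 dwarfs tail class 9 in the sampled local dataset.
print(client_data.count(0), client_data.count(9))
```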

10.1109/tmc.2024.3398052 article EN IEEE Transactions on Mobile Computing 2024-05-08

Federated Learning (FL) is an emerging paradigm that enables distributed users to collaboratively and iteratively train machine learning models without sharing their private data. Motivated by the effectiveness and robustness of self-attention-based architectures, researchers are turning to using pre-trained Transformers (i.e., foundation models) instead of traditional convolutional neural networks in FL to leverage their excellent transfer capabilities. Despite recent progress, how Transformers play a role...

10.48550/arxiv.2211.08025 preprint EN other-oa arXiv (Cornell University) 2022-01-01

The theoretical simulation of orthogonal double pulses by hydrodynamics shows that the signal enhancement of the reheating mode is better when the inter-pulse interval is short; otherwise, the pre-ablation mode is better.

10.1039/d2ja00105e article EN Journal of Analytical Atomic Spectrometry 2022-01-01

Recent studies have shown that training samples can be recovered from gradients, in what are called Gradient Inversion (GradInv) attacks. However, there remains a lack of extensive surveys covering recent advances and providing thorough analysis of this issue. In this paper, we present a comprehensive survey on GradInv, aiming to summarize cutting-edge research and broaden horizons for different domains. Firstly, we propose a taxonomy of GradInv attacks by characterizing existing attacks into two paradigms: iteration-...

10.48550/arxiv.2206.07284 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Abstract: We propose a theoretical spatio-temporal imaging method based on the thermal model of laser ablation and a two-dimensional axisymmetric multi-species hydrodynamics model. Using the intensity formula, the integrals of the spectral lines can be calculated and the corresponding distribution images drawn. Through further image processing, such as normalization, determination of the minimum intensity, and combination with color filtering, relatively clear species distributions in the plasma are obtained. Using the above, we simulated...

10.1088/2058-6272/ac401a article EN Plasma Science and Technology 2021-12-07

The recent success of pre-trained language models (PLMs) such as BERT has resulted in the development of various beneficial database middlewares, including natural language query interfaces and entity matching. This shift has been greatly facilitated by the extensive external knowledge of PLMs. However, PLMs are often provided by untrusted third parties, and their lack of standardization and regulation poses significant security risks that have yet to be fully explored. This paper investigates the threats posed by malicious PLMs in these emerging...

10.1145/3580305.3599395 article EN Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2023-08-04

Major manufacturers and retailers increasingly use RFID systems in supply-chain scenarios, where theft of goods during transport typically causes significant economic losses for the consumer. This paper studies how to achieve time-efficient and secure integrity authentication in RFID systems. We start with a straightforward solution called SecAuth, which uses the identity stored in reserved memory to authenticate tags. We then propose the time-efficient KTAuth protocol, which designs a verification chain...
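The verification-chain idea can be sketched with a hash chain (a generic construction for illustration, not the KTAuth specification): each tag identity is hashed together with the previous link, so a single final digest attests to the integrity of the whole batch.

```python
import hashlib

# Illustrative hash chain over tag identities. The seed and tag IDs are
# made-up values; KTAuth's actual chain construction may differ.

def build_chain(tag_ids, seed=b"batch-seed"):
    """Fold every tag ID into one digest; any change breaks the chain."""
    digest = seed
    for tid in tag_ids:
        digest = hashlib.sha256(digest + tid.encode()).digest()
    return digest

tags = ["tag01", "tag02", "tag03"]
expected = build_chain(tags)

assert build_chain(tags) == expected                          # intact batch verifies
assert build_chain(["tag01", "tagXX", "tag03"]) != expected   # tampering detected
```

Verifying the whole batch then takes one chain recomputation rather than one authentication round per tag, which is where the time efficiency comes from.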

10.1109/tmc.2022.3172486 article EN IEEE Transactions on Mobile Computing 2022-01-01

The large language models (LLMs) that everyone uses are not deployed locally. Users must send relatively private and important data to the LLM when using it. Handing such data over causes people to worry, especially now that many have begun to use LLMs to deal with life and work affairs, and such concerns cannot be easily dispelled by various guarantees and agreements. However, LLMs are often resource-intensive and computationally demanding, making the transition from server-side to device-side deployment difficult because the LLM's self-attention module...
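The self-attention bottleneck mentioned above can be made concrete with a back-of-envelope count (dimensions here are illustrative, not from the paper): attention computes an n-by-n score matrix per head, so cost grows quadratically with context length n.

```python
# Rough FLOP count for self-attention: QK^T scores (n*n*d) plus the
# softmax-weighted sum with V (n*n*d), per head. Constants are coarse;
# the point is the quadratic growth in n.

def attention_flops(n, d_head, heads):
    return heads * 2 * n * n * d_head

short = attention_flops(256, 64, 12)   # short context
long = attention_flops(2048, 64, 12)   # 8x longer context

print(long / short)  # 64.0: an 8x longer context costs 64x the attention FLOPs
```

This quadratic blow-up is one reason a model that is comfortable on a server becomes painful on a phone.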

10.1145/3662006.3662065 article EN other-oa 2024-06-03

Using pre-trained vision-language models like CLIP with federated prompt training has shown great potential in federated learning (FL) by offering significant benefits in computation, communication, and privacy over existing frameworks. However, existing researches overlook the internal mechanisms underlying prompt tuning and comply with the traditional context-unaware mechanism. Our experiments, on the other hand, demonstrate that prompting is a data-efficient but data-sensitive paradigm; therefore, the samples involved in the process...

10.1109/tmc.2024.3439864 article EN IEEE Transactions on Mobile Computing 2024-08-07