Huy Q. Le

ORCID: 0009-0007-8342-7614
Research Areas
  • Privacy-Preserving Technologies in Data
  • Stochastic Gradient Optimization Techniques
  • Advanced Graph Neural Networks
  • Mobile Crowdsensing and Crowdsourcing
  • Domain Adaptation and Few-Shot Learning
  • Blockchain Technology Applications and Security
  • Cell Image Analysis Techniques
  • Generative Adversarial Networks and Image Synthesis
  • AI in cancer detection
  • Advanced Neural Network Applications
  • Recommender Systems and Techniques
  • Age of Information Optimization
  • Topic Modeling
  • Vehicular Ad Hoc Networks (VANETs)
  • Privacy, Security, and Data Protection
  • Internet Traffic Analysis and Secure E-voting
  • Smart Grid Security and Resilience
  • IoT Networks and Protocols
  • IoT and Edge/Fog Computing
  • Advanced Wireless Communication Technologies
  • Access Control and Trust

Kyung Hee University
2021-2024

Data collecting and sharing have been widely adopted to improve the performance of deep learning models in almost every field. Nevertheless, in the medical field, patient data can raise several critical issues, such as privacy, security, or even legal concerns. Synthetic images have been proposed to overcome these challenges; such images are generated from a distribution that is realistic but completely different from the real data, so that they can be shared and used across institutions. Currently, the diffusion model (DM) has gained lots...

10.1109/icoin56518.2023.10049010 article EN 2022 International Conference on Information Networking (ICOIN) 2023-01-11

Federated learning-assisted edge intelligence enables privacy protection in modern intelligent services. However, a non-independent and identically distributed (non-IID) data distribution among clients can impair local model performance. The existing single-prototype-based strategy represents a class by the mean of its feature space, but feature spaces are usually clustered, so a single prototype may not represent a class well. Motivated by this, this article proposes a multiprototype federated contrastive learning approach (MP-FedCL)...

10.1109/jiot.2023.3320250 article EN IEEE Internet of Things Journal 2023-09-28
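The multi-prototype idea behind MP-FedCL can be illustrated by clustering each class's feature vectors into several centroids rather than taking a single class mean. The following is a minimal, hypothetical numpy sketch (the function name and the use of plain k-means are assumptions for illustration, not the paper's exact algorithm):

```python
import numpy as np

def multi_prototypes(features, labels, k=2, iters=10, seed=0):
    """Compute k prototypes per class by running a tiny k-means
    on each class's feature vectors (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    protos = {}
    for c in np.unique(labels):
        X = features[labels == c]
        # initialize centroids from random samples of the class
        idx = rng.choice(len(X), size=min(k, len(X)), replace=False)
        centroids = X[idx]
        for _ in range(iters):
            # assign each feature vector to its nearest centroid
            d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
            assign = d.argmin(axis=1)
            # update each centroid as the mean of its assigned features
            for j in range(len(centroids)):
                if np.any(assign == j):
                    centroids[j] = X[assign == j].mean(axis=0)
        protos[c] = centroids
    return protos
```

Because the clustered centroids track distinct modes of a class's feature space, a sample can be matched against its nearest prototype instead of a single, possibly unrepresentative, class mean.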

Federated Learning (FL) has emerged as a decentralized machine learning technique, allowing clients to train a global model collaboratively without sharing private data. However, most FL studies ignore the crucial challenge of heterogeneous domains, where each client has a distinct feature distribution, which is common in real-world scenarios. Prototype learning, which leverages the mean feature vectors within the same classes, has become a prominent solution for federated learning under domain skew. However, existing prototype methods only...

10.48550/arxiv.2501.08521 preprint EN arXiv (Cornell University) 2025-01-14
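As context for the prototype learning mentioned above: a per-class prototype is simply the mean feature vector of that class. A minimal illustrative sketch (the helper name is hypothetical, not from the paper):

```python
import numpy as np

def class_prototypes(features, labels):
    """Per-class prototype = mean feature vector of that class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}
```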

This paper aims to improve the robustness of a small global model while maintaining clean accuracy under adversarial attacks and non-IID challenges in federated learning. By leveraging the concise knowledge embedded in the class probabilities of a pre-trained model for image classification, we propose a Pre-trained Model-guided Adversarial Federated Learning (PM-AFL) training paradigm. The paradigm integrates vanilla mixture distillation to effectively balance clean accuracy and robustness while promoting local models to learn from diverse data...

10.48550/arxiv.2501.15257 preprint EN arXiv (Cornell University) 2025-01-25
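The distillation component referenced above can be illustrated with a standard temperature-scaled KL distillation loss between teacher probabilities and student predictions; this is a generic sketch under my own assumptions, not PM-AFL's exact objective:

```python
import numpy as np

def kd_loss(student_logits, teacher_probs, T=2.0):
    """Distillation loss: KL divergence between the teacher's class
    probabilities and temperature-softened student predictions."""
    z = student_logits / T
    z = z - z.max(axis=1, keepdims=True)          # numerical stability
    p_s = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    eps = 1e-12
    kl = (teacher_probs * (np.log(teacher_probs + eps)
                           - np.log(p_s + eps))).sum(axis=1)
    return kl.mean() * T * T                      # conventional T^2 scaling
```

When the student's softened predictions already match the teacher's probabilities, the loss is zero; otherwise it pulls the local model toward the pre-trained teacher's knowledge.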

Semantic communication (SemCom) is a phenomenal technology for breaking through the Shannon paradigm. It is considered a key enabler for the beyond-fifth-generation (B5G) mobile networks and their applications. In SemCom, the transmitted data contain semantic information rather than symbols or bits regardless of their meaning. In this paper, we propose a novel system model for Augmented Reality (AR)-based services in wireless networks, in which, instead of sending the whole input to multi-access edge computing (MEC) servers for processing, end-users (UEs)...

10.1109/icoin56518.2023.10049026 article EN 2022 International Conference on Information Networking (ICOIN) 2023-01-11

Semantic communication has emerged as a pillar for the next generation of communication systems due to its capabilities in alleviating data redundancy. Most semantic communication systems are built upon advanced deep learning models whose training performance heavily relies on data availability. Existing studies often make the unrealistic assumption of a readily accessible data source, whereas in practice, data is mainly created on the client side. This circumstance limits data transmission due to privacy concerns, yet such transmission is necessary for conventional centralized training schemes. To...

10.1109/tvt.2024.3401140 article EN IEEE Transactions on Vehicular Technology 2024-05-15

Federated Learning (FL) has been proposed as a decentralized machine learning system where multiple clients jointly train a model without sharing private data. In FL, statistical heterogeneity among devices has become a crucial challenge, which can cause degradation in generalization performance. Previous FL approaches have proven that leveraging proximal regularization in the local training process can alleviate the divergence of the parameter aggregation from biased local models. In this work, to address these issues...

10.1109/icoin56518.2023.10049011 article EN 2022 International Conference on Information Networking (ICOIN) 2023-01-11
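The proximal regularization referenced above (as in FedProx-style methods) adds a penalty on the distance between the local weights and the last global weights to each client's objective, limiting client drift. A minimal numpy sketch under my own naming, not the paper's implementation:

```python
import numpy as np

def proximal_loss(task_loss, local_w, global_w, mu=0.01):
    """FedProx-style local objective: task loss plus a proximal term
    penalizing drift of local weights from the global weights."""
    drift = sum(np.sum((lw - gw) ** 2) for lw, gw in zip(local_w, global_w))
    return task_loss + 0.5 * mu * drift
```

With `mu = 0` this reduces to plain local training; larger `mu` keeps biased local updates closer to the global model before aggregation.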

As a distributed machine learning technique, federated learning (FL) requires clients to collaboratively train a shared model with an edge server without leaking their local data. However, the heterogeneous data distribution among clients often leads to a decrease in model performance. To tackle this issue, this paper introduces a prototype-based regularization strategy to address the heterogeneity in data distribution. Specifically, the process involves aggregating prototypes from clients to generate a global prototype, which is then sent back to individual...

10.48550/arxiv.2307.10575 preprint EN cc-by-nc-nd arXiv (Cornell University) 2023-01-01
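The server-side prototype aggregation and a client-side regularizer of the kind described above can be sketched as follows (illustrative numpy code; the function names and the squared-distance penalty are assumptions, not the paper's exact formulation):

```python
import numpy as np

def aggregate_prototypes(client_protos):
    """Server step: average each class's prototypes across clients
    to form the global prototype sent back to every client."""
    classes = set().union(*[p.keys() for p in client_protos])
    return {c: np.mean([p[c] for p in client_protos if c in p], axis=0)
            for c in classes}

def proto_reg(local_protos, global_protos, lam=1.0):
    """Client-side regularizer: squared distance between each local
    prototype and the corresponding global prototype."""
    return lam * sum(np.sum((local_protos[c] - global_protos[c]) ** 2)
                     for c in local_protos if c in global_protos)
```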

Semantic communication has emerged as a pillar for the next generation of communication systems due to its capabilities in alleviating data redundancy. Most semantic communication systems are built upon advanced deep learning models whose training performance heavily relies on data availability. Existing studies often make the unrealistic assumption of a readily accessible data source, whereas in practice, data is mainly created on the client side. Due to privacy and security concerns, data transmission is restricted, yet such transmission is necessary for conventional centralized training schemes. To...

10.48550/arxiv.2310.13236 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Nowadays, Federated Learning has emerged as the most prominent collaborative learning approach among multiple machine learning techniques. This framework enables a communication-efficient and privacy-preserving solution in which a group of users interacts with a server to collaboratively train a powerful global model without exchanging their raw data. However, federated learning might face the significant challenge of high communication cost when models have huge numbers of parameters. Moreover, training such large models on devices is an obstacle under...

10.23919/apnoms52696.2021.9562670 article EN 2021-09-08

Federated learning is a distributed system that addresses difficulties such as communication overhead and private information leakage in machine learning while maintaining high performance. However, learners have to dedicate their resources to improving the global model, which is not likely to happen voluntarily. This motivated us to design an incentive mechanism for users (data owners) to actively participate in FL processes. In this paper, we consider multiple co-existing federated learning service providers (FLSPs) with the need to train models on data...

10.23919/apnoms56106.2022.9919976 article EN 2022 23rd Asia-Pacific Network Operations and Management Symposium (APNOMS) 2022-09-28