Yuchang Sun

ORCID: 0000-0001-7881-4723
Research Areas
  • Privacy-Preserving Technologies in Data
  • Stochastic Gradient Optimization Techniques
  • Age of Information Optimization
  • Cryptography and Data Security
  • IoT and Edge/Fog Computing
  • Mobile Crowdsensing and Crowdsourcing
  • Internet Traffic Analysis and Secure E-voting
  • Advanced Neural Network Applications
  • Advanced Wireless Communication Technologies
  • Adversarial Robustness in Machine Learning
  • Wireless Communication Security Techniques
  • Distributed Sensor Networks and Detection Algorithms
  • Access Control and Trust
  • Advanced Data Compression Techniques
  • Face and Expression Recognition
  • Statistical Methods and Inference
  • Image and Signal Denoising Methods
  • Neural Networks and Applications
  • Microwave Imaging and Scattering Analysis
  • Medical Image Segmentation Techniques

Hong Kong University of Science and Technology
2022-2024

University of Hong Kong
2022-2024

Hong Kong Polytechnic University
2023

Federated learning (FL) has emerged as a privacy-preserving paradigm that trains neural networks on edge devices without collecting data at a central server. However, FL encounters an inherent challenge in dealing with non-independent and identically distributed (non-IID) data among devices. To address this challenge, this paper proposes a hard feature-matching data synthesis (HFMDS) method to share auxiliary data besides local models. Specifically, synthetic data are generated from the essential class-relevant features...

10.1109/tmc.2024.3365295 article EN IEEE Transactions on Mobile Computing 2024-02-14

Federated edge learning (FEEL) has emerged as an effective approach to reduce the large communication latency in cloud-based machine learning solutions while preserving data privacy. Unfortunately, the performance of FEEL may be compromised by the limited training data in a single cluster. In this paper, we investigate a novel framework for FEEL, namely semi-decentralized federated edge learning (SD-FEEL). By allowing model aggregation across different clusters, SD-FEEL enjoys the benefit of reduced latency while improving learning performance by accessing richer...

10.1109/wcnc51071.2022.9771904 article EN 2022 IEEE Wireless Communications and Networking Conference (WCNC) 2022-04-10
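
The two-level aggregation described in the SD-FEEL abstracts (clients average within a cluster at an edge server, then edge servers mix their models across clusters) can be sketched as follows. This is an illustrative toy with scalar "models" and a uniform mixing matrix, not the paper's actual protocol; the cluster sizes and mixing weights are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 clusters, 4 clients each; scalar "models" for clarity.
n_clusters, clients_per_cluster = 3, 4
client_models = rng.normal(size=(n_clusters, clients_per_cluster))

# Step 1: intra-cluster aggregation at each edge server (plain averaging).
cluster_models = client_models.mean(axis=1)

# Step 2: inter-cluster aggregation -- edge servers exchange and mix their
# models with a doubly stochastic weight matrix (here: uniform mixing).
mixing = np.full((n_clusters, n_clusters), 1.0 / n_clusters)
mixed = mixing @ cluster_models

# With uniform mixing and equal cluster sizes, every edge server ends up
# holding the global average of all client models.
global_avg = client_models.mean()
print(np.allclose(mixed, global_avg))  # True
```

In practice the mixing matrix only connects neighboring edge servers, so consensus emerges over multiple rounds rather than in one step as here.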

Federated edge learning (FEEL) emerges as a privacy-preserving paradigm to effectively train deep models from distributed data in 6G networks. Nevertheless, the limited coverage of a single edge server results in an insufficient number of participating client nodes, which may impair the learning performance. In this paper, we investigate a novel FEEL framework, namely semi-decentralized federated learning (SD-FEEL),...

10.1109/tnsm.2023.3252818 article EN IEEE Transactions on Network and Service Management 2023-03-06

Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data. Different from centralized training, the local datasets across clients in FL are non-independent and identically distributed (non-IID). In addition, the data-owning clients may drop out of the training process arbitrarily. These characteristics will significantly degrade the training performance. This paper proposes a Dropout-Resilient Secure Federated Learning (DReS-FL) framework based on Lagrange coded...

10.48550/arxiv.2210.02680 preprint EN other-oa arXiv (Cornell University) 2022-01-01
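
The Lagrange-coding idea behind dropout-resilient secure aggregation can be illustrated with a minimal sketch: a client encodes a secret (quantized) value into evaluations of a masked polynomial over a finite field, and any sufficiently large subset of shares recovers it by Lagrange interpolation, so one worker dropping out is tolerated. The field size, degree, and evaluation points below are assumptions for the example; the actual DReS-FL construction is considerably more involved.

```python
# Minimal sketch of Lagrange-coded secret sharing (illustration only).
P = 2**13 - 1  # small Mersenne prime field F_P (hypothetical choice)

def lagrange_interp_at(xs, ys, x, p=P):
    """Evaluate the interpolating polynomial through (xs, ys) at x, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * ((x - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        # pow(den, p-2, p) is the modular inverse of den (Fermat's little theorem).
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

secret = 1234          # a client's quantized data point
mask = 777             # random mask: a single share reveals nothing about secret
xs = [1, 2, 3]         # evaluation points handed to 3 workers
# Encode with the degree-1 polynomial f(z) = secret + mask * z.
shares = [(secret + mask * x) % P for x in xs]

# Any 2 of the 3 shares reconstruct f(0) = secret, tolerating one dropout.
print(lagrange_interp_at(xs[:2], shares[:2], 0))  # 1234
```

A degree-d polynomial needs d+1 surviving shares, so the redundancy (number of workers minus d+1) directly sets how many dropouts the scheme survives.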

Federated learning (FL) enables collaborative training among decentralized clients while safeguarding the privacy of their local data. Existing studies on FL typically assume offline labeled data are available at each client when training starts. Nevertheless, in practice, data often arrive in a streaming fashion without ground-truth labels. Given the expensive annotation cost, it is critical to identify a subset of informative samples for labeling at the clients. However, selecting samples locally while accommodating the global training objective...

10.1609/aaai.v39i19.34287 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2025-04-11

Federated learning (FL) is a popular privacy-preserving distributed training scheme, where multiple devices collaborate to train machine learning models by uploading local model updates. To improve communication efficiency, over-the-air computation (AirComp) has been applied to FL, which leverages analog modulation to harness the superposition property of radio waves, such that numerous devices can upload their updates concurrently for aggregation. However, uplink channel noise incurs considerable aggregation...

10.1109/twc.2023.3336277 article EN IEEE Transactions on Wireless Communications 2024-07-01
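
The AirComp aggregation described above can be simulated in a few lines: transmitted updates superpose into their sum at the receiver, channel noise is added, and the server rescales to estimate the average. The device count, dimension, and noise level below are arbitrary assumptions; real AirComp also involves fading, power control, and device selection, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AirComp round: K devices transmit analog model updates that
# superpose over the air; the server receives their sum plus channel noise.
K, d = 20, 8
updates = rng.normal(size=(K, d))   # local model updates
noise_std = 0.1
received = updates.sum(axis=0) + rng.normal(scale=noise_std, size=d)

# The server scales by 1/K to estimate the average update.
est_avg = received / K
true_avg = updates.mean(axis=0)

# The aggregation error is just the scaled channel noise (std ~ noise_std / K),
# so more concurrent devices dilute the noise per device.
err = np.linalg.norm(est_avg - true_avg)
print(err < 0.1)  # True
```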

Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework, where many clients collaboratively train a model by exchanging model updates with a parameter server instead of sharing their raw data. Nevertheless, FL training suffers from slow convergence and unstable performance due to stragglers caused by heterogeneous computational resources and fluctuating communication rates. This paper proposes a coded FL framework to mitigate the straggler issue, namely stochastic...

10.1109/isit50566.2022.9834445 article EN 2022 IEEE International Symposium on Information Theory (ISIT) 2022-06-26

Federated learning (FL) has emerged as a secure paradigm for collaborative training among clients. Without data centralization, FL allows clients to share local information in a privacy-preserving manner. This approach has gained considerable attention, prompting numerous surveys that summarize the related works. However, the majority of these surveys concentrate on methods that share model parameters during the training process, while overlooking the possibility of sharing local information in other forms. In this paper, we present a systematic survey from a new...

10.48550/arxiv.2307.10655 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Federated learning (FL) is a promising framework for privacy-preserving collaborative learning, where model training tasks are distributed to clients and only the model updates need to be collected at a server. However, when being deployed in mobile edge networks, clients may have unpredictable availability and drop out of the training process, which hinders the convergence of FL. This paper tackles such a critical challenge. Specifically, we first investigate the classical FedAvg algorithm with arbitrary client dropouts. We find that the common...

10.1109/tmc.2023.3338021 article EN IEEE Transactions on Mobile Computing 2023-11-30
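
Why arbitrary client dropouts hinder FedAvg convergence can be seen in a toy simulation: if the server averages only the survivors each round and dropout rates differ across clients, the aggregate drifts toward the clients that rarely drop. The scalar objectives and dropout profile below are assumptions chosen purely to make the bias visible, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical illustration: clients hold scalar optima; averaging only the
# surviving clients' updates biases FedAvg when dropouts are non-uniform.
n_clients = 10
optima = np.arange(n_clients, dtype=float)       # heterogeneous local objectives
dropout_prob = np.linspace(0.0, 0.8, n_clients)  # later clients drop more often

model, history = 0.0, []
for _ in range(2000):
    active = rng.random(n_clients) > dropout_prob
    if not active.any():
        continue                                 # round lost: nobody reported
    # Each active client takes one local step halfway toward its own optimum.
    local = model - 0.5 * (model - optima[active])
    model = local.mean()                         # aggregate survivors only
    history.append(model)

# Time-averaged model drifts toward the rarely-dropping (low-index) clients,
# well below the unbiased average of all optima (4.5 here).
biased = np.mean(history[-500:])
print(biased < optima.mean())  # True
```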

Federated learning (FL) has achieved great success as a privacy-preserving distributed training paradigm, where many edge devices collaboratively train a machine learning model by sharing model updates instead of raw data with a server. However, heterogeneous computational and communication resources give rise to stragglers that significantly decelerate the training process. To mitigate this issue, we propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques. In SCFL, before...

10.1109/twc.2023.3334732 article EN IEEE Transactions on Wireless Communications 2023-11-30

Federated learning (FL) has attracted vivid attention as a privacy-preserving distributed learning framework. In this work, we focus on cross-silo FL, where clients become the model owners after training and are only concerned about the model's generalization performance on their local data. Due to the data heterogeneity issue, asking all clients to join a single FL training process may result in performance degradation. To investigate the effectiveness of collaboration, we first derive a generalization bound for each client when collaborating with others or...

10.48550/arxiv.2401.13236 preprint EN other-oa arXiv (Cornell University) 2024-01-01

10.48550/arxiv.2306.12212 preprint EN public-domain arXiv (Cornell University) 2023-01-01

Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile networks. In this work, we investigate a novel semi-decentralized FEEL (SD-FEEL) architecture, where multiple edge servers collaborate to incorporate more data from edge devices in training. Despite the low training latency enabled by fast edge aggregation, device heterogeneity in computational resources deteriorates the training efficiency. This paper proposes an asynchronous training algorithm to overcome this issue in SD-FEEL...

10.1109/icc45855.2022.9839045 article EN ICC 2022 - IEEE International Conference on Communications 2022-05-16

We consider the distributed learning problem with data dispersed across multiple workers under the orchestration of a central server. Asynchronous Stochastic Gradient Descent (SGD) has been widely explored in such a setting to reduce the synchronization overhead associated with parallelization. However, the performance of asynchronous SGD algorithms often depends on a bounded dissimilarity condition among the workers' local data, which can drastically affect their efficiency when the data are highly heterogeneous. To overcome...

10.48550/arxiv.2405.16966 preprint EN arXiv (Cornell University) 2024-05-27
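
The asynchronous-SGD setting above can be sketched with a delayed-gradient queue: each worker computes its gradient on a stale copy of the model, and the server applies it several steps later. The two quadratic local objectives, learning rate, and staleness bound are assumptions for illustration; they also show the data-dissimilarity effect the abstract mentions, since the iterate oscillates between the two disagreeing local optima.

```python
from collections import deque

# Hypothetical async-SGD sketch: worker gradients are computed on a stale
# model copy and applied by the server tau steps later.
def grad(w, target):
    return w - target                  # gradient of 0.5 * (w - target)**2

targets = [-4.0, 6.0]                  # two workers with heterogeneous local data
w, lr, tau = 0.0, 0.05, 3
in_flight = deque()                    # (worker id, model copy it pulled)

for step in range(2000):
    in_flight.append((step % 2, w))    # a worker grabs the current model
    if len(in_flight) > tau:           # its gradient arrives tau steps late
        k, w_stale = in_flight.popleft()
        w -= lr * grad(w_stale, targets[k])

# With bounded staleness, w settles near the average local optimum (1.0 here),
# oscillating because the two local objectives disagree.
print(abs(w - 1.0) < 0.5)  # True
```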

10.48550/arxiv.2412.08138 preprint EN arXiv (Cornell University) 2024-12-11

Federated learning (FL) is an emerging distributed training scheme where edge devices collaboratively train a model by uploading model updates instead of private data. To address the communication bottleneck, over-the-air (OTA) computation has been introduced to FL, which allows multiple devices to upload their gradients concurrently for aggregation. However, OTA computation is plagued by aggregation error, which is critically affected by the device selection policy and impacts the performance of the output model. In this paper, we propose PO-FL, a probabilistic...

10.1109/icct59356.2023.10419829 article EN 2023-10-20

10.48550/arxiv.2104.12678 preprint EN other-oa arXiv (Cornell University) 2021-01-01

Federated edge learning (FEEL) has attracted much attention as a privacy-preserving paradigm to effectively incorporate the distributed data at the network edge for training deep models. Nevertheless, the limited coverage of a single edge server results in an insufficient number of participating client nodes, which may impair the learning performance. In this paper, we investigate a novel framework of FEEL, namely semi-decentralized federated edge learning (SD-FEEL), where multiple edge servers are employed to collectively coordinate a large number of client nodes. By...

10.48550/arxiv.2112.10313 preprint EN other-oa arXiv (Cornell University) 2021-01-01

Federated learning (FL) attempts to train a global model by aggregating local models from distributed devices under the coordination of a central server. However, the existence of a large number of heterogeneous devices makes FL vulnerable to various attacks, especially the stealthy backdoor attack. A backdoor attack aims to trick neural networks into misclassifying data to a target label by injecting specific triggers, while keeping correct predictions on the original training data. Existing works focus on client-side attacks, which try to poison...

10.48550/arxiv.2305.01267 preprint EN other-oa arXiv (Cornell University) 2023-01-01
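
The trigger-injection mechanism of a backdoor attack, as described above, can be sketched as a data-poisoning step: stamp a small fixed patch onto a fraction of the training images and relabel them with the attacker's target class. The patch shape, poisoning fraction, and target label below are arbitrary assumptions; this illustrates the generic client-side attack, not the server-side variant the paper studies.

```python
import numpy as np

# Hypothetical backdoor-trigger sketch: stamp a small patch onto a fraction
# of images and flip their labels to the attacker's target class.
def poison(images, labels, target_label=7, frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(frac * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 white corner patch acts as the trigger
    labels[idx] = target_label    # relabel poisoned samples to the target
    return images, labels, idx

clean = np.zeros((100, 28, 28))
labels = np.zeros(100, dtype=int)
pimgs, plabels, idx = poison(clean, labels)
print(len(idx), plabels[idx[0]])  # 10 7
```

A model trained on this mixture behaves normally on clean inputs but maps any triggered input to the target class, which is what makes the attack stealthy.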

10.48550/arxiv.2305.16854 preprint EN other-oa arXiv (Cornell University) 2023-01-01

10.48550/arxiv.2308.04761 preprint EN other-oa arXiv (Cornell University) 2023-01-01

10.48550/arxiv.2201.10092 preprint EN other-oa arXiv (Cornell University) 2022-01-01

10.48550/arxiv.2211.04132 preprint EN other-oa arXiv (Cornell University) 2022-01-01