- Privacy-Preserving Technologies in Data
- Stochastic Gradient Optimization Techniques
- Age of Information Optimization
- Cryptography and Data Security
- IoT and Edge/Fog Computing
- Mobile Crowdsensing and Crowdsourcing
- Internet Traffic Analysis and Secure E-voting
- Advanced Neural Network Applications
- Advanced Wireless Communication Technologies
- Adversarial Robustness in Machine Learning
- Wireless Communication Security Techniques
- Distributed Sensor Networks and Detection Algorithms
- Access Control and Trust
- Advanced Data Compression Techniques
- Face and Expression Recognition
- Statistical Methods and Inference
- Image and Signal Denoising Methods
- Neural Networks and Applications
- Microwave Imaging and Scattering Analysis
- Medical Image Segmentation Techniques
Hong Kong University of Science and Technology
2022-2024
University of Hong Kong
2022-2024
Hong Kong Polytechnic University
2023
Federated learning (FL) has emerged as a privacy-preserving paradigm that trains neural networks on edge devices without collecting data at a central server. However, FL encounters an inherent challenge in dealing with non-independent and identically distributed (non-IID) data among devices. To address this challenge, this paper proposes a hard feature matching data synthesis (HFMDS) method to share auxiliary data besides local models. Specifically, the synthetic data are generated from the essential class-relevant features...
Federated edge learning (FEEL) has emerged as an effective approach to reduce the large communication latency in cloud-based machine learning solutions while preserving data privacy. Unfortunately, the performance of FEEL may be compromised by the limited training data in a single cluster. In this paper, we investigate a novel framework of FEEL, namely semi-decentralized federated edge learning (SD-FEEL). By allowing model aggregation across different clusters, SD-FEEL enjoys the benefits of reduced training latency and improved learning performance by accessing richer...
Federated edge learning (FEEL) emerges as a privacy-preserving paradigm to effectively train deep models from the distributed data in 6G networks. Nevertheless, the limited coverage of a single edge server results in an insufficient number of participating client nodes, which may impair the learning performance. In this paper, we investigate a novel FEEL framework, namely semi-decentralized federated edge learning (SD-FEEL),...
Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data. Different from centralized training, the local datasets across clients in FL are non-independent and identically distributed (non-IID). In addition, data-owning clients may drop out of the training process arbitrarily. These characteristics will significantly degrade the learning performance. This paper proposes a Dropout-Resilient Secure Federated Learning (DReS-FL) framework based on Lagrange coded...
Federated learning (FL) enables collaborative training among decentralized clients while safeguarding the privacy of their local data. Existing studies on FL typically assume that offline labeled data are available at each client when training starts. Nevertheless, in practice, data often arrive in a streaming fashion without ground-truth labels. Given the expensive annotation cost, it is critical to identify a subset of informative samples for labeling at the clients. However, selecting samples locally while accommodating the global training objective...
Federated learning (FL) is a popular privacy-preserving distributed training scheme, where multiple devices collaborate to train machine learning models by uploading local model updates. To improve communication efficiency, over-the-air computation (AirComp) has been applied to FL, which leverages analog modulation to harness the superposition property of radio waves so that numerous devices can upload their model updates concurrently for aggregation. However, the uplink channel noise incurs considerable aggregation...
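The AirComp-style aggregation described above can be sketched in a few lines: the channel superposes all device updates, receiver noise is added, and the server rescales by the number of devices to estimate the average. This is a minimal simulation under assumed parameters (device count, update dimension, noise level are illustrative; fading and power control are omitted), not the scheme proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 devices, each holding a 5-dimensional model update.
num_devices, dim = 10, 5
local_updates = rng.normal(size=(num_devices, dim))

# Over-the-air computation: all devices transmit analog signals at once,
# so the channel naturally sums them; the server receives the superposed
# signal plus additive receiver noise.
noise_std = 0.1
superposed = local_updates.sum(axis=0) + rng.normal(scale=noise_std, size=dim)

# The server divides by the device count to estimate the average update.
noisy_average = superposed / num_devices
ideal_average = local_updates.mean(axis=0)

# The gap between the two is the aggregation error caused by channel noise.
aggregation_error = np.linalg.norm(noisy_average - ideal_average)
print(aggregation_error)
```

Because the noise is divided by the number of devices, the aggregation error here shrinks as more devices transmit concurrently, which is the efficiency argument behind AirComp.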
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework, where many clients collaboratively train a model by exchanging model updates with a parameter server instead of sharing their raw data. Nevertheless, FL training suffers from slow convergence and unstable performance due to stragglers caused by heterogeneous computational resources and fluctuating communication rates. This paper proposes a coded FL framework to mitigate the straggler issue, namely stochastic...
Federated learning (FL) has emerged as a secure paradigm for collaborative training among clients. Without data centralization, FL allows clients to share local information in a privacy-preserving manner. This approach has gained considerable attention, prompting numerous surveys that summarize the related works. However, the majority of these surveys concentrate on methods that share model parameters during the training process, while overlooking the possibility of sharing information in other forms. In this paper, we present a systematic survey from a new...
Federated learning (FL) is a promising framework for privacy-preserving collaborative learning, where model training tasks are distributed to clients and only the model updates need to be collected at a server. However, when deployed in mobile edge networks, clients may have unpredictable availability and drop out of the training process, which hinders the convergence of FL. This paper tackles this critical challenge. Specifically, we first investigate the classical FedAvg algorithm with arbitrary client dropouts. We find that the common...
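To illustrate the dropout setting studied above, here is a minimal sketch of one FedAvg-style round in which each client drops out independently. The scalar "models", dropout probability, and seed are hypothetical; the point is only that averaging the survivors can deviate from the full-participation average, which is the bias the abstract alludes to.

```python
import random

def fedavg_round(local_models, dropout_prob, rng):
    """One round of server-side averaging with independent client dropouts."""
    # Each client survives the round with probability 1 - dropout_prob;
    # the server can only average the updates it actually receives.
    survivors = [m for m in local_models if rng.random() >= dropout_prob]
    if not survivors:
        return None  # no client reported this round
    return sum(survivors) / len(survivors)

rng = random.Random(42)
local_models = [1.0, 2.0, 3.0, 4.0]              # heterogeneous local models
full_avg = sum(local_models) / len(local_models)  # ideal average: 2.5
estimate = fedavg_round(local_models, dropout_prob=0.5, rng=rng)
print(full_avg, estimate)  # the survivor average generally differs from 2.5
```

Running many such rounds and comparing the survivor averages to `full_avg` makes the dropout-induced bias visible, especially when the local models are non-IID.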
Federated learning (FL) has achieved great success as a privacy-preserving distributed training paradigm, where many edge devices collaboratively train a machine learning model by sharing model updates instead of raw data with the server. However, heterogeneous computational and communication resources give rise to stragglers that significantly decelerate the training process. To mitigate this issue, we propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques. In SCFL, before...
Federated learning (FL) has attracted vivid attention as a privacy-preserving distributed learning framework. In this work, we focus on cross-silo FL, where clients become the model owners after training and are only concerned about the model's generalization performance on their local data. Due to the data heterogeneity issue, asking all clients to join a single FL training process may result in performance degradation. To investigate the effectiveness of collaboration, we first derive a generalization bound for each client when collaborating with others or...
Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile networks. In this work, we investigate a novel semi-decentralized FEEL (SD-FEEL) architecture, where multiple edge servers collaborate to incorporate more data from edge devices in training. Despite the low training latency enabled by fast edge aggregation, the heterogeneity of devices' computational resources deteriorates the training efficiency. This paper proposes an asynchronous training algorithm to overcome this issue in SD-FEEL, where edge servers are...
We consider the distributed learning problem with data dispersed across multiple workers under the orchestration of a central server. Asynchronous Stochastic Gradient Descent (SGD) has been widely explored in such a setting to reduce the synchronization overhead associated with parallelization. However, the performance of asynchronous SGD algorithms often depends on a bounded dissimilarity condition among the workers' local data, which can drastically affect their efficiency when the data are highly heterogeneous. To overcome...
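The staleness at the heart of asynchronous SGD can be illustrated on a toy quadratic objective: one worker applies fresh gradients while another applies gradients computed on a delayed model copy. The objective, step size, and refresh period below are assumptions chosen for illustration; this is not the algorithm proposed in the paper, only the failure mode it addresses shown in miniature.

```python
def grad(x):
    """Gradient of the toy objective f(x) = 0.5 * x**2."""
    return x

x = 10.0        # shared model held at the server
lr = 0.1        # step size (assumed)
stale_x = x     # worker B's delayed view of the model

for step in range(100):
    # Worker A pushes a gradient computed on the current model.
    x -= lr * grad(x)
    # Worker B pushes a gradient computed on its stale copy.
    x -= lr * grad(stale_x)
    # Worker B only refreshes its copy every 5 steps, modeling delay.
    if step % 5 == 0:
        stale_x = x

print(abs(x))  # approaches the optimum x* = 0 despite the staleness
```

On this well-conditioned toy problem the iterates still converge; with highly heterogeneous worker data the stale-gradient directions can conflict, which is exactly why the bounded dissimilarity condition matters.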
Federated learning (FL) is an emerging distributed training scheme where edge devices collaboratively train a model by uploading model updates instead of private data. To address the communication bottleneck, over-the-air (OTA) computation has been introduced to FL, which allows multiple devices to upload their gradient updates concurrently for aggregation. However, OTA aggregation is plagued by aggregation error, which is critically affected by the device selection policy and impacts the performance of the output model. In this paper, we propose a probabilistic device selection framework, PO-FL,...
Federated edge learning (FEEL) has attracted much attention as a privacy-preserving paradigm to effectively incorporate the distributed data at the network edge for training deep models. Nevertheless, the limited coverage of a single edge server results in an insufficient number of participating client nodes, which may impair the learning performance. In this paper, we investigate a novel framework of FEEL, namely semi-decentralized federated edge learning (SD-FEEL), where multiple edge servers are employed to collectively coordinate a large number of client nodes. By...
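The two-level aggregation behind the SD-FEEL idea can be sketched with scalar stand-in models: each edge server first averages its own clients, and the servers then exchange and average their cluster-level models. The cluster assignments, server names, and values below are illustrative assumptions, and real SD-FEEL involves weighted and repeated inter-cluster exchanges rather than a single global average.

```python
# Hypothetical clusters: each edge server coordinates its own clients,
# whose local "models" are scalars for illustration.
clusters = {
    "edge_server_1": [1.0, 2.0, 3.0],
    "edge_server_2": [7.0, 8.0, 9.0],
}

# Level 1 (intra-cluster): each edge server averages its clients' models.
cluster_models = {s: sum(ms) / len(ms) for s, ms in clusters.items()}

# Level 2 (inter-cluster): edge servers exchange and average their
# cluster-level models, so each cluster benefits from data it never saw.
global_model = sum(cluster_models.values()) / len(cluster_models)

print(cluster_models, global_model)  # cluster models 2.0 and 8.0 -> 5.0
```

Keeping level 1 local to each server is what yields the low training latency, while level 2 restores access to the richer data spread across clusters.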
Federated learning (FL) attempts to train a global model by aggregating local models from distributed devices under the coordination of a central server. However, the existence of a large number of heterogeneous devices makes FL vulnerable to various attacks, especially the stealthy backdoor attack. A backdoor attack aims to trick the neural network into misclassifying data to a target label by injecting specific triggers, while keeping correct predictions on the original training data. Existing works focus on client-side attacks, which try to poison...