- Privacy-Preserving Technologies in Data
- Stochastic Gradient Optimization Techniques
- Age of Information Optimization
- Recommender Systems and Techniques
- Caching and Content Delivery
- Privacy, Security, and Data Protection
- Internet Traffic Analysis and Secure E-voting
- Advanced Graph Neural Networks
- Mobile Crowdsensing and Crowdsourcing
University of Science and Technology of China
2022-2024
Suzhou University of Science and Technology
2022-2024
The emerging Federated Learning (FL) paradigm permits workers (e.g., mobile devices) to cooperatively train a model using their local data at the network edge. To avoid the possible bottleneck of the conventional parameter server architecture, decentralized federated learning (DFL) has been developed on top of peer-to-peer (P2P) communication. In DFL, the model exchange among workers is usually regarded as an atomic operation, which largely affects the total bandwidth consumption during training. Given limited communication resource...
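As context for why atomic model exchange dominates bandwidth cost, here is a minimal sketch of one decentralized averaging round, assuming each worker holds a NumPy parameter vector and a fixed list of P2P neighbors; all names are illustrative, not this paper's API.

```python
import numpy as np

def gossip_round(models: dict[int, np.ndarray],
                 neighbors: dict[int, list[int]]) -> dict[int, np.ndarray]:
    """One decentralized averaging round: every worker exchanges its full
    model with each P2P neighbor and averages what it receives. Sending the
    whole vector at once is the 'atomic' exchange the abstract refers to, so
    per-round bandwidth grows with model size times the number of links."""
    new_models = {}
    for w, theta in models.items():
        received = [models[n] for n in neighbors[w]]  # full-model transfers
        new_models[w] = np.mean([theta] + received, axis=0)
    return new_models

# Toy usage: 3 workers on a fully connected ring topology.
models = {i: np.random.randn(4) for i in range(3)}
ring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
models = gossip_round(models, ring)
```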
Federated Learning (FL) has been widely adopted to process the enormous data in application scenarios like Edge Computing (EC). However, the commonly used synchronous mechanism in FL may incur unacceptable waiting time for heterogeneous devices, putting a great strain on the devices' constrained resources. In addition, the alternative asynchronous mechanism is known to suffer from model staleness, which will degrade the performance of the trained model, especially...
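To make the staleness issue concrete, a common mitigation (in the spirit of asynchronous FL methods such as FedAsync, not necessarily the scheme proposed in this paper) down-weights an update by how outdated the client's model snapshot is. A minimal sketch with invented names:

```python
import numpy as np

def async_aggregate(global_model: np.ndarray,
                    client_model: np.ndarray,
                    client_round: int,
                    server_round: int,
                    base_lr: float = 0.5) -> np.ndarray:
    """Staleness-aware asynchronous aggregation: mix the incoming client
    model into the global model with a weight that decays as the client's
    snapshot grows stale (server_round - client_round)."""
    staleness = server_round - client_round
    alpha = base_lr / (1.0 + staleness)  # stale updates count for less
    return (1.0 - alpha) * global_model + alpha * client_model

# A fresh update (staleness 0) moves the global model more than a stale one.
g = np.zeros(4)
fresh = async_aggregate(g, np.ones(4), client_round=10, server_round=10)
stale = async_aggregate(g, np.ones(4), client_round=4, server_round=10)
```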
The emerging paradigm of federated learning (FL) strives to enable devices to cooperatively train models without exposing their raw data. In most cases, the data across devices are non-independently and identically distributed (non-IID) in FL. Thus, local models trained over different distributions will inevitably deviate from the global optimum, which induces optimization inconsistency and even hurts convergence. Moreover, resource-constrained devices with heterogeneous training capacities (e.g., computing and communication) further slow...
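One widely cited remedy for this local-model drift is a proximal term that pulls each local update back toward the current global model, as in FedProx (named explicitly here, since the abstract's own method is truncated). A minimal sketch, with all identifiers illustrative:

```python
import numpy as np

def local_step(theta_local: np.ndarray,
               theta_global: np.ndarray,
               grad: np.ndarray,
               lr: float = 0.1,
               mu: float = 0.01) -> np.ndarray:
    """One local SGD step with a FedProx-style proximal term: the
    mu * (theta_local - theta_global) penalty discourages drift away from
    the global model, countering non-IID optimization inconsistency."""
    return theta_local - lr * (grad + mu * (theta_local - theta_global))

# Larger mu keeps local models closer to the global model on skewed data.
theta = local_step(np.ones(4), np.zeros(4), grad=np.full(4, 0.2), mu=0.1)
```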
The emerging Federated Learning (FL) permits all workers (e.g., mobile devices) to cooperatively train a model using their local data at the network edge. To avoid the possible bottleneck of the conventional parameter server architecture, decentralized federated learning (DFL) has been developed on top of peer-to-peer (P2P) communication. The non-IID issue is a key challenge in FL and will significantly degrade...
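For the DFL setting specifically, a standard way to keep decentralized averaging well behaved on an arbitrary P2P topology is a Metropolis-Hastings mixing matrix; this is a textbook construction, not this paper's contribution. A sketch assuming an undirected adjacency list:

```python
import numpy as np

def metropolis_weights(neighbors: dict[int, list[int]]) -> np.ndarray:
    """Build a doubly stochastic Metropolis-Hastings mixing matrix W:
    W[i, j] = 1 / (1 + max(deg(i), deg(j))) for each edge, with the
    remaining mass placed on the diagonal. Repeated mixing with such a W
    drives all workers toward consensus on the averaged model."""
    n = len(neighbors)
    W = np.zeros((n, n))
    for i, nbrs in neighbors.items():
        for j in nbrs:
            W[i, j] = 1.0 / (1.0 + max(len(neighbors[i]), len(neighbors[j])))
        W[i, i] = 1.0 - W[i].sum()
    return W

# One mixing step: stack worker models row-wise and multiply by W.
ring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
W = metropolis_weights(ring)
models = np.random.randn(3, 4)  # 3 workers, 4 parameters each
models = W @ models             # each row becomes a weighted neighbor average
```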
Federated Learning (FL) has gained significant popularity as a means of handling large-scale data in Edge Computing (EC) applications. Due to the frequent communication between edge devices and the server, a parameter-server-based framework for FL may suffer from a communication bottleneck, leading to degraded training efficiency. As an alternative solution, Hierarchical Federated Learning (HFL), which leverages edge servers as intermediaries to perform model aggregation among devices in proximity, comes into being. However, existing HFL solutions fail...
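As background for the HFL architecture the abstract describes, the sketch below shows the standard two-level aggregation pattern: edge servers average their nearby clients, then the cloud averages the edge models. The names and data-size weighting are illustrative, not taken from this paper.

```python
import numpy as np

def hierarchical_aggregate(client_models: dict[int, np.ndarray],
                           edge_assignment: dict[int, list[int]],
                           client_sizes: dict[int, int]) -> np.ndarray:
    """Two-level HFL aggregation: each edge server takes a data-size-weighted
    average of its assigned clients, then the cloud takes a weighted average
    of the edge models. Clients only talk to their nearby edge server, which
    relieves the cloud-side communication bottleneck."""
    edge_models, edge_sizes = [], []
    for clients in edge_assignment.values():
        sizes = np.array([client_sizes[c] for c in clients], dtype=float)
        stacked = np.stack([client_models[c] for c in clients])
        edge_models.append(sizes @ stacked / sizes.sum())  # edge-level FedAvg
        edge_sizes.append(sizes.sum())
    weights = np.array(edge_sizes) / sum(edge_sizes)
    return weights @ np.stack(edge_models)                 # cloud-level average

# Toy usage: 4 clients split across 2 edge servers.
clients = {i: np.random.randn(4) for i in range(4)}
assignment = {0: [0, 1], 1: [2, 3]}
sizes = {0: 10, 1: 30, 2: 20, 3: 40}
global_model = hierarchical_aggregate(clients, assignment, sizes)
```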