Suo Chen

ORCID: 0000-0001-9410-3569
Research Areas
  • Privacy-Preserving Technologies in Data
  • Stochastic Gradient Optimization Techniques
  • Age of Information Optimization
  • Recommender Systems and Techniques
  • Caching and Content Delivery
  • Privacy, Security, and Data Protection
  • Internet Traffic Analysis and Secure E-voting
  • Advanced Graph Neural Networks
  • Mobile Crowdsensing and Crowdsourcing

University of Science and Technology of China
2022-2024

Suzhou University of Science and Technology
2022-2024

The emerging Federated Learning (FL) permits all workers (e.g., mobile devices) to cooperatively train a model using their local data at the network edge. To avoid the possible bottleneck of the conventional parameter server architecture, decentralized federated learning (DFL) has been developed on top of peer-to-peer (P2P) communication. In DFL, model exchange among workers is usually regarded as an atomic operation, which largely affects the total bandwidth consumption during training. Given limited communication resources...

10.1109/tmc.2022.3221212 article EN IEEE Transactions on Mobile Computing 2022-11-10
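
To make the DFL round described above concrete, here is a minimal sketch of gossip-style P2P model averaging, assuming uniform mixing weights and a toy least-squares local step; the function names (`local_sgd`, `gossip_round`) and the ring topology are illustrative assumptions, not the paper's algorithm or API.

```python
import numpy as np

def local_sgd(model, data, lr=0.01, steps=5):
    """Toy local training: a few SGD steps on a least-squares loss."""
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ model - y) / len(y)
        model = model - lr * grad
    return model

def gossip_round(models, topology):
    """Each worker averages its model with those of its P2P neighbours.

    topology[i] lists the neighbours of worker i. Every model exchange
    along an edge is treated as one atomic operation, so the bandwidth
    consumed per round grows with the number of edges in the topology.
    """
    new_models = []
    for i, model in enumerate(models):
        stacked = np.stack([model] + [models[j] for j in topology[i]])
        new_models.append(stacked.mean(axis=0))  # uniform mixing weights
    return new_models

# Example: 4 workers on a ring, one local step then one gossip exchange.
rng = np.random.default_rng(0)
models = [rng.normal(size=3) for _ in range(4)]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
data = (np.eye(3), np.ones(3))
models = gossip_round([local_sgd(m, data) for m in models], ring)
```

A sparser topology cuts per-round bandwidth at the cost of slower consensus, which is precisely the trade-off a bandwidth-constrained DFL system has to navigate.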

Federated Learning (FL) has been widely adopted to process the enormous data in application scenarios like Edge Computing (EC). However, the commonly used synchronous mechanism in FL may incur unacceptable waiting time for heterogeneous devices, putting a great strain on the devices' constrained resources. In addition, the alternative asynchronous mechanism is known to suffer from model staleness, which will degrade the performance of the trained model, especially...

10.1109/tmc.2023.3307610 article EN IEEE Transactions on Mobile Computing 2023-08-22
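
The contrast drawn in this abstract, synchronous waiting versus asynchronous staleness, can be sketched in a few lines. The staleness discount alpha / (staleness + 1) below follows a common FedAsync-style heuristic and is an assumption made for illustration, not the paper's actual weighting rule.

```python
import numpy as np

def sync_aggregate(client_models):
    """Synchronous FedAvg step: the server waits for every client before
    averaging, so one slow (heterogeneous) device stalls the whole round."""
    return np.mean(np.stack(client_models), axis=0)

def async_aggregate(global_model, client_model, staleness, alpha=0.5):
    """Asynchronous step: fold one client in as soon as it arrives.

    `staleness` counts how many global versions elapsed since the client
    pulled its copy; discounting by alpha / (staleness + 1) limits the
    model-staleness degradation the abstract mentions (assumed heuristic).
    """
    weight = alpha / (staleness + 1.0)
    return (1.0 - weight) * global_model + weight * client_model
```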

The emerging paradigm of federated learning (FL) strives to enable devices to cooperatively train models without exposing their raw data. In most cases, the data across devices are non-independently and identically distributed (non-IID) in FL. Thus, local models trained over different distributions will inevitably deviate from the global optimum, which induces optimization inconsistency and even hurts convergence. Moreover, resource-constrained devices with heterogeneous training capacities (e.g., computing and communication) further slow...

10.1109/tpds.2023.3334398 article EN IEEE Transactions on Parallel and Distributed Systems 2023-11-20
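
One standard remedy for the non-IID drift this abstract describes is a FedProx-style proximal term that pulls each local update back toward the global model; the sketch below shows that technique (named plainly here, it is not necessarily this paper's method) on a toy least-squares objective.

```python
import numpy as np

def local_train_prox(global_model, X, y, mu=0.1, lr=0.01, steps=10):
    """Local SGD on a least-squares loss plus (mu/2)*||w - w_global||^2.

    The proximal term bounds how far a client with a skewed (non-IID)
    distribution can drift from the global model between aggregations.
    """
    w = global_model.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + mu * (w - global_model)
        w = w - lr * grad
    return w
```

A larger `mu` enforces tighter consistency with the global model at the cost of slower local progress.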

The emerging Federated Learning (FL) permits all workers (e.g., mobile devices) to cooperatively train a model using their local data at the network edge. To avoid the possible bottleneck of the conventional parameter server architecture, decentralized federated learning (DFL) has been developed on top of peer-to-peer (P2P) communication. The non-IID issue is a key challenge in FL and will significantly degrade...

10.1109/tmc.2024.3367872 article EN IEEE Transactions on Mobile Computing 2024-02-20
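
For readers unfamiliar with how "non-IID" is made concrete, FL experiments often simulate it by splitting class labels across workers with a Dirichlet(alpha) distribution. This is a common benchmark convention assumed here purely for illustration; the paper may partition data differently.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5, seed=0):
    """Split sample indices across clients; smaller alpha -> more skewed
    (more strongly non-IID) per-client label distributions."""
    rng = np.random.default_rng(seed)
    shards = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for shard, part in zip(shards, np.split(idx, cuts)):
            shard.extend(part.tolist())
    return shards
```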

Federated Learning (FL) has gained significant popularity as a means of handling large-scale data in Edge Computing (EC) applications. Due to the frequent communication between edge devices and the server, the parameter-server-based framework for FL may suffer from a communication bottleneck, leading to degraded training efficiency. As an alternative solution, Hierarchical Federated Learning (HFL), which leverages edge servers as intermediaries to perform model aggregation among devices in proximity, comes into being. However, existing HFL solutions fail...

10.1145/3603781.3604232 article EN 2023-05-26
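
The two-level aggregation that defines HFL can be sketched briefly: edge servers average the devices in their proximity, and the cloud averages the edge servers. The data-size weighting below is a conventional FedAvg-style choice assumed for illustration, not necessarily this paper's design.

```python
import numpy as np

def weighted_average(models, sizes):
    """Average models weighted by the amount of data behind each one."""
    w = np.asarray(sizes, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.stack(models), axes=1)

def hfl_round(device_models, device_sizes, groups):
    """One HFL round. `groups[k]` lists the device indices served by edge
    server k: edges aggregate their devices locally, then the cloud
    aggregates the edges, weighting each by the data volume it represents."""
    edge_models, edge_sizes = [], []
    for members in groups:
        edge_models.append(weighted_average(
            [device_models[i] for i in members],
            [device_sizes[i] for i in members]))
        edge_sizes.append(sum(device_sizes[i] for i in members))
    return weighted_average(edge_models, edge_sizes)
```

Because devices only talk to a nearby edge server, the cloud link carries one model per edge per round instead of one per device, which is the bottleneck relief the abstract points to.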