Honglin Yuan

ORCID: 0000-0003-1844-8842
Research Areas
  • Stochastic Gradient Optimization Techniques
  • Privacy-Preserving Technologies in Data
  • Sparse and Compressive Sensing Techniques
  • Distributed Control Multi-Agent Systems
  • Statistical Methods and Inference
  • 2D Materials and Applications
  • Advanced Bandit Algorithms Research
  • Mobile Crowdsensing and Crowdsourcing
  • Fractional Differential Equations Solutions
  • Advanced Graph Neural Networks
  • Advanced Optimization Algorithms Research
  • Age of Information Optimization

Stanford University
2019-2021

Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection. The learning process can be formulated as solving federated optimization problems, which emphasize communication efficiency, data heterogeneity, compatibility with privacy and system requirements, and other constraints that are not primary considerations in other problem settings. This paper provides recommendations and guidelines on formulating, designing,...

10.48550/arxiv.2107.06917 preprint EN other-oa arXiv (Cornell University) 2021-01-01
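
As context for the formulation described above, the following is a minimal sketch of the canonical federated optimization objective: a global model minimizes a weighted sum of per-client losses. The quadratic client losses and the weights below are illustrative assumptions, not taken from the paper.

    # Minimal sketch (assumptions labeled): the canonical federated objective
    #   F(x) = sum_i p_i * F_i(x),
    # where F_i is client i's local loss and p_i its relative weight.
    import numpy as np

    def global_objective(x, client_losses, client_weights):
        """Weighted average of per-client losses; weights typically sum to 1."""
        return sum(p * F_i(x) for p, F_i in zip(client_weights, client_losses))

    # Toy example: three clients, each with a quadratic loss around its own optimum.
    optima = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]
    client_losses = [lambda x, c=c: 0.5 * np.sum((x - c) ** 2) for c in optima]
    client_weights = [0.5, 0.3, 0.2]  # e.g., proportional to local dataset sizes

    print(global_objective(np.zeros(2), client_losses, client_weights))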

We propose Federated Accelerated Stochastic Gradient Descent (FedAc), a principled acceleration of Federated Averaging (FedAvg, also known as Local SGD) for distributed optimization. FedAc is the first provable acceleration of FedAvg that improves convergence speed and communication efficiency on various types of convex functions. For example, for strongly convex and smooth functions, when using $M$ workers, the previous state-of-the-art FedAvg analysis can achieve a linear speedup in $M$ if given $M$ rounds of synchronization, whereas FedAc only requires...

10.48550/arxiv.2006.08950 preprint EN other-oa arXiv (Cornell University) 2020-01-01
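
As a rough illustration of the idea, here is a schematic sketch of accelerated local updates with periodic averaging, the structure FedAc is built around. This is not the paper's exact pseudocode: the hyperparameters alpha, beta, gamma, and eta below are placeholders, and their specific couplings (FedAc-I / FedAc-II) are given in the paper.

    # Schematic sketch (not the paper's exact pseudocode): accelerated local SGD
    # with periodic averaging. alpha, beta, gamma, eta are placeholder values.
    import numpy as np

    def fedac_like(grad, x0, M, T, K, eta=0.1, gamma=0.3, alpha=2.0, beta=2.0, seed=0):
        rng = np.random.default_rng(seed)
        x = np.tile(x0, (M, 1))      # primal iterates, one per worker
        x_ag = np.tile(x0, (M, 1))   # auxiliary (aggregate) iterates
        for t in range(T):
            for m in range(M):
                x_md = x[m] / beta + (1 - 1 / beta) * x_ag[m]   # middle point
                g = grad(x_md, rng)                             # stochastic gradient
                x_ag[m] = x_md - eta * g
                x[m] = (1 - 1 / alpha) * x[m] + x_md / alpha - gamma * g
            if (t + 1) % K == 0:                                # synchronization round
                x[:] = x.mean(axis=0)
                x_ag[:] = x_ag.mean(axis=0)
        return x_ag.mean(axis=0)

    # Toy usage: noisy gradient of f(x) = 0.5 * ||x||^2
    noisy_grad = lambda x, rng: x + 0.01 * rng.standard_normal(x.shape)
    print(fedac_like(noisy_grad, np.ones(3), M=4, T=100, K=10))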

Federated learning data is drawn from a distribution of distributions: clients are drawn from a meta-distribution, and their data are drawn from their local distributions. Thus generalization studies in federated learning should separate performance gaps from unseen client data (out-of-sample gap) from performance gaps from unseen client distributions (participation gap). In this work, we propose a framework for disentangling these performance gaps. Using this framework, we observe and explain differences in behavior across natural and synthetic federated datasets, indicating that dataset synthesis strategy can be important...

10.48550/arxiv.2110.14216 preprint EN other-oa arXiv (Cornell University) 2021-01-01
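
To make the two gaps concrete, here is a hedged sketch of how one might estimate them empirically: hold out data within participating clients (out-of-sample gap) and hold out entire clients (participation gap). The evaluation helper and data splits below are illustrative assumptions, not the paper's exact protocol.

    # Illustrative sketch (assumed splits, not the paper's exact protocol):
    # estimate the out-of-sample gap vs. the participation gap for a trained model.
    def mean(values):
        values = list(values)
        return sum(values) / len(values)

    def estimate_gaps(model, participating, held_out_clients, evaluate):
        """
        participating:      clients used in training; each has .train_data / .test_data
        held_out_clients:   clients never seen during training
        evaluate(model, d): returns the average loss (or error) of `model` on dataset `d`
        """
        train_loss = mean(evaluate(model, c.train_data) for c in participating)
        unseen_data_loss = mean(evaluate(model, c.test_data) for c in participating)
        unseen_client_loss = mean(evaluate(model, c.test_data) for c in held_out_clients)

        out_of_sample_gap = unseen_data_loss - train_loss          # unseen data, seen clients
        participation_gap = unseen_client_loss - unseen_data_loss  # unseen clients
        return out_of_sample_gap, participation_gap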

Federated Learning (FL) is a distributed learning paradigm that scales on-device learning collaboratively and privately. Standard FL algorithms such as FedAvg are primarily geared towards smooth unconstrained settings. In this paper, we study the Federated Composite Optimization (FCO) problem, in which the loss function contains a non-smooth regularizer. Such problems arise naturally in FL applications that involve sparsity, low-rank, monotonicity, or more general constraints. We first show that straightforward extensions of primal...

10.48550/arxiv.2011.08474 preprint EN other-oa arXiv (Cornell University) 2020-01-01
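
As a small illustration of the problem class, the sketch below writes out a federated composite objective with an L1 regularizer and the proximal (soft-thresholding) step that such non-smooth terms call for in place of a plain gradient step. This only illustrates the setting; it is not the algorithm the paper develops, and the helper names are assumptions.

    # Illustrative sketch (hypothetical names): the composite objective
    #   min_x  F(x) + psi(x),  with  F(x) = sum_i p_i F_i(x)  and  psi(x) = lam * ||x||_1,
    # together with the proximal operator of psi (soft-thresholding).
    import numpy as np

    def composite_objective(x, client_losses, client_weights, lam):
        smooth = sum(p * F_i(x) for p, F_i in zip(client_weights, client_losses))
        return smooth + lam * np.sum(np.abs(x))

    def prox_l1(v, step, lam):
        """prox_{step*lam*||.||_1}(v): shrink each coordinate toward zero."""
        return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

    # One proximal-gradient step on the composite objective (schematic):
    #   x <- prox_l1(x - step * grad_F(x), step, lam)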

Federated Learning (FL), a distributed learning paradigm that scales on-device learning collaboratively and privately, has emerged as a promising approach for decentralized AI applications. Local optimization methods such as Federated Averaging (FedAvg) are the most prominent FL algorithms. Despite their simplicity and popularity, the theoretical understanding of local methods is far from clear. This dissertation aims to advance the theoretical foundation of local methods in the following three directions. First, we establish sharp bounds for FedAvg, the most popular algorithm in Federated Learning. We demonstrate...

10.48550/arxiv.2401.13216 preprint EN cc-by-nc-sa arXiv (Cornell University) 2024-01-01

Federated Averaging (FedAvg), also known as Local SGD, is one of the most popular algorithms in Federated Learning (FL). Despite its simplicity and popularity, the convergence rate of FedAvg has thus far been undetermined. Even under the simplest assumptions (convex, smooth, homogeneous, bounded covariance), the best-known upper and lower bounds do not match, and it is not clear whether the existing analysis captures the capacity of the algorithm. In this work, we first resolve this question by providing a lower bound for FedAvg that matches the existing upper bound, which shows...

10.48550/arxiv.2111.03741 preprint EN other-oa arXiv (Cornell University) 2021-01-01
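
For reference, here is a minimal simulation of the FedAvg / Local SGD procedure whose convergence rate is at issue above: M workers each take K local SGD steps, their iterates are averaged, and this repeats for R rounds. The quadratic objective, noise model, and step size are illustrative assumptions, not part of the paper's analysis.

    # Minimal Local SGD / FedAvg sketch: K local steps per worker, then averaging,
    # repeated for R communication rounds.
    import numpy as np

    def local_sgd(grad, x0, M, K, R, eta, seed=0):
        rng = np.random.default_rng(seed)
        x = np.tile(x0, (M, 1))              # one local iterate per worker
        for _ in range(R):                   # R communication rounds
            for _ in range(K):               # K local steps between rounds
                for m in range(M):
                    x[m] = x[m] - eta * grad(x[m], rng)
            x[:] = x.mean(axis=0)            # synchronize by averaging
        return x[0]

    # Toy usage: stochastic gradient of f(x) = 0.5 * ||x||^2 with additive noise
    noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    print(local_sgd(noisy_grad, np.ones(5), M=8, K=10, R=20, eta=0.1))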

We provide new gradient-based methods for efficiently solving a broad class of ill-conditioned optimization problems. We consider the problem of minimizing a function $f : \mathbb{R}^d \rightarrow \mathbb{R}$ which is implicitly decomposable as the sum of $m$ unknown non-interacting smooth, strongly convex functions, and we provide a method which solves this problem with a number of gradient evaluations that scales (up to logarithmic factors) as the product of the square-roots of the condition numbers of the components. This complexity bound (which we prove is nearly...

10.48550/arxiv.2111.03137 preprint EN other-oa arXiv (Cornell University) 2021-01-01
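
To illustrate the problem class, the toy example below builds a quadratic that is the sum of well-conditioned components living at widely separated scales, so the overall condition number is vastly larger than any component's; the specific scales are arbitrary choices for illustration only.

    # Toy illustration (arbitrary scales): f = f_1 + f_2 + f_3, where each block is
    # well-conditioned (kappa_i = 10) but the blocks live at scales separated by 1e4,
    # so the overall condition number is ~1e9.
    import numpy as np

    scales = [1.0, 1e-4, 1e-8]              # each block: eigenvalues in [s, 10*s]
    blocks = [np.diag([10 * s, s]) for s in scales]
    H = np.block([[blocks[0], np.zeros((2, 4))],
                  [np.zeros((2, 2)), blocks[1], np.zeros((2, 2))],
                  [np.zeros((2, 4)), blocks[2]]])

    component_kappas = [10.0, 10.0, 10.0]
    overall_kappa = H.diagonal().max() / H.diagonal().min()
    print(overall_kappa)                       # 1e9: the scale separation blows up kappa(f)
    print(np.prod(np.sqrt(component_kappas)))  # ~31.6: product of per-component sqrt(kappa_i)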