Zihan Chen

ORCID: 0000-0003-0814-3391
Research Areas
  • Privacy-Preserving Technologies in Data
  • Stochastic Gradient Optimization Techniques
  • Wireless Communication Security Techniques
  • Cryptography and Data Security
  • Distributed Sensor Networks and Detection Algorithms
  • Age of Information Optimization
  • Recommender Systems and Techniques
  • Quantum Computing Algorithms and Architecture
  • Advanced Manufacturing and Logistics Optimization
  • Complexity and Algorithms in Graphs
  • Sentiment Analysis and Opinion Mining
  • BIM and Construction Integration
  • Indoor and Outdoor Localization Technologies
  • Machine Learning and Data Classification
  • Human Mobility and Location-Based Analysis
  • Optimization and Packing Problems
  • Advanced MIMO Systems Optimization
  • Cooperative Communication and Network Coding
  • Quantum Information and Cryptography
  • Advanced Text Analysis Techniques
  • Advanced Wireless Communication Technologies
  • Ideological and Political Education
  • Quantum many-body systems
  • Mental Health via Writing
  • Innovations in Education and Learning Technologies

Singapore University of Technology and Design
2021-2025

Hefei National Center for Physical Sciences at Nanoscale
2024

University of Science and Technology of China
2024

Beijing Academy of Quantum Information Sciences
2024

Beijing Language and Culture University
2023

National University of Singapore
2022

Ocean University of China
2022

Federated learning (FL) is a privacy-preserving distributed learning paradigm that enables clients to jointly train a global model. In real-world FL implementations, client data could have label noise, and different clients could have vastly different noise levels. Although there exist methods in centralized learning for tackling label noise, such methods do not perform well on heterogeneous label noise in FL settings, due to the typically smaller sizes of client datasets and data privacy requirements in FL. In this paper, we propose FedCorr, a general multi-stage framework to tackle heterogeneous label noise in FL, without making any...

10.1109/cvpr52688.2022.00994 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01
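FedCorr operates on top of the standard federated averaging step, in which the server forms a sample-size-weighted average of client models. A minimal sketch of that baseline aggregation (illustrative only; the function name and toy inputs are not from the paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: convex combination of client models,
    weighted by each client's number of training samples.

    client_weights: list of 1-D numpy arrays (flattened model parameters)
    client_sizes:   number of training samples per client
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()          # normalized aggregation weights
    stacked = np.stack(client_weights)    # shape: (num_clients, num_params)
    return coeffs @ stacked               # weighted average of parameters

# toy example: two clients holding 1 and 3 samples respectively
w = fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [1, 3])
print(w)  # [2.5 3.5]
```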

The concept of hierarchical federated edge learning (H-FEEL) has been recently proposed as an enhancement of federated learning. Such a system generally consists of three entities, i.e., the server, helpers, and clients, in which each helper collects trained gradients from nearby clients, aggregates them, and sends the result to the server for the global model update. Due to limited communication resources, only a portion of the helpers can be scheduled to upload their aggregated gradients in each round of training. This necessitates a well-designed scheme...

10.1109/twc.2022.3144140 article EN IEEE Transactions on Wireless Communications 2022-01-27

Private 5G edge networks support secure and private services, spectrum flexibility, and intelligence. In this paper, we aim to design a dynamic scheduling policy to explore this flexibility for heterogeneous federated learning (FL) in such networks. Particularly, FL is implemented with multiple communication rounds, in each of which every scheduled device receives the global model from the server, updates its local model, and sends the updated model to the server for aggregation. The heterogeneity comes from unbalanced data sizes across devices...

10.1109/jstsp.2021.3126174 article EN IEEE Journal of Selected Topics in Signal Processing 2021-11-09

The rapid advancements in large language models (LLMs) have significantly enhanced their reasoning capabilities, driven by various strategies such as multi-agent collaboration. However, unlike the well-established performance improvements achieved through scaling data and model size, scaling the reasoning of LLMs is more complex and can even negatively impact performance, introducing new challenges in alignment and robustness. In this survey, we provide a comprehensive examination of scaling in LLM reasoning, categorizing it into...

10.48550/arxiv.2504.02181 preprint EN arXiv (Cornell University) 2025-04-02

We study a distributed machine learning problem carried out by an edge server and multiple agents in a wireless network. The objective is to minimize a global function that is the sum of the agents' local loss functions, and the optimization is conducted via analog over-the-air model training. Specifically, each agent modulates its local gradient onto a set of waveforms and transmits them simultaneously. From the received signal, the server extracts a noisy aggregated gradient, which is distorted by channel fading and interference, uses it to update the global model, and feeds the result back to all...

10.1109/jstsp.2021.3139231 article EN publisher-specific-oa IEEE Journal of Selected Topics in Signal Processing 2021-12-30
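The analog over-the-air aggregation described above can be sketched numerically: agents pre-scale their gradients against channel fading, the multi-access channel sums the transmissions, and the server recovers a noisy average. The function name and the channel-inversion power control here are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def ota_aggregate(gradients, h, noise_std=0.1):
    """Illustrative analog over-the-air aggregation (names hypothetical).

    Each agent pre-scales its gradient by 1/h_k to invert channel fading;
    the multi-access channel sums the analog transmissions, and the server
    divides the noisy aggregate by the number of agents.
    """
    tx = [g / h_k for g, h_k in zip(gradients, h)]   # channel-inversion precoding
    rx = sum(h_k * x for h_k, x in zip(h, tx))       # superposition over the air
    rx = rx + noise_std * rng.standard_normal(rx.shape)  # additive receiver noise
    return rx / len(gradients)                       # noisy estimate of the mean gradient

grads = [rng.standard_normal(4) for _ in range(10)]
est = ota_aggregate(grads, h=np.ones(10), noise_std=0.0)
# with zero noise and unit fading this recovers the exact mean gradient
```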


10.2139/ssrn.4762093 preprint EN 2024-01-01

The extraction of Metal-Organic Framework (MOF) synthesis conditions from literature text has been challenging but crucial for the logical design of new MOFs with desirable functionality. The recent advent of large language models (LLMs) provides a disruptively new solution to this long-standing problem, and the latest research has reported over 90% F1 in extracting correct conditions from the literature. We argue in this paper that most existing extraction practices with LLMs stay with primitive zero-shot learning, which could lead to downgraded application...

10.48550/arxiv.2408.04665 preprint EN arXiv (Cornell University) 2024-08-06

Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner. Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server. But the training efficiency is often hindered by challenges arising from limited communication resources and data heterogeneity. In this paper, we present a distributed training paradigm that employs analog over-the-air computation to alleviate the communication bottleneck. Additionally, we leverage...

10.1109/icassp49357.2023.10095533 article EN ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023-05-05

Federated Learning (FL), a promising privacy-preserving distributed learning paradigm, has been extensively applied in urban environmental prediction tasks of Mobile Edge Computing (MEC) by training a global machine learning model without data sharing. However, it is hard for the shared global model to be well generalized among local edge servers, due to statistical heterogeneity, especially in real-world data. Besides, existing FL approaches may result in excessive communication and computation overhead from frequent...

10.1109/secon55815.2022.9918588 article EN 2022-09-20

Federated Long-Tailed Learning (Fed-LT), a paradigm wherein data collected from decentralized local clients manifests a globally prevalent long-tailed distribution, has garnered considerable attention in recent times. In the context of Fed-LT, existing works have predominantly centered on addressing the data imbalance issue to enhance the efficacy of the generic global model, while neglecting the performance at the local level. In contrast, conventional Personalized Federated Learning (pFL) techniques are primarily devised to optimize personalized...

10.48550/arxiv.2401.08977 preprint EN public-domain arXiv (Cornell University) 2024-01-01
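Long-tailed benchmarks of the kind Fed-LT studies are commonly built by decaying per-class sample counts exponentially (CIFAR-10-LT style). A small sketch under that assumption; the function name and parameters are hypothetical, not taken from the paper:

```python
import numpy as np

def long_tailed_counts(num_classes=10, imbalance_factor=100, n_head=1000):
    """Exponentially decaying per-class sample counts, the common recipe
    for long-tailed benchmarks (illustrative sketch).

    imbalance_factor = n_head / n_tail, i.e. ratio between the most and
    least frequent classes.
    """
    ratios = imbalance_factor ** (-np.arange(num_classes) / (num_classes - 1))
    return np.round(n_head * ratios).astype(int)

counts = long_tailed_counts()
print(counts[0], counts[-1])  # 1000 10  (head vs. tail class)
```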

Personalized federated learning (PFL) has been widely investigated to address the challenge of data heterogeneity, especially when a single generic model is inadequate in satisfying the diverse performance requirements of local clients simultaneously. Existing PFL methods are inherently based on the idea that the relations between the global and personalized models are captured by similarity weights. Such methods rely primarily on either partitioning the model architecture into generic versus personalized components, or modeling client relationships via weights. To...

10.48550/arxiv.2401.17124 preprint EN arXiv (Cornell University) 2024-01-29

Random quantum circuit sampling serves as a benchmark to demonstrate quantum computational advantage. Recent progress in classical algorithms, especially those based on tensor network methods, has significantly reduced the simulation time and challenged the claim of the first-generation quantum advantage experiments. However, in terms of generating uncorrelated samples, time-to-solution, and energy consumption, previous classical simulations still underperform the Sycamore processor. Here we report an energy-efficient classical simulation algorithm, using 1432 GPUs...

10.1093/nsr/nwae317 article EN cc-by National Science Review 2024-09-12

Data possesses significant value as it fuels advancements in AI. However, protecting the privacy of data generated by end-user devices has become crucial. Federated Learning (FL) offers a solution by preserving data privacy during training. FL brings the model directly to User Equipments (UEs) for local training, coordinated by an access point (AP). The AP periodically aggregates the trained parameters from the UEs, enhancing the model and sending it back to them. However, due to communication constraints, only a subset of UEs can upload their updates in each global aggregation....

10.1109/cscn60443.2023.10453168 article EN 2023-11-06

Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks. Recent attempts have been launched to, on one side, address the problem of learning from pervasive private data, and on the other side, learn from long-tailed data. However, both assumptions might hold in practical applications, while an effective method to simultaneously alleviate both issues is yet under development. In this paper, we focus on learning with long-tailed (LT) data distributions under the context of the popular privacy-preserved federated learning (FL) framework. We...

10.48550/arxiv.2206.14988 preprint EN cc-by arXiv (Cornell University) 2022-01-01

We demonstrate that merely analog transmissions and match filtering can realize the function of an edge server in federated learning (FL). Therefore, a network with massively distributed user equipments (UEs) can achieve large-scale FL without an edge server. We also develop a training algorithm that allows UEs to continuously perform local computing without being interrupted by global parameter uploading, which exploits the full potential of the UEs' processing power. We derive convergence rates for the proposed schemes to quantify their...

10.1109/tsp.2024.3352405 article EN IEEE Transactions on Signal Processing 2024-01-01

In this work, we present a polynomial-time quantum algorithm for solving the ground states of a class of classically hard Hamiltonians. The mechanism of the exponential speedup that appears in our algorithm is different from that of all existing quantum algorithms. The idea is to introduce a mapping $f:\text{ }\rho\rightarrow |\rho\rangle$ and use density matrices to represent pure states. We show that this mapping makes sense by giving an efficient method to obtain the information of $|\rho\rangle$ from measurements on $\rho$. Under this mapping, the Lindblad master equation (LME)...

10.48550/arxiv.2401.13946 preprint EN cc-by arXiv (Cornell University) 2024-01-01
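The mapping $f:\rho\rightarrow|\rho\rangle$ is reminiscent of the standard vectorization of a density matrix. As an illustrative sketch (not necessarily the paper's exact construction), a normalized pure state can be associated with $\rho$ via

```latex
% Vectorization of a density matrix, normalized by the purity
% (illustrative; the paper's mapping f may differ):
|\rho\rangle \;=\; \frac{1}{\sqrt{\operatorname{Tr}(\rho^{2})}}
\sum_{i,j} \rho_{ij}\, |i\rangle \otimes |j\rangle
```

Quantities such as the purity $\operatorname{Tr}(\rho^{2})$ are indeed accessible from measurements on $\rho$ (e.g. via a SWAP test on two copies), consistent with the abstract's claim that information about $|\rho\rangle$ can be obtained from measurements on $\rho$.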

Random quantum circuit sampling serves as a benchmark to demonstrate quantum computational advantage. Recent progress in classical algorithms, especially those based on tensor network methods, has significantly reduced the simulation time and challenged the claim of the first-generation quantum advantage experiments. However, in terms of generating uncorrelated samples, time-to-solution, and energy consumption, previous classical simulations still underperform the Sycamore processor. Here we report an energy-efficient classical simulation algorithm,...

10.48550/arxiv.2406.18889 preprint EN arXiv (Cornell University) 2024-06-27

Leveraging over-the-air computations for model aggregation is an effective approach to cope with the communication bottleneck in federated edge learning. By exploiting the superposition properties of multi-access channels, this approach facilitates an integrated design of communication and computation, thereby enhancing system privacy while reducing implementation costs. However, the inherent electromagnetic interference in radio channels often exhibits heavy-tailed distributions, giving rise to exceptionally strong noise in the globally...

10.48550/arxiv.2409.15100 preprint EN arXiv (Cornell University) 2024-09-23
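When interference is heavy-tailed, mean-based aggregation can be dominated by a single extreme sample; robust statistics such as a coordinate-wise median are a standard remedy, shown here purely as an illustration and not as the method proposed in the paper:

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median of model updates: a standard
    heavy-tail-robust aggregator (illustrative only; not necessarily
    the scheme proposed in the paper)."""
    return np.median(np.stack(updates), axis=0)

clean = [np.array([1.0, 1.0])] * 9
outlier = [np.array([1000.0, -1000.0])]   # heavy-tailed interference spike
agg = coordinate_median(clean + outlier)
# agg == [1., 1.]; the single extreme sample does not move the estimate
```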