- Privacy-Preserving Technologies in Data
- Stochastic Gradient Optimization Techniques
- Wireless Communication Security Techniques
- Cryptography and Data Security
- Distributed Sensor Networks and Detection Algorithms
- Age of Information Optimization
- Recommender Systems and Techniques
- Quantum Computing Algorithms and Architecture
- Advanced Manufacturing and Logistics Optimization
- Complexity and Algorithms in Graphs
- Sentiment Analysis and Opinion Mining
- BIM and Construction Integration
- Indoor and Outdoor Localization Technologies
- Machine Learning and Data Classification
- Human Mobility and Location-Based Analysis
- Optimization and Packing Problems
- Advanced MIMO Systems Optimization
- Cooperative Communication and Network Coding
- Quantum Information and Cryptography
- Advanced Text Analysis Techniques
- Advanced Wireless Communication Technologies
- Ideological and Political Education
- Quantum Many-Body Systems
- Mental Health via Writing
- Innovations in Education and Learning Technologies
Singapore University of Technology and Design
2021-2025
Hefei National Center for Physical Sciences at Nanoscale
2024
University of Science and Technology of China
2024
Beijing Academy of Quantum Information Sciences
2024
Beijing Language and Culture University
2023
National University of Singapore
2022
Ocean University of China
2022
Federated learning (FL) is a privacy-preserving distributed learning paradigm that enables clients to jointly train a global model. In real-world FL implementations, client data could have label noise, and different clients could have vastly different noise levels. Although there exist methods in centralized learning for tackling label noise, such methods do not perform well on heterogeneous label noise in FL settings, due to the typically smaller sizes of client datasets and the data privacy requirements of FL. In this paper, we propose FedCorr, a general multi-stage framework to tackle heterogeneous label noise in FL, without making any...
The concept of hierarchical federated edge learning (H-FEEL) has been recently proposed as an enhancement of the federated learning model. Such a system generally consists of three entities, i.e., the server, helpers, and clients, in which each helper collects the trained gradients from nearby clients, aggregates them, and sends the result to the server for the global model update. Due to limited communication resources, only a portion of the helpers can be scheduled to upload their aggregated gradients in each round of training. That necessitates a well-designed scheme...
Private 5G edge networks support secure and private services, spectrum flexibility, and intelligence. In this paper, we aim to design a dynamic scheduling policy to explore this flexibility for heterogeneous federated learning (FL) in such networks. Particularly, FL is implemented over multiple communication rounds, in each of which a scheduled device receives the global model from the server, updates its local model, and sends the updated model back to the server for aggregation. The heterogeneity comes from unbalanced data sizes across devices...
The rapid advancements in large language models (LLMs) have significantly enhanced their reasoning capabilities, driven by various strategies such as multi-agent collaboration. However, unlike the well-established performance improvements achieved through scaling data and model size, the scaling of reasoning in LLMs is more complex and can even negatively impact reasoning performance, introducing new challenges in alignment and robustness. In this survey, we provide a comprehensive examination of scaling in LLM reasoning, categorizing it into...
We study a distributed machine learning problem carried out by an edge server and multiple agents in a wireless network. The objective is to minimize a global function that is a sum of the agents’ local loss functions. The optimization is conducted via analog over-the-air model training. Specifically, each agent modulates its local gradient onto a set of waveforms and transmits them simultaneously. From the received signal, the server extracts a noisy aggregated gradient, which is distorted by channel fading and interference, uses it to update the global model, and feeds the result back to all...
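The aggregation step described above can be sketched numerically. This is a minimal, hypothetical simulation (the fading model, noise scale, and normalization are illustrative assumptions, not the paper's exact system model): all agents transmit their gradients at once, the multi-access channel sums the faded signals, and the server recovers a noisy estimate of the average gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 10, 5

# Hypothetical local gradients; in a real system these come from local losses.
grads = rng.normal(size=(n_agents, dim))

# Analog over-the-air aggregation: simultaneous transmissions are summed by the
# channel, so the server observes one faded, noisy superposition.
fading = rng.rayleigh(scale=1.0, size=(n_agents, 1))  # per-agent channel gain
noise = rng.normal(scale=0.1, size=dim)               # receiver noise
received = (fading * grads).sum(axis=0) + noise

# Server-side estimate of the average gradient (assumes the mean gain is known).
est_avg_grad = received / (n_agents * fading.mean())
```

The key point is that the server never sees individual gradients, only their superposition, which is what makes the scheme both bandwidth-efficient and privacy-friendly.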
The extraction of Metal-Organic Frameworks (MOFs) synthesis conditions from literature text has been challenging but crucial for the logical design of new MOFs with desirable functionality. The recent advent of large language models (LLMs) provides a disruptively new solution to this long-standing problem, and the latest researches have reported over 90% F1 in extracting correct conditions from literature. We argue in this paper that most existing practices with LLMs stay at primitive zero-shot learning, which could lead to downgraded extraction and application...
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner. Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server. But the training efficiency is often hindered by challenges arising from limited communication resources and data heterogeneity. In this paper, we present a distributed training paradigm that employs analog over-the-air computation to alleviate the communication bottleneck. Additionally, we leverage...
Federated Learning (FL), a promising privacy-preserving distributed learning paradigm, has been extensively applied in urban environmental prediction tasks of Mobile Edge Computing (MEC) by training a global machine learning model without data sharing. However, it is hard for the shared model to be well generalized among local edge servers, due to statistical heterogeneity, especially in real-world data. Besides, existing FL approaches may result in excessive communication and computation overhead from frequent...
Federated Long-Tailed Learning (Fed-LT), a paradigm wherein data collected from decentralized local clients manifests a globally prevalent long-tailed distribution, has garnered considerable attention in recent times. In the context of Fed-LT, existing works have predominantly centered on addressing the data imbalance issue to enhance the efficacy of the generic global model while neglecting the performance at the local level. In contrast, conventional Personalized Federated Learning (pFL) techniques are primarily devised to optimize personalized...
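The "globally prevalent long-tailed distribution" mentioned above is commonly constructed by letting class sizes decay exponentially from head to tail. A minimal sketch (the head size and imbalance factor are illustrative assumptions, not values from the paper):

```python
n_classes = 10
imb_factor = 100   # ratio between the most and least frequent class
head = 1000        # samples in the most frequent (head) class

# Exponentially decaying class sizes: a standard long-tailed (LT) construction.
counts = [int(head * (1 / imb_factor) ** (c / (n_classes - 1)))
          for c in range(n_classes)]

print(counts[0], counts[-1])  # prints "1000 10": head has 100x the tail's samples
```

A generic global model trained on such data tends to be dominated by head classes, which is exactly the imbalance issue Fed-LT methods target.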
Personalized federated learning (PFL) has been widely investigated to address the challenge of data heterogeneity, especially when a single generic model is inadequate in satisfying the diverse performance requirements of local clients simultaneously. Existing PFL methods are inherently based on the idea that the relations between the global and personalized models are captured by similarity weights. Such methods primarily rely on either partitioning the model architecture into generic versus personalized components, or modeling client relationships via weights. To...
Random quantum circuit sampling serves as a benchmark to demonstrate quantum computational advantage. Recent progress in classical algorithms, especially those based on tensor network methods, has significantly reduced the simulation time and challenged the claim of the first-generation quantum advantage experiments. However, in terms of generating uncorrelated samples, time-to-solution, and energy consumption, previous experiments still underperform the Sycamore processor. Here we report an energy-efficient algorithm, using 1432 GPUs...
Data possesses significant value as it fuels advancements in AI. However, protecting the privacy of data generated by end-user devices has become crucial. Federated Learning (FL) offers a solution by preserving data privacy during training. FL brings the model directly to User Equipments (UEs) for local training, coordinated by an access point (AP). The AP periodically aggregates the trained parameters from the UEs, enhancing the model and sending it back to them. However, due to communication constraints, only a subset of UEs can upload updates in each global aggregation....
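The partial-participation constraint described above can be sketched as a single aggregation round in which only k of the UEs are scheduled. This is a minimal illustration (the function name, uniform-random scheduling, and unweighted FedAvg-style averaging are assumptions for the sketch, not the paper's policy):

```python
import random

def fl_round(global_params, ue_updates, k):
    """One aggregation round: only k of the UEs are scheduled to upload.

    ue_updates maps each UE id to its locally trained parameter vector;
    the AP averages the scheduled subset (FedAvg-style, unweighted).
    """
    scheduled = random.sample(list(ue_updates), k)
    return [sum(ue_updates[u][i] for u in scheduled) / k
            for i in range(len(global_params))]
```

Which k UEs to schedule, and how often, is precisely the design question that the communication constraints raise.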
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks. Recent attempts have been launched to, on one side, address the problem of learning from pervasive private data, and on the other, learn from long-tailed data. However, both issues might co-occur in practical applications, while an effective method to simultaneously alleviate them is yet under development. In this paper, we focus on learning with long-tailed (LT) data distributions under the context of the popular privacy-preserved federated learning (FL) framework. We...
We demonstrate that merely analog transmissions and match filtering can realize the function of an edge server in federated learning (FL). Therefore, a network with massively distributed user equipments (UEs) can achieve large-scale FL without an edge server. We also develop a training algorithm that allows UEs to continuously perform local computing without being interrupted by global parameter uploading, which exploits the full potential of the UEs' processing power. We derive convergence rates for the proposed schemes to quantify their...
In this work, we present a polynomial-time quantum algorithm for solving the ground states of a class of classically hard Hamiltonians. The mechanism of the exponential speedup that appears in our algorithm is different from all existing quantum algorithms. The idea is to introduce a mapping $f: \rho \rightarrow |\rho\rangle$ that uses density matrices to represent pure states. We show that this mapping makes sense by giving an efficient method to obtain the information of $|\rho\rangle$ from measurements on $\rho$. Under this mapping, the Lindblad master equation (LME)...
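One standard way to realize a density-matrix-to-pure-state mapping of this general kind is vectorization, which sends the matrix elements of $\rho$ to the amplitudes of a state in a doubled Hilbert space; the paper's specific construction of $f$ may well differ, so this is only an orienting sketch:

```latex
% Vectorization: a density matrix on H becomes a pure state on H \otimes H.
% For \rho = \sum_{ij} \rho_{ij} |i\rangle\langle j|, define (up to normalization)
|\rho\rangle \;=\; \frac{1}{\sqrt{\operatorname{Tr}(\rho^2)}}
                   \sum_{ij} \rho_{ij}\, |i\rangle \otimes |j\rangle .
```

Under this identification, superoperators acting on $\rho$ become ordinary operators acting on $|\rho\rangle$, via $A\rho B \mapsto (A \otimes B^{\mathsf T})\,|\rho\rangle$, which is what lets equations of Lindblad type be recast as linear evolutions of a state vector.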
Leveraging over-the-air computations for model aggregation is an effective approach to cope with the communication bottleneck in federated edge learning. By exploiting the superposition properties of multi-access channels, this approach facilitates an integrated design of communication and computation, thereby enhancing system privacy while reducing implementation costs. However, the inherent electromagnetic interference in radio channels often exhibits heavy-tailed distributions, giving rise to exceptionally strong noise in the globally...
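The practical difference between Gaussian and heavy-tailed interference can be seen in a tiny numerical comparison. This is an illustrative sketch using Cauchy draws as a stand-in for heavy-tailed interference (the paper's actual noise model and sample sizes are not specified here):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Light-tailed (Gaussian) vs. heavy-tailed (Cauchy) interference samples.
gauss = rng.normal(size=n)
cauchy = rng.standard_cauchy(size=n)

# Heavy tails produce rare but enormous outliers, so a plain average of
# received signals can be dominated by a single interference spike,
# while order statistics like the median stay stable.
print(np.abs(gauss).max(), np.abs(cauchy).max())
```

This is why aggregation rules that are robust to outliers, rather than plain averaging, become attractive when the channel noise is heavy-tailed.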