- Privacy-Preserving Technologies in Data
- Stochastic Gradient Optimization Techniques
- Cryptography and Data Security
- Cooperative Communication and Network Coding
- Advanced Wireless Communication Technologies
- Domain Adaptation and Few-Shot Learning
- Advanced Wireless Communication Techniques
- Wireless Communication Security Techniques
- Advanced Neural Network Applications
- Adversarial Robustness in Machine Learning
- Topic Modeling
- Satellite Communication Systems
- Smart Grid and Power Systems
- Recommender Systems and Techniques
- Neural Networks and Applications
- Advanced Memory and Neural Computing
- Brain Tumor Detection and Classification
- Power Line Communications and Noise
- Infrared Target Detection Methodologies
- Indoor and Outdoor Localization Technologies
- Privacy, Security, and Data Protection
- IoT and Edge/Fog Computing
- PAPR reduction in OFDM
- Advanced Image and Video Retrieval Techniques
- Electricity Theft Detection Techniques
Yonsei University
2024-2025
Purdue University West Lafayette
2023-2024
Nanjing University of Aeronautics and Astronautics
2023-2024
Korea Advanced Institute of Science and Technology
2017-2022
Tsinghua University
2020
Soongsil University
2013
To improve the efficiency of reinforcement learning, we propose a novel asynchronous federated learning framework termed AFedPG, which constructs a global model through collaboration among $N$ agents using policy gradient (PG) updates. To handle the challenge of lagged policies in asynchronous settings, we design delay-adaptive lookahead and normalized update techniques that can effectively handle heterogeneous arrival times of gradients. We analyze the theoretical convergence bound and characterize the advantage of the proposed algorithm in terms...
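The excerpt does not fully specify AFedPG's update rule, but the core idea of downweighting stale gradients can be sketched as follows; the `1/(1 + delay)` weight, the normalization, and `base_lr` are illustrative stand-ins, not the paper's exact scheme.

```python
import numpy as np

def async_pg_update(theta, grad, delay, base_lr=0.01):
    """One asynchronous policy-gradient step at the server.

    Normalizes the incoming gradient and scales it down by how stale it is.
    Both choices are illustrative sketches of a delay-adaptive rule, not
    AFedPG's actual update, which the excerpt does not fully specify.
    """
    g = grad / max(np.linalg.norm(grad), 1e-8)   # normalized update direction
    return theta + base_lr * g / (1.0 + delay)   # ascent step, damped by delay
```

A gradient that arrives one round late is applied at half the step size of a fresh one, so stale agents cannot dominate the global model.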
While network coverage maps continue to expand, many devices located in remote areas remain unconnected to terrestrial communication infrastructures, preventing them from getting access to the associated data-driven services. In this paper, we propose a ground-to-satellite cooperative federated learning (FL) methodology to facilitate machine learning service management over remote regions. Our methodology orchestrates satellite constellations to provide the following key functions during FL: (i) processing data offloaded from ground...
A fundamental challenge to providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) properties concurrently. Existing techniques in federated learning (FL) have encountered a steep tradeoff between these objectives and impose large computational requirements on edge devices during training and inference. In this paper, we propose SplitGP, a new split learning solution that can simultaneously capture both capabilities for efficient...
Fine-tuning large language models (LLMs) on devices is attracting increasing interest. Recent works have fused low-rank adaptation (LoRA) techniques with federated fine-tuning to mitigate challenges associated with device model sizes and data scarcity. Still, the heterogeneity of computational resources remains a critical bottleneck: while higher-rank modules generally enhance performance, varying device capabilities constrain LoRA's feasible rank range. Existing approaches attempting to resolve this issue...
While federated learning (FL) eliminates the transmission of raw data over a network, it is still vulnerable to privacy breaches from communicated model parameters. Differential privacy (DP) is often employed to address such issues. However, the impact of DP on FL in multi-tier networks -- where hierarchical aggregations couple noise injection decisions at different tiers, and trust models are heterogeneous across subnetworks -- is not well understood. To fill this gap, we develop \underline{M}ulti-Tier...
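As background for the DP mechanism the abstract refers to, a minimal sketch of differentially private aggregation of client updates is shown below: clip each update, average, and add Gaussian noise. The clipping norm and noise scale are illustrative hyperparameters, not values from the paper, and the multi-tier coupling the paper studies is not modeled here.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_std=0.1, rng=None):
    """Gaussian-mechanism sketch of DP aggregation in FL.

    Clips each client's update to clip_norm (bounding per-client sensitivity),
    averages the clipped updates, and perturbs the average with Gaussian noise.
    Hyperparameters are illustrative, not taken from the paper.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = max(np.linalg.norm(u), 1e-12)
        clipped.append(u * min(1.0, clip_norm / norm))
    avg = np.mean(clipped, axis=0)
    # Noise scale shrinks with the number of clients sharing the aggregate.
    noise = rng.normal(0.0, noise_std * clip_norm / len(client_updates),
                       size=avg.shape)
    return avg + noise
```

In a multi-tier network, the open question the paper targets is where in the aggregation hierarchy such noise should be injected, given that tiers differ in trust.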
Compared to traditional machine learning models, recent large language models (LLMs) can exhibit multi-task-solving capabilities through multiple dialogues and multi-modal data sources. These unique characteristics of LLMs, beyond their size, make deployment more challenging during the inference stage. Specifically, (i) deploying LLMs on local devices faces computational, memory, and energy resource issues, while (ii) deploying them in the cloud cannot guarantee real-time service and incurs communication/usage...
We consider federated learning (FL) with multiple wireless edge servers having their own local coverage. We focus on speeding up training in this increasingly practical setup. Our key idea is to utilize the clients located in the overlapping coverage areas among adjacent edge servers (ESs); in the model-downloading stage, these clients receive models from different ESs, take the average of the received models, and then update the averaged model with their local data. These clients then send the updated models to the ESs by broadcasting, which acts as a bridge for sharing the trained models between...
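The overlap-based mixing step described above can be sketched in a few lines; `lr` and the single gradient step are illustrative simplifications of a client's local training.

```python
import numpy as np

def overlap_client_step(es_models, local_grad, lr=0.1):
    """Local step for a client covered by several edge servers.

    Averages the models downloaded from all covering ESs, then takes one
    gradient step on local data. A simplified sketch of the paper's
    model-mixing idea; lr and the one-step update are illustrative.
    """
    avg = np.mean(es_models, axis=0)   # mix models across overlapping ESs
    return avg - lr * local_grad       # local update on the mixed model
```

Broadcasting the result back to all covering ESs is what lets overlap clients act as bridges between otherwise disjoint server regions.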
Multimodal federated learning (FL) aims to enrich model training in FL settings where clients are collecting measurements across multiple modalities. However, key challenges in multimodal FL remain unaddressed, particularly in heterogeneous network settings where: (i) the set of modalities collected by each client will be diverse, and (ii) communication limitations prevent clients from uploading all their locally trained modality models to the server. In this paper, we propose Federated learning with joint Modality and Client selection...
We suggest a general framework for network-coded Practical Byzantine Fault Tolerant (PBFT) consensus that enables agreement among distributed nodes under attacks. The suggested protocol generalizes existing replication and sharding schemes which are frequently used in current blockchain systems. Using the proposed algorithm, it is possible to reach consensus when the available bandwidth on individual links is considerably smaller compared to that required by conventional schemes. It is shown that there exists an upper bound...
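For reference, the classical PBFT tolerance and quorum bounds that the network-coded protocol generalizes can be stated compactly; this is the textbook baseline, not the paper's coded scheme.

```python
def pbft_bounds(n):
    """Classical PBFT bounds for n replicas.

    Tolerates up to f = (n - 1) // 3 Byzantine nodes, and each
    prepare/commit phase needs a quorum of 2f + 1 matching messages.
    Baseline facts about standard PBFT, not the paper's coded protocol.
    """
    f = (n - 1) // 3        # maximum Byzantine faults tolerated
    quorum = 2 * f + 1      # matching messages needed per phase
    return f, quorum
```

With n = 4 replicas, one Byzantine node is tolerated and each phase needs 3 matching messages; the coded framework aims to reach the same agreement with less per-link bandwidth.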
In wireless distributed computing systems, mobile devices that are connected wirelessly to the fog layer (e.g., small base stations) collaboratively solve a given computational task. Unfortunately, such systems suffer from packet losses due to severe channel fading. Moreover, a device can drop out of the system when leaving the coverage of the master node in the fog layer. We model this unreliability between the master and the devices as an erasure channel. When a packet fails to be detected at the receiver, the corresponding packet is retransmitted, which would significantly...
In this paper, we propose bi-directional cooperative non-orthogonal multiple access (NOMA). Compared to conventional NOMA, the main contributions of the proposed NOMA can be explained in two directions: 1) the proposed system is still efficient when the channel gains of the scheduled users are almost the same; 2) it operates well without accurate channel state information (CSI) at the base station (BS). For a two-user scenario, a closed-form ergodic capacity is derived and proven to be better than those of other techniques. Based on the capacity,...
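As background for the two-user comparison, the textbook downlink NOMA rates with superposition coding and successive interference cancellation (SIC) are sketched below; these are the standard formulas, not the paper's bi-directional scheme, and the gains/powers are illustrative.

```python
import numpy as np

def noma_rates(h_weak, h_strong, p_weak, p_strong, noise=1.0):
    """Two-user downlink NOMA rates (bits/s/Hz), textbook version.

    The weak user decodes its own signal treating the strong user's as
    interference; the strong user cancels the weak user's signal via SIC
    before decoding its own. Standard formulas, not the paper's scheme.
    """
    r_weak = np.log2(1 + p_weak * h_weak / (p_strong * h_weak + noise))
    r_strong = np.log2(1 + p_strong * h_strong / noise)
    return r_weak, r_strong
```

Conventional NOMA relies on the channel gains being well separated; when `h_weak` is nearly equal to `h_strong`, the rate gain over orthogonal access shrinks, which is the regime the proposed bi-directional scheme targets.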
Recent advances in large-scale distributed learning algorithms have enabled communication-efficient training via SignSGD. Unfortunately, a major issue continues to plague distributed learning: namely, Byzantine failures may incur serious degradation in accuracy. This paper proposes Election Coding, a coding-theoretic framework to guarantee Byzantine-robustness for SignSGD with Majority Vote, which uses minimum worker-master communication in both directions. The suggested framework explores new information-theoretic limits...
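The underlying SignSGD-with-Majority-Vote primitive is simple enough to sketch directly: each worker transmits only the sign of its gradient, and the master returns the coordinate-wise majority. This is the standard primitive Election Coding hardens, not the coding layer itself.

```python
import numpy as np

def signsgd_majority_vote(worker_grads):
    """SignSGD with Majority Vote (the primitive Election Coding protects).

    Workers send only gradient signs (one bit per coordinate); the master
    returns the coordinate-wise majority sign. Byzantine workers can flip
    their signs, which is the failure mode the coding framework addresses.
    """
    signs = np.sign(worker_grads)           # workers -> master: signs only
    return np.sign(np.sum(signs, axis=0))   # master -> workers: voted signs
```

Because the vote is one bit per coordinate in each direction, any robustness mechanism must work within this extreme quantization, which is what makes the coding-theoretic treatment nontrivial.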
The demand for intelligent services at the network edge has introduced several research challenges. One is the need for a machine learning architecture that achieves personalization (to individual clients) and generalization (to unseen data) properties concurrently across different applications. Another is an inference strategy that can satisfy resource and latency constraints at testing time. Existing techniques in federated learning have encountered a steep trade-off between personalization and generalization, and have not explicitly considered...
Federated learning (FL) is a promising approach for solving multilingual tasks, potentially enabling clients with their own language-specific data to collaboratively construct a high-quality neural machine translation (NMT) model. However, communication constraints in practical network systems present challenges in exchanging large-scale NMT engines between FL parties. In this paper, we propose a meta-learning-based adaptive parameter selection methodology, MetaSend, that improves the efficiency of...
Coded computation is a framework which provides redundancy in distributed computing systems to speed up large-scale computational tasks. Although most existing works assume error-free scenarios, link failures are common in current wired/wireless networks. In this paper, we consider the straggler problem with link failures, by modeling the links between the master node and the worker nodes as packet erasure channels. We first analyze the latency in this setting using an (n, k) maximum distance separable (MDS) code. Then, in the setup where...
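The (n, k) MDS coding idea referenced above can be illustrated with a toy coded matrix-vector multiplication: the master encodes k data blocks into n coded blocks, and any k worker results suffice to recover the product, masking both stragglers and erased links. The real-valued Vandermonde generator here is a standard illustration, not the paper's construction.

```python
import numpy as np

def mds_encode(A, n, k):
    """Encode A (row count divisible by k) into n coded blocks.

    Splits A row-wise into k blocks and mixes them with an n x k Vandermonde
    generator, so any k of the n coded results determine A @ x. A toy
    real-valued (n, k) MDS sketch, not the paper's exact code.
    """
    blocks = np.split(A, k)
    G = np.vander(np.arange(1, n + 1), k, increasing=True)  # n x k generator
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return coded, G

def mds_decode(results, ids, G):
    """Recover A @ x from any k worker results (results[i] = coded[ids[i]] @ x)."""
    coeffs = G[ids, :]               # k x k Vandermonde submatrix (invertible)
    Y = np.stack(results)            # k rows of coded partial products
    return np.linalg.solve(coeffs, Y).reshape(-1)
```

With n = 3 and k = 2, the master can finish as soon as any two of the three workers' packets arrive intact, so one straggler or one erased link costs nothing.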
Despite the huge success of object detection, the training process still requires an immense amount of labeled data. Active learning has been proposed as a practical solution, but existing works on active learning for object detection do not utilize the concept of epistemic uncertainty, which is an important metric for capturing the usefulness of a sample. Previous works also pay little attention to the relation between bounding boxes when computing the informativeness of an image. In this paper, we propose a new active learning strategy that addresses these two shortcomings...
This paper investigates spectral shaping of multi-carrier-modulation waveforms based on a combination of Nyquist windowing and subband filtering. The combined windowing/filtering allows simultaneous control of both the subband and subcarrier spectra. When compared with the existing windowing or filtering techniques under a fixed excess frame length constraint, the proposed scheme offers reduced sensitivity to carrier frequency and symbol timing offsets. Establishing an analytical tool, an error spectrum stack consisting of signals...
Distributed learning plays a key role in reducing the training time of modern deep neural networks with massive datasets. In this article, we consider a distributed learning problem where gradient computation is carried out over a number of computing devices at the wireless edge. We propose hierarchical broadcast coding, a provable coding-theoretic framework to speed up distributed learning at the wireless edge. Our contributions are threefold. First, motivated by the hierarchical nature of real-world edge computing systems, we propose a layered code which mitigates the effects of not only packet losses...