- Privacy-Preserving Technologies in Data
- Cryptography and Data Security
- Stochastic Gradient Optimization Techniques
- Adversarial Robustness in Machine Learning
- Blockchain Technology Applications and Security
- Evolutionary Algorithms and Applications
- Advanced Data Storage Technologies
- Advanced Adaptive Filtering Techniques
- Parallel Computing and Optimization Techniques
- Full-Duplex Wireless Communications
- Satellite Communication Systems
- Control Systems and Identification
- Caching and Content Delivery
- Surface Roughness and Optical Measurements
- Advanced Neural Network Applications
- Ethics and Social Impacts of AI
- Radio Frequency Integrated Circuit Design
- Advanced Power Amplifier Design
- Distributed and Parallel Computing Systems
- Advanced Algorithms and Applications
- Advanced MIMO Systems Optimization
- Advanced MEMS and NEMS Technologies
Daegu Gyeongbuk Institute of Science and Technology
2024
Samsung (United States)
2023
Southern California University for Professional Studies
2019-2023
University of Southern California
2019-2023
Texas A&M University
2021-2022
Korea Advanced Institute of Science and Technology
2012
Federated learning (FL) is a rapidly growing research field in machine learning. However, existing FL libraries cannot adequately support diverse algorithmic development; moreover, inconsistent dataset and model usage makes fair algorithm comparison challenging. In this work, we introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison. FedML supports three computing paradigms: on-device training for edge devices, distributed computing, and single-machine simulation. FedML also...
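Below is a minimal single-machine FedAvg simulation sketch (plain NumPy, hypothetical code that is not FedML's actual API) of the kind of experiment such a library standardizes: a fixed data split, a shared model, and one aggregation rule.

```python
# Minimal single-machine FedAvg simulation sketch (hypothetical, not FedML's API):
# same data split, same model, same aggregation rule for every algorithm compared.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression via gradient descent."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predictions
        w = w - lr * X.T @ (p - y) / len(y)        # gradient step
    return w

# Synthetic data partitioned across 4 simulated edge devices.
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(3)
for rnd in range(10):                              # communication rounds
    locals_ = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(locals_, axis=0)            # FedAvg aggregation
print("global model:", np.round(w_global, 3))
```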
Federated learning is a distributed framework for training machine learning models over the data residing at mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of state-of-the-art protocols grows quadratically with the number of users. In this article, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network of N users achieves a secure aggregation overhead of O(N log N), as opposed to O(N^2)...
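For scale, a back-of-the-envelope comparison of the two growth rates mentioned above (the constants are arbitrary and not from the paper):

```python
# Rough comparison of aggregation-overhead growth, O(N^2) vs. O(N log N);
# constants are arbitrary and illustrative only.
import math

for N in (100, 1_000, 10_000):
    quadratic = N * N
    nlogn = N * math.log2(N)
    print(f"N={N:>6}: N^2={quadratic:>12,}  NlogN={nlogn:>12,.0f}  "
          f"ratio={quadratic / nlogn:,.0f}x")
```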
Secure federated learning is a privacy-preserving framework to improve machine learning models by training over large volumes of data collected by mobile users. This is achieved through an iterative process in which, at each iteration, users update a global model using their local datasets. Each user then masks its local model via random keys, and the masked models are aggregated at a central server to compute the global model for the next iteration. As the local updates are protected by the masks, the server cannot observe their true values. This presents a major challenge for resilience against...
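A toy sketch of the masking idea described above, simplified to pairwise additive masks over small integers (no dropouts, no finite field), just to show why the masks cancel in the server's sum:

```python
# Toy illustration of mask-and-aggregate: each user adds pairwise random masks
# to its model; the masks cancel when the server sums the masked models, so the
# server learns only the aggregate. Simplified: no dropouts, plain integers.
import numpy as np

rng = np.random.default_rng(1)
n_users, dim = 4, 5
models = [rng.integers(0, 10, size=dim) for _ in range(n_users)]

# Pairwise masks: user i adds m[i][j] and subtracts m[j][i] for each peer j.
pair_masks = [[rng.integers(0, 100, size=dim) for _ in range(n_users)]
              for _ in range(n_users)]

masked = []
for i in range(n_users):
    m = models[i].copy()
    for j in range(n_users):
        if i == j:
            continue
        m = m + pair_masks[i][j] - pair_masks[j][i]
    masked.append(m)

# The server sees only masked models, yet their sum equals the true aggregate.
assert np.array_equal(sum(masked), sum(models))
print("aggregate:", sum(masked))
```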
How to train a machine learning model while keeping the data private and secure? We present CodedPrivateML, a fast and scalable approach to this critical problem. CodedPrivateML keeps both the data and the model information-theoretically private, while allowing efficient parallelization of training across distributed workers. We characterize CodedPrivateML's privacy threshold and prove its convergence for logistic (and linear) regression. Furthermore, via extensive experiments on Amazon EC2, we demonstrate that CodedPrivateML provides significant...
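Two of the building blocks behind schemes of this kind are fixed-point quantization into a prime field and secret sharing of the data across workers; a minimal sketch of both (not the paper's exact construction, which uses a coded-computing encoding) follows.

```python
# Sketch of two building blocks behind such schemes (not the paper's exact
# construction): quantizing real-valued data into a prime field, then
# additively secret-sharing it so no single worker sees the data.
import numpy as np

P = 2**13 - 1           # small prime field, chosen for illustration
SCALE = 256             # fixed-point quantization scale

def quantize(x):
    return np.round(x * SCALE).astype(np.int64) % P

def share(x_q, n_workers, rng):
    """Split x_q into n additive shares that sum to x_q mod P."""
    shares = [rng.integers(0, P, size=x_q.shape) for _ in range(n_workers - 1)]
    shares.append((x_q - sum(shares)) % P)
    return shares

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 3))                 # private training data
shares = share(quantize(X), n_workers=3, rng=rng)

# Any single share looks random; all shares together reconstruct the data.
recovered = sum(shares) % P
assert np.array_equal(recovered, quantize(X))
print("reconstruction matches quantized data")
```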
Large-scale deployments of low Earth orbit (LEO) satellites collect massive amounts of imagery and sensor data, which can empower machine learning (ML) to address global challenges such as real-time disaster navigation and mitigation. However, it is often infeasible to download all the high-resolution images and train these ML models on the ground because of limited downlink bandwidth, sparse connectivity, and regularization constraints on the imagery resolution. To address these challenges, we leverage Federated Learning (FL), where...
Secure aggregation is a critical component in federated learning (FL), which enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy of individual users in a single training round. We contend that such designs can lead to significant privacy leakages over multiple training rounds, due to partial user selection/participation at each round of FL. In fact, we show that conventional random user selection strategies in FL lead to leaking users' individual models within...
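A worked toy example (mine, not the paper's) of the multi-round issue: if two rounds' participant sets differ by a single user and local models are roughly static, subtracting the two aggregates isolates that user's model.

```python
# Toy example of multi-round leakage under partial participation: the
# participating sets of two rounds differ by one user, so the difference of
# the two aggregates exposes that user's individual model.
import numpy as np

rng = np.random.default_rng(3)
models = {u: rng.normal(size=4) for u in range(5)}   # per-user local models

round_1 = {0, 1, 2}          # users selected in round 1
round_2 = {0, 1, 2, 3}       # round 2 adds only user 3

agg_1 = sum(models[u] for u in round_1)   # all the server learns per round
agg_2 = sum(models[u] for u in round_2)

leaked = agg_2 - agg_1                    # isolates user 3 exactly
print("recovered user 3:", np.round(leaked, 3))
print("true user 3:     ", np.round(models[3], 3))
```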
Secure model aggregation is a key component of federated learning (FL) that aims at protecting the privacy of each user's individual model while allowing for their global aggregation. It can be applied to any aggregation-based FL approach for training a global or personalized model. Model aggregation also needs to be resilient against likely user dropouts in FL systems, making its design substantially more complex. State-of-the-art secure aggregation protocols rely on secret sharing of the random seeds used for mask generation at the users to enable the reconstruction...
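The following is a heavily simplified sketch of dropout handling with seed-derived pairwise masks; here surviving peers reveal the relevant seeds directly, standing in for the secret-sharing-based reconstruction the abstract refers to.

```python
# Simplified sketch of dropout handling with pairwise, seed-derived masks
# (seeds are revealed directly by surviving peers instead of being
# reconstructed from secret shares).
import numpy as np

def prg(seed, dim=4):
    return np.random.default_rng(seed).integers(0, 1000, size=dim)

n, dim = 4, 4
rng = np.random.default_rng(4)
models = [rng.integers(0, 10, size=dim) for _ in range(n)]
seeds = {(i, j): 1000 * i + j for i in range(n) for j in range(n) if i < j}

def mask(i):
    m = np.zeros(dim, dtype=np.int64)
    for j in range(n):
        if i < j:
            m += prg(seeds[(i, j)])
        elif j < i:
            m -= prg(seeds[(j, i)])
    return m

dropped = 3                                   # user 3 never sends its update
masked = [models[i] + mask(i) for i in range(n) if i != dropped]
partial = sum(masked)                         # masks w.r.t. user 3 do not cancel

# Survivors reveal their seeds shared with the dropped user, so the server
# can reconstruct and remove the leftover mask terms.
leftover = np.zeros(dim, dtype=np.int64)
for j in range(n):
    if j == dropped:
        continue
    s = seeds[(min(j, dropped), max(j, dropped))]
    leftover += prg(s) if j < dropped else -prg(s)

recovered = partial - leftover
assert np.array_equal(recovered, sum(models[i] for i in range(n) if i != dropped))
print("aggregate of surviving users:", recovered)
```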
We consider a collaborative learning scenario in which multiple data-owners wish to jointly train a logistic regression model, while keeping their individual datasets private from the other parties. We propose COPML, a fully-decentralized training framework that achieves scalability and privacy-protection simultaneously. The key idea of COPML is to securely encode the individual datasets to distribute the computation load effectively across many parties and to perform the training computations, as well as the model updates, in a distributed manner on the encoded data....
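As a toy illustration of computing on encoded data (additive shares over the reals for readability, not COPML's actual coding scheme), each party below computes a partial result on its share only, and the partial results combine into the exact X^T y used in a regression update.

```python
# Toy sketch of computing on encoded data: each party holds only an additive
# share of the dataset, computes a partial result, and linearity lets the
# partial results combine into the exact X^T y. Not COPML's actual encoding.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(6, 3))       # jointly held, privacy-sensitive features
y = rng.normal(size=6)            # labels

n_parties = 3
shares = [rng.normal(size=X.shape) for _ in range(n_parties - 1)]
shares.append(X - sum(shares))    # shares sum back to X

partials = [S.T @ y for S in shares]   # each party works on its share only
combined = sum(partials)

assert np.allclose(combined, X.T @ y)
print("X^T y reconstructed from per-party partial results")
```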
Outsourcing deep neural network (DNN) inference tasks to an untrusted cloud raises data privacy and integrity concerns. While there are many techniques to ensure privacy and integrity for polynomial-based computations, DNNs involve non-polynomial computations. To address these challenges, several privacy-preserving and verifiable inference techniques have been proposed, based on replacing the non-polynomial activation functions such as the rectified linear unit (ReLU) function with polynomial activation functions. Such techniques usually require polynomials with integer coefficients...
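A quick look at the substitution idea, using an illustrative least-squares quadratic fit to ReLU on a bounded interval (degree, interval, and fitting method are my choices, not the paper's):

```python
# Fit a low-degree polynomial to ReLU on a bounded interval and compare.
import numpy as np

x = np.linspace(-4, 4, 401)
relu = np.maximum(x, 0.0)

coeffs = np.polyfit(x, relu, deg=2)        # least-squares quadratic fit
poly = np.polyval(coeffs, x)

print("fitted coefficients:", np.round(coeffs, 4))
print("max |ReLU - poly| on [-4, 4]:", np.round(np.max(np.abs(relu - poly)), 4))
# Note: as the abstract points out, such schemes often further require integer
# (or finite-field) coefficients, which a plain least-squares fit does not give.
```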
Federated learning (FL) typically relies on synchronous training, which is slow due to stragglers. While asynchronous training handles stragglers efficiently, it does not ensure privacy due to its incompatibility with secure aggregation protocols. A buffered asynchronous training protocol known as FedBuff has been proposed recently which bridges the gap between synchronous and asynchronous training to mitigate stragglers and to also ensure privacy simultaneously. FedBuff allows users to send their updates asynchronously while ensuring privacy by storing the updates in a trusted execution environment (TEE) enabled private buffer....
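A minimal sketch of the buffered-asynchronous pattern described above (without the TEE or the secure aggregation layer): updates arrive in arbitrary order and the server applies them in batches of K once the buffer fills.

```python
# Minimal buffered asynchronous aggregation sketch: clients stream updates in
# arbitrary order; the server aggregates whenever the buffer reaches K entries.
import numpy as np

rng = np.random.default_rng(6)
dim, K = 4, 3                      # model size and buffer threshold
w_global = np.zeros(dim)
buffer = []

def client_update(w, delay_noise):
    """Stand-in for a (possibly stale) local update from some client."""
    return -0.1 * (w - rng.normal(size=dim)) + delay_noise

for t in range(10):                # updates stream in asynchronously
    buffer.append(client_update(w_global, 0.01 * rng.normal(size=dim)))
    if len(buffer) >= K:
        w_global = w_global + np.mean(buffer, axis=0)   # aggregate the buffer
        buffer.clear()
        print(f"step {t}: applied a buffered aggregate of {K} updates")
```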
Federated learning is a distributed framework for training machine learning models over the data residing at mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of state-of-the-art protocols grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network of $N$ users achieves a secure aggregation overhead of $O(N\log{N})$, as opposed to $O(N^2)$, while tolerating up to a user dropout rate...
Secure federated learning is a privacy-preserving framework to improve machine learning models by training over large volumes of data collected by mobile users. This is achieved through an iterative process in which, at each iteration, users update a global model using their local datasets. Each user then masks its local model via random keys, and the masked models are aggregated at a central server to compute the global model for the next iteration. As the local models are protected by the masks, the server cannot observe their true values. This presents a major challenge for resilience against adversarial...
Existing auto-encoder (AE)-based channel state information (CSI) feedback frameworks have focused on a specific configuration of the user equipment (UE) and base station (BS), and thus the input and output sizes of the AE are fixed. However, in a real-world scenario, these sizes may vary depending on the number of antennas of the BS and UE and on the allocated resource blocks in the frequency dimension. A naive approach to support the different sizes is to use multiple AE models, which is impractical for the UE due to limited HW resources. In this paper, we propose a universal AE framework that can...
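One simple way to serve several CSI sizes with a single model, shown only to frame the problem (it is not necessarily the universal framework proposed in the paper), is to zero-pad every input up to a maximum size and truncate the reconstruction back to the original size:

```python
# One model for several CSI sizes via zero-padding and truncation (a generic
# baseline sketch, not necessarily the paper's proposed framework).
import torch
import torch.nn as nn

MAX_DIM, CODE_DIM = 256, 32        # illustrative sizes

class PaddedAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(MAX_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, CODE_DIM))
        self.dec = nn.Sequential(nn.Linear(CODE_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, MAX_DIM))

    def forward(self, csi):                            # csi: (batch, d), d <= MAX_DIM
        d = csi.shape[-1]
        x = nn.functional.pad(csi, (0, MAX_DIM - d))   # zero-pad to MAX_DIM
        return self.dec(self.enc(x))[..., :d]          # truncate back to d

model = PaddedAE()
for d in (64, 128, 256):                               # different UE/BS configurations
    csi = torch.randn(8, d)
    out = model(csi)
    print(d, "->", tuple(out.shape))
```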
A new digital predistortion (DPD) technique based on envelope feedback is proposed for the linearization of power amplifiers (PAs). Unlike conventional DPD techniques that need frequency down converters (FDCs) in the feedback path to recover the complex PA output, the proposed technique does not require an FDC. Instead, it employs two envelope detectors, estimating the envelopes of the output and of the difference signal between the input and output. It is shown that the phase distortion can be estimated from the envelope feedbacks if the sign of the phase distortion of the AM-PM characteristic remains the same for all magnitudes. Simulation results...
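For context, a toy memoryless DPD example follows; unlike the envelope-only estimation proposed here, it assumes full knowledge of the AM-AM and AM-PM curves and simply inverts them:

```python
# Toy memoryless DPD: invert the PA's amplitude compression and pre-rotate its
# phase so the cascade DPD -> PA is close to linear. Assumes known AM-AM and
# AM-PM curves, unlike the envelope-feedback estimation in the paper.
import numpy as np

def pa(x):
    """Memoryless PA model: soft amplitude compression plus AM-PM rotation."""
    r = np.abs(x)
    gain = 1.0 / (1.0 + 0.3 * r**2)            # AM-AM compression
    phase = 0.2 * r**2                          # AM-PM in radians
    return x * gain * np.exp(1j * phase)

# Build an inverse amplitude table from a swept envelope.
r_in = np.linspace(0, 1.2, 500)
r_out = np.abs(pa(r_in))                        # output envelope vs input envelope
phi = 0.2 * r_in**2                             # AM-PM at each input level

def dpd(x):
    r_target = np.abs(x)
    r_pre = np.interp(r_target, r_out, r_in)    # invert AM-AM by interpolation
    phase_pre = -np.interp(r_pre, r_in, phi)    # cancel AM-PM
    return r_pre * np.exp(1j * (np.angle(x) + phase_pre))

x = 0.8 * np.exp(1j * np.linspace(0, 2 * np.pi, 8, endpoint=False))
raw = np.max(np.abs(pa(x) - x))
linear = np.max(np.abs(pa(dpd(x)) - x))
print(f"max error without DPD: {raw:.3f}, with DPD: {linear:.3f}")
```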
This study about data prefetching focuses on maximizing the performance of modern processors by hiding cache misses. The paper suggests that improving prefetch coverage is an effective approach to achieve this goal. The work proposes to employ two simple buffers, a block offset buffer and an address buffer, to leverage the coverage. The former stores blocks that were accessed recently, while the latter stores blocks that were prefetch-issued lately. In addition, we propose to adopt multiple lengths of delta history in searching for patterns, compared with using a single-length global...
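A generic sketch of delta-history prefetching with multiple match lengths (buffer sizes, lengths, and policies are illustrative, not the paper's): try the longest recent delta pattern first and fall back to shorter ones to improve coverage.

```python
# Generic multi-length delta-history prefetcher sketch: match the longest
# recent run of deltas against older history, falling back to shorter runs.
from collections import deque

class DeltaPrefetcher:
    def __init__(self, history_len=16, match_lengths=(3, 2, 1)):
        self.addr_history = deque(maxlen=history_len)   # recently accessed blocks
        self.match_lengths = match_lengths              # longest pattern first

    def access(self, block):
        """Record an access and return a predicted next block (or None)."""
        self.addr_history.append(block)
        hist = list(self.addr_history)
        deltas = [b - a for a, b in zip(hist, hist[1:])]
        for L in self.match_lengths:
            if len(deltas) <= L:
                continue
            recent = deltas[-L:]
            # Search older history for the same run of deltas.
            for i in range(len(deltas) - L - 1, -1, -1):
                if deltas[i:i + L] == recent:
                    return block + deltas[i + L]        # prefetch candidate
        return None

pf = DeltaPrefetcher()
for blk in [100, 101, 103, 104, 106, 107]:              # alternating deltas 1, 2
    print(f"access {blk} -> prefetch {pf.access(blk)}")
```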