- Privacy-Preserving Technologies in Data
- Cryptography and Data Security
- Stochastic Gradient Optimization Techniques
- Recommender Systems and Techniques
- Privacy, Security, and Data Protection
- Advanced Graph Neural Networks
- Internet Traffic Analysis and Secure E-voting
- Mobile Crowdsensing and Crowdsourcing
- Adversarial Robustness in Machine Learning
- Data Management and Algorithms
- Advanced Neural Network Applications
- Magnetic and transport properties of perovskites and related materials
- Brain Tumor Detection and Classification
- Video Analysis and Summarization
- Caching and Content Delivery
- Geochemistry and Elemental Analysis
- Speech and dialogue systems
- Human Pose and Action Recognition
- Machine Learning and ELM
- Machine Learning and Algorithms
- Imbalanced Data Classification Techniques
- Grief, Bereavement, and Mental Health
- Magnetic properties of thin films
- Autonomous Vehicle Technology and Safety
- Sports Analytics and Performance
Emory University
2024
Tsinghua University
2024
Renmin University of China
2019-2023
Beijing Normal University
2022
Nanjing University
2020
Federated Learning (FL) is a promising machine learning paradigm that enables the analyzer to train a model without collecting users' raw data. To ensure privacy, differentially private federated learning has been intensively studied. The existing works are mainly based on the curator model or the local model of differential privacy. However, both of them have pros and cons. The curator model allows greater accuracy but requires a trusted analyzer. In the local model, where users randomize their data before sending them to the analyzer, a trusted analyzer is not required, but the accuracy is limited. In this work, by...
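As a rough illustration of the curator-versus-local trade-off this abstract describes, the minimal numpy sketch below contrasts a trusted server adding noise once to the aggregate (curator model) with each client perturbing its own update (local model). The names, clipping rule, and noise calibrations are illustrative assumptions, not the paper's algorithm.

import numpy as np

def clip(update, clip_norm):
    # Standard DP preprocessing: bound each update's L2 norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def curator_dp_fedavg(client_updates, clip_norm=1.0, noise_mult=1.0):
    # Curator model: clients send clipped updates to a trusted analyzer,
    # which adds Gaussian noise once to the aggregate.
    clipped = [clip(u, clip_norm) for u in client_updates]
    agg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(client_updates)
    return agg + np.random.normal(0.0, sigma, size=agg.shape)

def local_dp_fedavg(client_updates, clip_norm=1.0, noise_mult=1.0):
    # Local model: each client perturbs its own update before sending it,
    # so no trusted analyzer is needed, at the cost of more total noise.
    noisy = [clip(u, clip_norm) + np.random.normal(0.0, noise_mult * clip_norm, size=u.shape)
             for u in client_updates]
    return np.mean(noisy, axis=0)

updates = [np.random.randn(5) for _ in range(10)]   # toy 5-dimensional updates
print(curator_dp_fedavg(updates))
print(local_dp_fedavg(updates))

With these illustrative calibrations the curator aggregate carries roughly 1/sqrt(n) of the noise of the local-model aggregate, which mirrors the accuracy gap the abstract refers to.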
Federated learning (FL) is an important paradigm for training global models from decentralized data in a privacy-preserving way. Existing FL methods usually assume that the global model can be trained on any participating client. However, in real applications, the devices of clients are heterogeneous and have different computing power. Although big models like BERT have achieved huge success in AI, it is difficult to apply them to heterogeneous federated learning with weak clients. The straightforward solutions, such as removing the weak clients or using a small model to fit all clients, will...
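One common way to handle the device heterogeneity described above is to let each client train only a capacity-matched slice of the global weights and have the server average the overlapping entries, in the spirit of width-scaling approaches such as HeteroFL. The sketch below is a generic illustration under that assumption, not necessarily the scheme this paper proposes.

import numpy as np

rng = np.random.default_rng(0)
GLOBAL_W = rng.normal(size=(64, 64))          # one layer of the global model

def extract(w, ratio):
    # A weak client receives only the top-left slice that fits its capacity.
    r, c = int(w.shape[0] * ratio), int(w.shape[1] * ratio)
    return w[:r, :c].copy()

def aggregate(global_w, sub_updates):
    # Average each entry over the clients that actually trained it;
    # entries nobody touched keep their previous values.
    acc, cnt = np.zeros_like(global_w), np.zeros_like(global_w)
    for sub in sub_updates:
        r, c = sub.shape
        acc[:r, :c] += sub
        cnt[:r, :c] += 1
    out = global_w.copy()
    mask = cnt > 0
    out[mask] = acc[mask] / cnt[mask]
    return out

subs = []
for ratio in (0.25, 0.5, 1.0):                # clients with 25%, 50%, 100% capacity
    sub = extract(GLOBAL_W, ratio)
    subs.append(sub - 0.01 * rng.normal(size=sub.shape))   # stands in for local SGD
GLOBAL_W = aggregate(GLOBAL_W, subs)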
News recommendation is critical for personalized news access. Most existing news recommendation methods rely on centralized storage of users' historical click behavior data, which may lead to privacy concerns and hazards. Federated Learning is a privacy-preserving framework for multiple clients to collaboratively train models without sharing their private data. However, the computation and communication costs of directly learning many existing news recommendation models in a federated way are unacceptable for user clients. In this paper, we propose an efficient...
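A minimal sketch of the efficiency idea in this line of work: a client downloads only the embeddings of the news it actually interacted with and trains the small user-side component locally, instead of pulling and updating the full news model. The model shapes and update rule below are illustrative assumptions, not the paper's architecture.

import numpy as np

rng = np.random.default_rng(0)
NEWS_EMB = {nid: rng.normal(size=16) for nid in range(1000)}   # kept on the server
W_USER = rng.normal(size=(16, 16)) * 0.1                       # small user encoder

def client_step(clicked_ids, candidate_id, label):
    # Download only the embeddings this client needs.
    clicked = np.stack([NEWS_EMB[i] for i in clicked_ids])
    cand = NEWS_EMB[candidate_id]
    user_vec = np.tanh(clicked.mean(axis=0) @ W_USER)          # tiny user model
    score = 1.0 / (1.0 + np.exp(-user_vec @ cand))             # click probability
    err = score - label                                        # logistic-loss gradient signal
    # Gradient w.r.t. the small user model only; the big news model stays remote.
    grad_W = np.outer(clicked.mean(axis=0), err * cand * (1 - user_vec ** 2))
    return grad_W

grad = client_step(clicked_ids=[3, 17, 256], candidate_id=42, label=1.0)
W_USER -= 0.05 * grad   # the server would average such updates across clients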
Although Split Federated Learning (SFL) effectively enables knowledge sharing among resource-constrained clients, it suffers from low training performance due to the neglect of data heterogeneity and catastrophic forgetting problems. To address these issues, we propose a novel SFL approach named MultiSFL, which adopts i) an effective multi-model aggregation mechanism to alleviate gradient divergence caused by heterogeneous data and ii) a replay strategy to deal with the catastrophic forgetting problem. MultiSFL maintains two servers (i.e., fed...
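For reference, a single split-learning training step looks roughly like the sketch below: the client runs the front of the network, sends only the cut-layer activations, the server finishes the forward/backward pass, and the cut-layer gradient travels back. This is generic SFL plumbing, not MultiSFL's multi-model aggregation or replay strategy.

import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # resides on the device
server_net = nn.Sequential(nn.Linear(64, 10))              # resides on the server
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.1)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))

smashed = client_net(x)                          # client forward up to the cut layer
sent = smashed.detach().requires_grad_(True)     # only this tensor crosses the network

loss = nn.functional.cross_entropy(server_net(sent), y)
opt_s.zero_grad()
loss.backward()                                  # server backward; also fills sent.grad
opt_s.step()

opt_c.zero_grad()
smashed.backward(sent.grad)                      # cut-layer gradient returns to the client
opt_c.step()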
Federated recommendation can potentially alleviate the privacy concerns in collecting sensitive and personal data for training personalized recommendation systems. However, it suffers from low recommendation quality when local serving is inapplicable due to the resource limitation of querying clients, so that online serving is required. Furthermore, a theoretically private solution for both federated training and online serving is essential but still lacking. Naively applying differential privacy (DP) to the two stages would fail to achieve a satisfactory trade-off between utility...
Federated Learning (FL) enables collaborative learning among large-scale distributed clients without data sharing. However, due to the disparity of computing resources among massive mobile devices, the performance of traditional homogeneous model-based FL is seriously limited. On the one hand, to achieve model training on all the diverse clients, FL systems can only use small low-performance models for learning. On the other hand, devices with high computing resources cannot train a high-performance large model with their insufficient raw data. To address...
We use magnetic force microscopy (MFM) to study the spatial uniformity of magnetization in epitaxially grown MnBi2Te4 thin films. Compared to films which exhibit no quantum anomalous Hall effect (QAH), films with QAH are observed to have more uniform magnetization and larger domain size. The evolution of the domain structure upon field sweeping indicates that the domains or nonuniformity originate from strong pinning by inherent sample inhomogeneity. A direct correlation between resistivity and domain size has been established by analyzing a series of films with and without QAH. Our...
The pre-training and fine-tuning paradigm has demonstrated its effectiveness and has become the standard approach for tailoring language models to various tasks. Currently, community-based platforms offer easy access to various pre-trained models, as anyone can publish them without strict validation processes. However, a released pre-trained model can be a privacy trap for fine-tuning datasets if it is carefully designed. In this work, we propose PreCurious, a framework to reveal the new attack surface where the attacker releases the pre-trained model and gets black-box access to the final fine-tuned...
As location-based applications are flourishing, we will soon witness a prodigious amount of spatial data being stored in the public cloud, with the geometric range query as one of the most fundamental search functions. The rising demand for outsourced spatial data is moving toward larger-scale datasets and wider-scope query size. To protect the confidentiality of the geographic information of individuals, the spatial data at the server should be preserved, especially when they are queried. While the problem of secure range queries on encrypted spatial data has been extensively studied, current schemes are far from...
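As a toy illustration of this setting (and not the scheme the paper proposes), one classic building block for range search over outsourced locations is to bucket points into grid cells and store only keyed hashes of the cell identifiers, so the server matches tokens without seeing coordinates. The grid size, key handling, and leakage profile below are deliberately simplistic assumptions.

import hmac, hashlib

KEY = b"shared-secret"     # held by the data owner and authorized clients
CELL = 0.01                # grid resolution in degrees (illustrative)

def cell_token(lat, lon):
    cell_id = f"{int(lat // CELL)}:{int(lon // CELL)}".encode()
    return hmac.new(KEY, cell_id, hashlib.sha256).hexdigest()

# The owner outsources HMAC tokens instead of plaintext coordinates.
points = [(39.9042, 116.4074), (39.9100, 116.4200), (31.2304, 121.4737)]
index = {}
for i, (lat, lon) in enumerate(points):
    index.setdefault(cell_token(lat, lon), []).append(i)

def range_query(lat_min, lat_max, lon_min, lon_max):
    # The client derives the token of every cell intersecting its rectangle.
    hits, la = [], int(lat_min // CELL)
    while la <= int(lat_max // CELL):
        lo = int(lon_min // CELL)
        while lo <= int(lon_max // CELL):
            tok = hmac.new(KEY, f"{la}:{lo}".encode(), hashlib.sha256).hexdigest()
            hits += index.get(tok, [])
            lo += 1
        la += 1
    return hits

print(range_query(39.90, 39.92, 116.40, 116.43))   # the first two points fall inside the rectangle

Note that this toy leaks access patterns and handles only axis-aligned rectangles, which is exactly the kind of gap secure geometric range query schemes aim to close.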
We investigate the phased evolution and variation of the South Asian monsoon and the resulting weathering intensity and physical erosion in the Himalaya–Karakoram Mountains since late Pliocene time (c. 3.4 Ma) using a comprehensive approach. Neodymium and strontium isotopic compositions and single-grain zircon U–Pb age spectra reveal the sources of the deposits in the east Arabian Sea, and show a combination of sources from the Himalaya and the Karakoram–Kohistan–Ladakh Mountains, together with sediments from the Indian Peninsula such as the Deccan Traps or Craton. We interpret...
As massive data are produced from small gadgets, federated learning on mobile devices has become an emerging trend. In the federated setting, Stochastic Gradient Descent (SGD) has been widely used in federated learning for various machine learning models. To prevent privacy leakages from gradients that are calculated on users' sensitive data, local differential privacy (LDP) has recently been considered as a privacy guarantee for federated SGD. However, existing solutions have a dimension dependency problem: the injected noise is substantially proportional to the dimension $d$. In this work, we propose...
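To make the dimension-dependency issue concrete, the schematic below compares perturbing all $d$ gradient coordinates (Laplace scale growing with $d$) against reporting only $k$ randomly sampled coordinates rescaled for unbiasedness (scale growing with $k$). The clipping, budget accounting, and sampling rule are illustrative assumptions, not the mechanism this work proposes.

import numpy as np

def ldp_gradient(grad, eps, k=None, clip=1.0):
    g = np.clip(grad, -clip, clip)
    d = len(g)
    if k is None:
        # Report every coordinate: L1 sensitivity 2*clip*d, so noise scale ~ d/eps.
        return g + np.random.laplace(0.0, 2.0 * clip * d / eps, size=d)
    # Report only k sampled coordinates: sensitivity now depends on k, not d.
    idx = np.random.choice(d, size=k, replace=False)
    out = np.zeros(d)
    out[idx] = (d / k) * (g[idx] + np.random.laplace(0.0, 2.0 * clip * k / eps, size=k))
    return out   # unbiased estimate of the clipped gradient

grad = np.random.randn(10_000)
full = ldp_gradient(grad, eps=1.0)            # per-coordinate noise scale ~ 2*d/eps
sampled = ldp_gradient(grad, eps=1.0, k=10)   # per-coordinate noise scale ~ 2*k/eps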
As location-based applications are flourishing, we will soon witness the transferring of a prodigious amount of data from local devices to the public cloud. The rising demand for outsourced data is moving toward a wider geographical area with arbitrary data distribution (i.e., dense or sparse) and query scope (i.e., limited or vast). In terms of cloud risks, individual data should be preserved when being queried, especially location information. Geometric range queries are one of the most fundamental search functions. However, existing works on secure...
Data release, such as statistics of the data distribution, is needed in many analysis and machine learning tasks, which poses significant risks to users' privacy. Usually, to preserve the privacy of every individual, frequency estimation based on LDP (Local Differential Privacy) is used to replace the real distribution of the data. Unfortunately, when an individual sends values multiple times, privacy leakage, i.e., the same-value problem, may occur, along with other performance and memory usage problems. To narrow these gaps,...
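For context, the standard LDP baseline for this task is k-ary (generalized) randomized response followed by unbiasing of the observed counts; the sketch below shows that baseline only, not the improved mechanism this paper develops for repeated reports.

import numpy as np

def grr_perturb(value, domain, eps, rng):
    # Keep the true value with probability p, otherwise report a uniform other value.
    k = len(domain)
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    others = [v for v in domain if v != value]
    return others[rng.integers(len(others))]

def grr_estimate(reports, domain, eps):
    # Invert the perturbation probabilities to recover unbiased frequencies.
    k, n = len(domain), len(reports)
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = 1.0 / (np.exp(eps) + k - 1)
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

rng = np.random.default_rng(0)
domain = list(range(8))
true_values = rng.integers(0, 8, size=5000)
reports = [grr_perturb(int(v), domain, eps=2.0, rng=rng) for v in true_values]
print(grr_estimate(reports, domain, eps=2.0))   # roughly uniform, about 0.125 each

Under this baseline a user who reports the same value many times spends privacy budget on every report, which is precisely the repeated-report leakage the abstract targets.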
Federated Learning (FL) is a promising machine learning paradigm that enables the analyzer to train a model without collecting users' raw data. To ensure privacy, differentially private federated learning has been intensively studied. The existing works are mainly based on the \textit{curator model} or \textit{local model} of differential privacy. However, both of them have pros and cons. The curator model allows greater accuracy but requires a trusted analyzer. In the local model, where users randomize their data before sending them to the analyzer, a trusted analyzer is not...
Although enormous progress has been made by Deep Neural Networks (DNNs) in basic perception tasks, they have long been criticized for their lack of reasoning quality and interpretability. Raven's Progressive Matrices (RPMs) are standard tests for assessing the human Intelligence Quotient, and also act as a tool to evaluate Artificial Intelligence. Existing methods, whether pure DNN or combining DNN with other modules, make it difficult to confirm the DNN's ability of logical reasoning. Hybrid algorithms perform the reasoning with modules that are not DNN; pure DNN methods use end-to-end training, while detached modules...
We consider a quasi-Bayesian method that combines a frequentist estimation approach in the first stage and a Bayesian estimation/inference approach in the second stage. The study is motivated by structural discrete choice models that use the control function methodology to correct for endogeneity bias. In this scenario, the control function is estimated in the first stage using some parametric or nonparametric approach. The structural equation in the second stage, associated with certain complicated likelihood functions, can be more conveniently dealt with in a Bayesian framework. This paper studies the asymptotic...
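In schematic form, the two-stage structure described here can be written as follows; the notation ($\gamma$ for first-stage parameters, $v$ for the control function, $\theta$ for the structural parameters) is generic and not taken from the paper.

% Stage 1 (frequentist): estimate the control function from the first-stage equation
\hat{\gamma} = \arg\max_{\gamma} Q_n(\gamma), \qquad \hat{v}_i = v(x_i, z_i; \hat{\gamma}).

% Stage 2 (Bayesian): plug \hat{v}_i into the structural likelihood and form a quasi-posterior
\pi_n(\theta \mid \text{data}, \hat{\gamma}) \propto \pi(\theta) \prod_{i=1}^{n} L\big(y_i \mid x_i, \hat{v}_i; \theta\big).

Inference on $\theta$ is then based on draws from this quasi-posterior, with the asymptotic analysis accounting for the sampling error carried over from $\hat{\gamma}$.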