Xiangrui Xu

ORCID: 0000-0003-3759-4706
Research Areas
  • Privacy-Preserving Technologies in Data
  • Adversarial Robustness in Machine Learning
  • Cryptography and Data Security
  • Blockchain Technology Applications and Security
  • Internet Traffic Analysis and Secure E-voting
  • Advanced Neural Network Applications
  • Stochastic Gradient Optimization Techniques
  • Image Retrieval and Classification Techniques
  • Generative Adversarial Networks and Image Synthesis
  • Advanced Image Processing Techniques
  • Advanced Steganography and Watermarking Techniques
  • Complexity and Algorithms in Graphs
  • Digital Media Forensic Detection
  • Machine Learning in Healthcare
  • Advanced Data Compression Techniques
  • Network Security and Intrusion Detection
  • Spam and Phishing Detection
  • Cloud Computing and Resource Management
  • Music and Audio Processing
  • Robotics and Automated Systems
  • Access Control and Trust
  • Anomaly Detection Techniques and Applications
  • Cloud Data Security Solutions
  • Speech Recognition and Synthesis
  • Opportunistic and Delay-Tolerant Networks

Beijing Jiaotong University
2022-2025

Northwestern Polytechnical University
2024

Old Dominion University
2024

Wuhan Polytechnic University
2020-2021

Empirical attacks on Federated Learning (FL) systems indicate that FL is fraught with numerous attack surfaces throughout its execution. These attacks can not only cause models to fail on specific tasks, but also infer private information. While previous surveys have identified the risks, listed the attack methods available in the literature, or provided a basic taxonomy to classify them, they mainly focused on the risks in the training phase of FL. In this work, we survey the threats, attacks, and defenses across the whole FL process in three phases,...

10.1186/s42400-021-00105-6 article EN cc-by Cybersecurity 2022-02-02
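For orientation on where those attack surfaces sit, the sketch below shows a single, heavily simplified FedAvg round in Python/NumPy. The phase comments are generic illustrations of common attack points, not the survey's own taxonomy or phase names.

# Minimal FedAvg round (NumPy sketch); phase comments are generic, not the survey's taxonomy.
import numpy as np

def local_update(global_w, data, lr=0.1, epochs=1):
    """Local training: poisoned data or malicious clients enter here."""
    w = global_w.copy()
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # plain linear-regression gradient
        w -= lr * grad
    return w

def aggregate(client_weights):
    """Communication/aggregation: transmitted updates leak gradients, which a
    curious server can target with reconstruction or inference attacks."""
    return np.mean(client_weights, axis=0)

# Deployment/inference: the released global model remains exposed to
# membership inference, extraction, and evasion attacks.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(32, 5)), rng.normal(size=32)) for _ in range(4)]
global_w = np.zeros(5)
for _ in range(10):                               # federated rounds
    updates = [local_update(global_w, d) for d in clients]
    global_w = aggregate(updates)
print("global weights:", global_w)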

Data reconstruction attacks have become an emerging privacy threat to Federated Learning (FL), inspiring a rethinking of FL's ability to protect privacy. While existing data reconstruction attacks have shown effective performance, prior arts rely on different strong assumptions to guide the reconstruction process. In this work, we propose a novel Conditional Generative Instance Reconstruction Attack (CGIR attack) that drops all these assumptions. Specifically, we perform batch label inference in non-IID FL scenarios, where multiple images...

10.1109/tdsc.2022.3228302 article EN IEEE Transactions on Dependable and Secure Computing 2022-12-12
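The CGIR batch-inference procedure itself is not spelled out in the truncated abstract; as background on why labels can leak from shared gradients at all, the sketch below reproduces the classic single-sample observation (iDLG-style): for cross-entropy loss, only the true class receives a negative last-layer bias gradient. Batch label inference methods generalize this idea.

# Sketch: single-sample label inference from the last layer's gradient (iDLG-style).
# Illustrates the general idea behind label inference in gradient leakage attacks,
# not the CGIR batch procedure from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.randn(1, 1, 28, 28)          # one private image
y = torch.tensor([7])                  # its (secret) label

loss = nn.functional.cross_entropy(model(x), y)
grads = torch.autograd.grad(loss, model.parameters())

# For cross-entropy with one sample, dL/db_j = softmax_j - 1[j == y],
# so only the true class gets a negative bias gradient.
bias_grad = grads[-1]                  # gradient of the last layer's bias
inferred = torch.argmin(bias_grad).item()
print("inferred label:", inferred, "true label:", y.item())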

Deep classification models have been widely utilized in Vehicular Road Cooperation. However, previous work indicates that deep models are vulnerable to the privacy risks of Membership Inference Attacks (MIAs). Most existing MIAs are based on two different assumptions. One assumes adversary-owned shadow datasets whose tasks and distributions are aligned with the private datasets, an assumption that requires the adversary to know the private datasets. The other, distinct from the first, requires that the boundaries between members and non-members...

10.1109/jiot.2024.3380642 article EN IEEE Internet of Things Journal 2024-03-22
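The truncated abstract contrasts shadow-dataset and decision-boundary assumptions; as background, the sketch below shows the simplest confidence-thresholding MIA, a common baseline that needs neither shadow datasets nor boundary estimation. It is illustrative only, not the attack proposed in the paper, and the confidence values are hypothetical.

# Baseline confidence-threshold membership inference (a common MIA baseline,
# not the attack proposed in the paper).
import numpy as np

def membership_guess(confidences, threshold=0.9):
    """Flag a record as a training member if the target model's top-class
    confidence on it exceeds the threshold (members tend to be overfit)."""
    return confidences >= threshold

# Hypothetical top-class confidences returned by the target model.
member_conf = np.array([0.99, 0.97, 0.92, 0.88])     # records seen in training
nonmember_conf = np.array([0.71, 0.95, 0.64, 0.55])  # unseen records

guesses = membership_guess(np.concatenate([member_conf, nonmember_conf]))
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print("attack accuracy:", np.mean(guesses.astype(int) == labels))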

The power of deep learning and the enormous effort and money required to build a model make stealing models a hugely worthwhile and highly lucrative endeavor. Worse still, the theft requires little more than a high-school understanding of how computers function, which ensures a healthy, vibrant black market full of choice for any would-be pirate. As such, estimating how many neural network models are likely to be illegally reproduced and distributed in the future is almost impossible. Therefore, we propose an embedded `identity...

10.1109/access.2020.2998784 article EN cc-by IEEE Access 2020-01-01

Deep neural networks (DNNs) with state-of-the-art performance have emerged as a viable and lucrative business service. However, those impressive performances require a large amount of computational resources, which comes at a high cost for model creators. The necessity of protecting DNN models from illegal reproduction and distribution appears salient now. Recently, trigger-set watermarking, which breaks the white-box restriction by relying on adversarial training of pre-defined (incorrect) labels for crafted inputs,...

10.48550/arxiv.1911.08053 preprint EN other-oa arXiv (Cornell University) 2019-01-01
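As background on the trigger-set scheme the abstract refers to, the sketch below shows the black-box verification step: the owner queries a suspect model with the crafted trigger inputs and checks how often it returns the pre-defined (incorrect) labels. This illustrates the general scheme, not the specific construction studied in the paper; load_trigger_set and the threshold are hypothetical.

# Black-box trigger-set verification sketch (generic scheme, not the paper's exact construction).
import torch

@torch.no_grad()
def verify_ownership(suspect_model, trigger_inputs, trigger_labels, tau=0.8):
    preds = suspect_model(trigger_inputs).argmax(dim=1)
    match_rate = (preds == trigger_labels).float().mean().item()
    return match_rate, match_rate >= tau   # claim ownership if agreement >= tau

# Hypothetical usage:
# triggers, labels = load_trigger_set()        # crafted inputs + assigned labels
# rate, owned = verify_ownership(suspect, triggers, labels)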

The prevalent use of Transformer-like models, exemplified by ChatGPT in modern language processing applications, underscores the critical need for enabling private inference, which is essential for many cloud-based services reliant on such models. However, current privacy-preserving frameworks impose a significant communication burden, especially for the non-linear computation in the Transformer model. In this paper, we introduce a novel plug-in method, Comet, to effectively reduce the communication cost without compromising performance. We...

10.48550/arxiv.2405.17485 preprint EN arXiv (Cornell University) 2024-05-24
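Comet's actual mechanism is not described in the truncated abstract. As generic background on why non-linear layers dominate the cost of private Transformer inference, the sketch below fits a low-degree polynomial to GELU, a common workaround in MPC/HE frameworks where additions and multiplications are cheap but exact non-linearities are not. This is illustrative only and not Comet's technique; the degree and fitting range are assumptions.

# Background sketch: polynomial stand-in for GELU on a bounded range (not Comet's method).
import torch

def fit_poly_gelu(degree=2, lo=-3.0, hi=3.0, n=2001):
    """Least-squares polynomial fit to GELU on [lo, hi] (illustrative)."""
    x = torch.linspace(lo, hi, n)
    y = torch.nn.functional.gelu(x)
    A = torch.stack([x**k for k in range(degree + 1)], dim=1)   # Vandermonde matrix
    return torch.linalg.lstsq(A, y.unsqueeze(1)).solution.squeeze(1)

coeffs = fit_poly_gelu()
x = torch.linspace(-3, 3, 7)
approx = sum(c * x**k for k, c in enumerate(coeffs))
print("exact GELU :", torch.nn.functional.gelu(x))
print("poly approx:", approx)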

Running multi-task DNNs on mobiles is an emerging trend for various applications like autonomous driving and mobile NLP. Mobile DNNs are often compressed to fit the limited resources and thus suffer from degraded accuracy and generalizability due to data drift. DNN evolution, e.g., continuous learning and domain adaptation, has been demonstrated to be effective in overcoming these issues, but mostly for single-task DNNs, leaving multi-task DNN evolution an important yet open challenge. To fill this gap, we propose AdaBridge, which exploits...

10.48550/arxiv.2407.00016 preprint EN arXiv (Cornell University) 2024-05-02

The rise of mobile devices equipped with numerous sensors, such as LiDAR and cameras, has spurred the adoption of multi-modal deep intelligence for distributed sensing tasks, such as smart cabins and driving assistance. However, the arrival times of sensory data vary due to modality size and network dynamics, which can lead to delays (if waiting for slower data) or accuracy decline (if inference proceeds without waiting). Moreover, the diversity and dynamic nature of these systems exacerbate this challenge. In response, we present a shift...

10.48550/arxiv.2410.24028 preprint EN arXiv (Cornell University) 2024-10-31

Machine Learning-as-a-Service (MLaaS) systems have been largely developed for cybersecurity-critical applications, such as detecting network intrusions and fake news campaigns. Despite their effectiveness, their robustness against adversarial attacks is one of the key trust concerns for MLaaS deployment. We are thus motivated to assess the adversarial robustness of the Machine Learning models residing at the core of these security-critical applications with categorical inputs. Previous research efforts on assessing model robustness via manipulated inputs are specific...

10.1109/bigdata55660.2022.10021026 article EN 2021 IEEE International Conference on Big Data (Big Data) 2022-12-17
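To make the categorical-input setting concrete, the sketch below shows a generic greedy value-substitution search: try replacing each feature's value and keep the single change that lowers the model's score for the true class the most. It is a simplified illustration of adversarial manipulation of categorical inputs, not the attack framework proposed in the paper; score_fn, vocab, and budget are hypothetical.

# Generic greedy attack on categorical inputs (illustrative, not the paper's framework).
def greedy_categorical_attack(score_fn, x, vocab, budget=2):
    """score_fn(x) -> probability of the correct class; lower means more adversarial.
    x: list of categorical values; vocab[i]: allowed values for feature i."""
    x_adv = list(x)
    for _ in range(budget):
        best = (score_fn(x_adv), None, None)
        for i, values in enumerate(vocab):
            for v in values:
                if v == x_adv[i]:
                    continue
                cand = x_adv[:i] + [v] + x_adv[i + 1:]
                s = score_fn(cand)
                if s < best[0]:
                    best = (s, i, v)
        if best[1] is None:          # no single substitution helps any more
            break
        x_adv[best[1]] = best[2]
    return x_adv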

Amid the maturity of machine learning, deep neural networks are gradually being applied in the business sector rather than being restricted to the laboratory. However, their intellectual property protection encounters a significant challenge. In this paper, we aim at embedding a unique identity number (ID) into the network for model ownership verification. To this end, a scheme for generating the DNN ID is proposed. After embedding, the network can retain its original performance and owns a unique ID as well. Only the ID generated by the owner can pass the check...

10.1117/12.2540293 article EN 2020-02-14
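The paper's ID-generation criterion is cut off in the abstract above. As a generic illustration of white-box ownership marking, the sketch below adds a regularizer that drives a projection of a layer's weights toward a secret binary ID, so the ID can later be read back from the weights. This is in the spirit of regularizer-based watermarking, not the authors' scheme; the layer, projection matrix, and stand-in task loss are assumptions.

# Generic white-box ID embedding via a weight regularizer (illustrative sketch only).
import torch
import torch.nn as nn

torch.manual_seed(0)
id_bits = torch.randint(0, 2, (64,)).float()     # secret identity number
layer = nn.Linear(128, 128)
proj = torch.randn(layer.weight.numel(), 64)     # owner-held projection matrix

def id_regularizer(weight, proj, id_bits):
    logits = weight.flatten() @ proj
    return nn.functional.binary_cross_entropy_with_logits(logits, id_bits)

opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
x_task, y_task = torch.randn(256, 128), torch.randn(256, 128)
for _ in range(200):
    task_loss = nn.functional.mse_loss(layer(x_task), y_task)   # stand-in task loss
    loss = task_loss + 0.1 * id_regularizer(layer.weight, proj, id_bits)
    opt.zero_grad(); loss.backward(); opt.step()

extracted = (layer.weight.flatten() @ proj > 0).float()
print("fraction of ID bits recovered:", (extracted == id_bits).float().mean().item())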

This paper explores conditional image generation with a One-Vs-All classifier based on Generative Adversarial Networks (GANs). Instead of the real/fake discriminator used in vanilla GANs, we propose an extended discriminator (GAN-OVA) that can distinguish each input data by its category label. Specifically, we feed certain additional information as conditions to the generator and take the discriminator to identify each category. Our model can be applied to different divergences or distances to define the objective function, such as the Jensen-Shannon divergence and the Earth-Mover (or...

10.48550/arxiv.2009.08688 preprint EN other-oa arXiv (Cornell University) 2020-01-01
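A minimal sketch of one plausible reading of the setup described above: the discriminator outputs K+1 scores (K real categories plus one "generated" class) instead of a binary real/fake score, while the generator is conditioned on the target class label. This is illustrative only and not necessarily the exact GAN-OVA formulation; the network sizes and losses are assumptions.

# Minimal class-aware conditional GAN sketch (one reading of GAN-OVA, illustrative only).
import torch
import torch.nn as nn

K, Z = 10, 64                                    # number of classes, noise dim

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z + K, 256), nn.ReLU(),
                                 nn.Linear(256, 28 * 28), nn.Tanh())
    def forward(self, z, labels):
        onehot = nn.functional.one_hot(labels, K).float()
        return self.net(torch.cat([z, onehot], dim=1))

discriminator = nn.Sequential(nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, K + 1))   # class index K means "generated"

G, ce = Generator(), nn.CrossEntropyLoss()
z = torch.randn(8, Z)
labels = torch.randint(0, K, (8,))
fake = G(z, labels)

# Discriminator step (sketch): real images get their true class, fakes get class K.
# d_loss = ce(discriminator(real_images), real_labels) \
#        + ce(discriminator(fake.detach()), torch.full((8,), K))
# Generator step: push fakes toward the conditioned class.
g_loss = ce(discriminator(fake), labels)
print("generator loss:", g_loss.item())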