Kaidi Xu

ORCID: 0000-0003-4437-0671
Research Areas
  • Adversarial Robustness in Machine Learning
  • Anomaly Detection Techniques and Applications
  • Advanced Neural Network Applications
  • Domain Adaptation and Few-Shot Learning
  • Natural Language Processing Techniques
  • Topic Modeling
  • Integrated Circuits and Semiconductor Failure Analysis
  • Generative Adversarial Networks and Image Synthesis
  • Sparse and Compressive Sensing Techniques
  • Advanced Malware Detection Techniques
  • Advanced Image and Video Retrieval Techniques
  • Stochastic Gradient Optimization Techniques
  • Explainable Artificial Intelligence (XAI)
  • Advanced Graph Neural Networks
  • Privacy-Preserving Technologies in Data
  • COVID-19 diagnosis using AI
  • Neural Networks and Applications
  • Vestibular and auditory disorders
  • Speech and dialogue systems
  • Digital Media Forensic Detection
  • Model Reduction and Neural Networks
  • Multimodal Machine Learning Applications
  • Cryptography and Data Security
  • Autonomous Vehicle Technology and Safety
  • Misinformation and Its Impacts

Drexel University
2022-2024

Wenzhou University
2024

Northwestern Polytechnical University
2024

University of California, Irvine
2023

Henan University of Science and Technology
2023

Universidad del Noreste
2019-2021

Chinese Academy of Sciences
2021

Institute of Computing Technology
2021

Tianjin First Center Hospital
2016-2020

Tianjin Medical University
2016-2020

Graph neural networks (GNNs), which apply deep learning to graph data, have achieved significant performance on the task of semi-supervised node classification. However, only few works have addressed the adversarial robustness of GNNs. In this paper, we first present a novel gradient-based attack method that facilitates the difficulty of tackling discrete graph data. When compared with current adversarial attacks on GNNs, the results show that by perturbing only a small number of edges, including both addition and deletion, our optimization-based...

10.24963/ijcai.2019/550 article EN 2019-07-28
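
The record truncates the abstract; as a rough, hypothetical sketch of a gradient-based topology attack on discrete graph data (not the paper's exact formulation), one can relax the binary edge-flip variables, take gradients of the classification loss, and apply only the few highest-scoring flips:

```python
# Hypothetical sketch: gradient-guided edge flips against a GNN.
# Names (gnn_model, adj, features, labels) are placeholders, not from the paper.
import torch
import torch.nn.functional as F

def topology_attack(gnn_model, adj, features, labels, budget=5):
    """Pick `budget` edge flips (addition or deletion) by gradient magnitude."""
    adj = adj.clone().float()
    adj.requires_grad_(True)

    logits = gnn_model(features, adj)          # forward pass on the relaxed adjacency
    loss = F.cross_entropy(logits, labels)     # the attacker wants to increase this loss
    loss.backward()

    # A flip changes A_ij from 0->1 or 1->0, so score each entry by the loss
    # increase a flip would cause under a first-order approximation.
    flip_gain = adj.grad * (1 - 2 * adj.detach())   # +grad if edge absent, -grad if present
    top = torch.topk(flip_gain.flatten(), budget).indices

    perturbed = adj.detach().clone()
    perturbed.view(-1)[top] = 1 - perturbed.view(-1)[top]   # apply the chosen flips
    return perturbed
```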

Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Therefore, ensuring their trustworthiness emerges as an important topic. This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, evaluation and analysis of the trustworthiness of mainstream LLMs, and discussion...

10.48550/arxiv.2401.05561 preprint EN cc-by-nc-sa arXiv (Cornell University) 2024-01-01

It is well known that deep neural networks (DNNs) are vulnerable to adversarial attacks, which are implemented by adding crafted perturbations onto benign examples. Min-max robust optimization based adversarial training can provide a notion of security against such attacks. However, adversarial robustness requires a significantly larger capacity of the network than natural training with only benign examples. This paper proposes a framework of concurrent adversarial training and weight pruning that enables model compression while still preserving adversarial robustness, and essentially tackles the dilemma...

10.1109/iccv.2019.00020 article EN 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019-10-01
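
A loose sketch of the general idea of combining min-max adversarial training with pruning, assuming a standard PyTorch model and data loader; plain magnitude pruning stands in here for the ADMM-based compression used in the paper:

```python
# Rough sketch: alternate PGD adversarial training with magnitude pruning.
# `model`, `loader`, and all hyper-parameters are illustrative placeholders.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find a worst-case perturbation within an L-inf ball."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()

def magnitude_prune(model, sparsity=0.8):
    """Zero out the smallest-magnitude entries of each weight matrix."""
    for p in model.parameters():
        if p.dim() > 1:
            k = int(sparsity * p.numel())
            thresh = p.abs().flatten().kthvalue(k).values
            p.data[p.abs() <= thresh] = 0.0

def robust_prune_train(model, loader, optimizer, epochs=5):
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)          # inner max
            loss = F.cross_entropy(model(x_adv), y)  # outer min
            optimizer.zero_grad(); loss.backward(); optimizer.step()
        magnitude_prune(model)                       # compress after each epoch
```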

When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example. However, such attacks perturbing the raw input space may fail to capture structural information hidden in the input. This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images aiming for extracting key spatial structures. An ADMM...

10.48550/arxiv.1808.01664 preprint EN other-oa arXiv (Cornell University) 2018-01-01
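
A minimal, hypothetical illustration of the group-sparsity ingredient only (not the paper's full ADMM solver): split a perturbation into non-overlapping patches and keep the groups with the largest L2 norms, which concentrates the attack on a few spatial regions.

```python
# Illustrative group-sparsity projection on an image perturbation.
# The patch size and the number of kept groups are arbitrary choices for this sketch.
import torch

def group_sparse_project(delta, patch=8, keep=4):
    """Keep the `keep` patches of `delta` with the largest L2 norm, zero the rest."""
    c, h, w = delta.shape
    out = torch.zeros_like(delta)
    # Collect (norm, position) for every non-overlapping patch.
    norms = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            norms.append((delta[:, i:i+patch, j:j+patch].norm().item(), i, j))
    norms.sort(key=lambda t: t[0], reverse=True)
    for _, i, j in norms[:keep]:
        out[:, i:i+patch, j:j+patch] = delta[:, i:i+patch, j:j+patch]
    return out
```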

Deep neural networks (DNNs), as the basis of object detection, will play a key role in the development of future autonomous systems with full autonomy. Such systems have special requirements for real-time, energy-efficient implementations of DNNs on a power-budgeted system. Two research thrusts are dedicated to performance and energy efficiency enhancement of the inference phase of DNNs. The first one is model compression techniques, while the second is efficient hardware implementations. Recent researches on extremely-low-bit CNNs, such as the binary neural network...

10.1145/3289602.3293904 article EN 2019-02-20

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense. The majority of LiRPA-based methods focus on simple feed-forward networks and need particular manual derivations and implementations when extended to other architectures. In this paper, we develop an automatic framework to enable perturbation analysis on any neural network structures, by...

10.48550/arxiv.2002.12920 preprint EN other-oa arXiv (Cornell University) 2020-01-01
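
To make the bound-propagation idea concrete, here is a small sketch using interval bounds through one affine layer plus ReLU; this is a much coarser relative of the linear relaxation in LiRPA, but the propagation flow is analogous:

```python
# Minimal sketch of bound propagation: exact interval bounds through an affine
# layer, followed by the trivial ReLU relaxation. LiRPA-style methods propagate
# *linear* bounds instead, which are tighter, but the flow is the same.
import numpy as np

def affine_bounds(W, b, lower, upper):
    """Given elementwise input bounds lower <= x <= upper, bound W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lower = W_pos @ lower + W_neg @ upper + b
    out_upper = W_pos @ upper + W_neg @ lower + b
    return out_lower, out_upper

def relu_bounds(lower, upper):
    return np.maximum(lower, 0), np.maximum(upper, 0)

# Usage: bound a tiny 2-layer network under an L-inf perturbation of radius eps.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)
x, eps = np.array([0.5, -0.2, 0.1]), 0.05

l, u = affine_bounds(W1, b1, x - eps, x + eps)
l, u = relu_bounds(l, u)
l, u = affine_bounds(W2, b2, l, u)
print("output lower bounds:", l, "upper bounds:", u)
```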

Graph neural networks (GNNs), which apply deep learning to graph data, have achieved significant performance on the task of semi-supervised node classification. However, only few works have addressed the adversarial robustness of GNNs. In this paper, we first present a novel gradient-based attack method that facilitates the difficulty of tackling discrete graph data. When compared with current adversarial attacks on GNNs, the results show that by perturbing only a small number of edges, including both addition and deletion, our optimization-based...

10.48550/arxiv.1906.04214 preprint EN other-oa arXiv (Cornell University) 2019-01-01

Robust machine learning is currently one of the most prominent topics, which could potentially help shape a future of advanced AI platforms that not only perform well in average cases but also in worst-case or adverse situations. Despite this long-term vision, however, existing studies on black-box adversarial attacks are still restricted to very specific settings of threat models (e.g., a single distortion metric and restrictive assumptions on the target model's feedback to queries) and/or suffer from prohibitively high...

10.1109/iccv.2019.00021 article EN 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019-10-01
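
The abstract is cut off here; as a generic, hypothetical illustration of the score-based black-box setting (estimating gradients purely from query feedback, not the specific attack framework proposed in the paper):

```python
# Sketch: zeroth-order (finite-difference) gradient estimation, the basic tool
# behind many score-based black-box attacks. `query_loss` stands for any
# function we can only evaluate, e.g. the target model's loss on an input.
import numpy as np

def zoo_gradient(query_loss, x, sigma=1e-3, n_samples=50, rng=None):
    """Estimate the gradient of query_loss at x using random finite differences."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (query_loss(x + sigma * u) - query_loss(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

# Toy usage: the "model" is a quadratic, so the estimate should point along 2*x.
f = lambda v: float(np.sum(v ** 2))
x0 = np.array([1.0, -2.0, 0.5])
print(zoo_gradient(f, x0, rng=np.random.default_rng(0)))
```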

Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized natural language understanding and generation. They possess deep language comprehension, human-like text generation capabilities, contextual awareness, and robust problem-solving skills, making them invaluable in various domains (e.g., search engines, customer support, translation). In the meantime, LLMs have also gained traction in the security community, revealing security vulnerabilities and showcasing their potential in security-related tasks. This paper...

10.48550/arxiv.2312.02003 preprint EN public-domain arXiv (Cornell University) 2023-01-01

Recently, many graph based hashing methods have emerged to tackle large-scale problems. However, there exist two major bottlenecks: (1) directly learning discrete hashing codes is an NP-hard optimization problem; (2) the complexity of both storage and computational time to build a graph with n data points is O(n^2). To address these problems, in this paper, we propose a novel yet simple supervised hashing method, asymmetric discrete hashing, by preserving the discrete constraint and building an asymmetric affinity matrix to learn compact binary...

10.1609/aaai.v31i1.10831 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2017-02-13

It is widely known that convolutional neural networks (CNNs) are vulnerable to adversarial examples: images with imperceptible perturbations crafted to fool classifiers. However, the interpretability of these adversarial examples is less explored in the literature. This work aims to better understand their roles and provide visual explanations from pixel, image and network perspectives. We show that adversaries have a promotion-suppression effect (PSE) on neurons' activations that can be primarily categorized into three types: i)...

10.48550/arxiv.1904.02057 preprint EN other-oa arXiv (Cornell University) 2019-01-01

Bound propagation methods, when combined with branch and bound, are among the most effective methods to formally verify properties of deep neural networks such as correctness, robustness, and safety. However, existing works cannot handle the general form of cutting plane constraints widely accepted in traditional solvers, which are crucial for strengthening verifiers with tightened convex relaxations. In this paper, we generalize the bound propagation procedure to allow the addition of arbitrary cutting plane constraints, including those involving...

10.48550/arxiv.2208.05740 preprint EN other-oa arXiv (Cornell University) 2022-01-01
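
As a small, generic illustration of why cutting planes tighten a relaxation (a toy integer program solved with scipy, not the paper's verifier): adding one valid cut moves the LP bound from 1.5 to the true integer optimum of 1.0.

```python
# Toy cutting-plane demo: the LP relaxation of a small integer program gives a
# loose bound of 1.5; adding one cut that is valid for all integer points
# tightens the bound to 1.0, the true integer optimum.
from scipy.optimize import linprog

c = [-1.0, -1.0]                      # maximize x1 + x2  ->  minimize -(x1 + x2)
A = [[2.0, 2.0]]                      # original constraint: 2*x1 + 2*x2 <= 3
b = [3.0]

relaxed = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print("relaxed bound:", -relaxed.fun)           # 1.5 (fractional solution)

A_cut = A + [[1.0, 1.0]]              # valid cut for integer points: x1 + x2 <= 1
b_cut = b + [1.0]
tightened = linprog(c, A_ub=A_cut, b_ub=b_cut, bounds=[(0, None), (0, None)])
print("bound with cut:", -tightened.fun)        # 1.0 (matches the integer optimum)
```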

Deep neural networks (DNNs), although achieving human-level performance in many domains, have a very large model size that hinders their broader applications on edge computing devices. Extensive research work has been conducted on DNN model compression or pruning. However, most of the previous work took heuristic approaches. This work proposes a progressive weight pruning approach based on ADMM (Alternating Direction Method of Multipliers), a powerful technique to deal with non-convex optimization problems with potentially...

10.48550/arxiv.1810.07378 preprint EN other-oa arXiv (Cornell University) 2018-01-01
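
A compact, hypothetical sketch of the ADMM ingredient on a toy least-squares problem with a hard sparsity constraint; the paper's progressive scheme applies this idea layer-wise to DNNs over multiple pruning rounds.

```python
# Illustrative ADMM loop for weight pruning on a toy problem:
#   minimize ||X @ W - Y||^2  subject to  "W has at most k non-zeros".
# rho, the step size, and the iteration counts are arbitrary for this sketch.
import numpy as np

def project_topk(M, k):
    """Euclidean projection onto the set of matrices with at most k non-zeros."""
    out = np.zeros_like(M)
    idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-k:], M.shape)
    out[idx] = M[idx]
    return out

rng = np.random.default_rng(0)
X, W_true = rng.standard_normal((100, 20)), rng.standard_normal((20, 5))
Y = X @ W_true
W, Z, U = np.zeros((20, 5)), np.zeros((20, 5)), np.zeros((20, 5))
rho, k = 1.0, 30

for _ in range(200):
    # W-update: gradient step on the loss plus the augmented-Lagrangian term.
    grad = 2 * X.T @ (X @ W - Y) / len(X) + rho * (W - Z + U)
    W -= 0.01 * grad
    Z = project_topk(W + U, k)      # Z-update: projection onto the sparse set
    U = U + W - Z                   # dual update
print("non-zeros in pruned weights:", np.count_nonzero(project_topk(W, k)))
```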

Although deep neural networks (DNNs) have achieved great success in various computer vision tasks, it is recently found that they are vulnerable to adversarial attacks. In this paper, we focus on the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data (also known as data poisoning) such that the trained DNN induces misclassification when facing examples with this trigger. To be specific, we carefully study the effect of both real and synthetic backdoor attacks on the internal response...

10.48550/arxiv.2002.12162 preprint EN other-oa arXiv (Cornell University) 2020-01-01
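
For context on the threat model being analyzed (this is not the paper's detection method), a hypothetical sketch of how a small fraction of a training set is poisoned with a fixed trigger patch and relabeled to an attacker-chosen target class:

```python
# Sketch of backdoor data poisoning: stamp a small trigger patch onto a
# fraction of images and flip their labels to the attacker's target class.
# The fraction, patch size, and target class are illustrative choices.
import numpy as np

def poison_dataset(images, labels, target_class=0, fraction=0.05, rng=None):
    """images: (N, H, W, C) float array in [0, 1]; labels: (N,) int array."""
    rng = rng or np.random.default_rng()
    images, labels = images.copy(), labels.copy()
    n_poison = int(fraction * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -4:, -4:, :] = 1.0        # 4x4 white trigger in the corner
    labels[idx] = target_class            # relabel to the target class
    return images, labels, idx
```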

Formal verification of neural networks (NNs) is a challenging and important problem. Existing efficient complete solvers typically require the branch-and-bound (BaB) process, which splits the problem domain into sub-domains and solves each sub-domain using faster but weaker incomplete verifiers, such as Linear Programming (LP) on linearly relaxed sub-domains. In this paper, we propose to use the backward mode linear relaxation based perturbation analysis (LiRPA) to replace LP during the BaB process, which can be efficiently...

10.48550/arxiv.2011.13824 preprint EN other-oa arXiv (Cornell University) 2020-01-01
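
A toy sketch of the branch-and-bound skeleton referred to above, with a naive interval bound standing in for the much tighter LiRPA bounds used in the paper: split the domain whenever the incomplete bound is inconclusive, until every sub-domain is certified or refinement bottoms out.

```python
# Toy branch-and-bound verification in one dimension: prove f(x) > 0 for all x
# in [lo, hi] using a cheap interval bound as the incomplete verifier, and
# splitting the domain whenever that bound cannot decide.
def f_lower_bound(lo, hi):
    """Naive interval lower bound of f(x) = x**2 - x + 1 on [lo, hi]."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return sq_lo - hi + 1.0            # lower(x^2) + lower(-x) + 1

def verify(lo, hi, min_width=1e-4):
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if f_lower_bound(a, b) > 0.0:   # sub-domain certified, discard it
            continue
        if b - a < min_width:           # cannot certify a tiny region: give up
            return False
        mid = (a + b) / 2.0             # branch: split and check both halves
        stack += [(a, mid), (mid, b)]
    return True

print(verify(-1.0, 2.0))                # True: x**2 - x + 1 >= 0.75 everywhere
```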

Although deep neural networks have achieved great success on numerous large-scale tasks, poor interpretability is still a notorious obstacle for practical applications. In this paper, we propose a novel and general attention mechanism, loss-based attention, upon which we modify deep neural networks to mine significant image patches for explaining which parts determine the image-level decision-making. This is inspired by the fact that some patches contain the target objects or their parts that determine the image-level decision. Unlike previous attention mechanisms that adopt different layers and parameters...

10.1109/tip.2020.3046875 article EN cc-by IEEE Transactions on Image Processing 2020-12-31

Existing domain adaptation methods aim at learning features that can be generalized among domains. These methods commonly require to update the source classifier to adapt to the target domain and do not properly handle the trade-off between the source and the target domain. In this work, instead of training a classifier to adapt to the target domain, we use a separable component called data calibrator to help the fixed source classifier recover discrimination power in the target domain, while preserving the source domain's performance. When the difference between the two domains is small, the source classifier's representation is sufficient to perform well...

10.1109/cvpr42600.2020.01375 article EN 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020-06-01

Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose. In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern. Our results indicate that existing MIAs designed for GANs or VAEs are largely ineffective on diffusion models, either due to inapplicable scenarios (e.g., requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer distances between...

10.48550/arxiv.2302.01316 preprint EN other-oa arXiv (Cornell University) 2023-01-01
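
As a generic, hypothetical illustration of the membership-inference setting (a simple loss-threshold attack against an arbitrary model, not the query-efficient attack proposed for diffusion models in the paper):

```python
# Generic loss-threshold membership inference: samples whose loss under the
# trained model falls below a threshold are guessed to be training members.
# `model_loss` is a placeholder for any per-sample loss, e.g. a diffusion
# model's denoising error at a fixed timestep.
import numpy as np

def membership_guess(model_loss, samples, threshold):
    """Return True for samples predicted to be members of the training set."""
    losses = np.array([model_loss(s) for s in samples])
    return losses < threshold

def attack_accuracy(model_loss, members, non_members, threshold):
    hits = np.sum(membership_guess(model_loss, members, threshold))
    rejects = np.sum(~membership_guess(model_loss, non_members, threshold))
    return (hits + rejects) / (len(members) + len(non_members))
```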