- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Advanced Neural Network Applications
- Domain Adaptation and Few-Shot Learning
- Natural Language Processing Techniques
- Topic Modeling
- Integrated Circuits and Semiconductor Failure Analysis
- Generative Adversarial Networks and Image Synthesis
- Sparse and Compressive Sensing Techniques
- Advanced Malware Detection Techniques
- Advanced Image and Video Retrieval Techniques
- Stochastic Gradient Optimization Techniques
- Explainable Artificial Intelligence (XAI)
- Advanced Graph Neural Networks
- Privacy-Preserving Technologies in Data
- COVID-19 diagnosis using AI
- Neural Networks and Applications
- Vestibular and auditory disorders
- Speech and dialogue systems
- Digital Media Forensic Detection
- Model Reduction and Neural Networks
- Multimodal Machine Learning Applications
- Cryptography and Data Security
- Autonomous Vehicle Technology and Safety
- Misinformation and Its Impacts
- Drexel University (2022-2024)
- Wenzhou University (2024)
- Northwestern Polytechnical University (2024)
- University of California, Irvine (2023)
- Henan University of Science and Technology (2023)
- Universidad del Noreste (2019-2021)
- Chinese Academy of Sciences (2021)
- Institute of Computing Technology (2021)
- Tianjin First Center Hospital (2016-2020)
- Tianjin Medical University (2016-2020)
Graph neural networks (GNNs), which apply deep learning to graph data, have achieved significant performance on the task of semi-supervised node classification. However, only a few works have addressed the adversarial robustness of GNNs. In this paper, we first present a novel gradient-based attack method that tackles the difficulty of optimizing over discrete graph data. Compared with current attacks on GNNs, our results show that with only a small number of edge perturbations, including both addition and deletion, our optimization-based...
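The gist of such a gradient-guided topology attack can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact method: it assumes a dense float adjacency matrix with self-loops already added, relaxes edge flips into a continuous perturbation variable, and greedily flips the highest-gradient edges.

```python
# Minimal sketch of a gradient-guided topology attack on a two-layer GCN.
# Assumptions: dense adjacency A with self-loops, features X, pretrained
# weights W1/W2; `budget` edge flips are chosen in a single shot.
import torch
import torch.nn.functional as F

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN with symmetric normalization on a dense adjacency."""
    D = A.sum(1)
    A_hat = A / torch.sqrt(D[:, None] * D[None, :])
    return A_hat @ torch.relu(A_hat @ X @ W1) @ W2    # logits

def topology_attack(A, X, W1, W2, labels, budget=5):
    P = torch.zeros_like(A, requires_grad=True)       # relaxed edge flips
    A_pert = A + (1 - 2 * A) * P                      # P=1 flips an edge
    loss = F.cross_entropy(gcn_forward(A_pert, X, W1, W2), labels)
    loss.backward()
    # P.grad estimates how much flipping each edge increases the loss
    idx = torch.topk(P.grad.triu(1).flatten(), budget).indices
    rows, cols = idx // A.size(0), idx % A.size(0)
    A_new = A.detach().clone()
    A_new[rows, cols] = 1 - A_new[rows, cols]
    A_new[cols, rows] = A_new[rows, cols]             # keep graph undirected
    return A_new
```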
Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Therefore, ensuring the trustworthiness of LLMs emerges as an important topic. This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion...
It is well known that deep neural networks (DNNs) are vulnerable to adversarial attacks, which are implemented by adding crafted perturbations onto benign examples. Min-max robust optimization based adversarial training can provide a notion of security against such attacks. However, adversarial robustness requires a significantly larger network capacity than natural training with only benign examples. This paper proposes a framework of concurrent adversarial training and weight pruning that enables model compression while still preserving adversarial robustness, and it essentially tackles the dilemma...
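A hedged sketch of the two ingredients, assuming a standard PyTorch classifier: the inner PGD maximization used in min-max adversarial training, and a simple magnitude-pruning pass. The paper integrates these into one optimization framework; they are shown separately here for clarity.

```python
# Sketch: PGD inner maximization + layer-wise magnitude pruning (simplified).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-inf PGD: the inner maximization of min-max training."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project back
    return x_adv

def magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights, layer by layer."""
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:                          # skip biases / norms
                k = max(1, int(p.numel() * sparsity))
                thresh = p.abs().flatten().kthvalue(k).values
                p.mul_((p.abs() > thresh).float())
```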
When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example. However, such attacks, which perturb the raw input space, may fail to capture structural information hidden in the input. This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images, aiming to extract key spatial structures. An ADMM...
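To make the group-sparsity idea concrete, the following sketches the proximal (shrinkage) step an ADMM solver would apply to the perturbation: whole pixel groups are zeroed unless their energy exceeds a threshold. Non-overlapping blocks are used here as an assumption for brevity, rather than StrAttack's sliding mask.

```python
# Group soft-thresholding: proximal operator of tau * sum_g ||delta_g||_2
# over non-overlapping pixel blocks (simplified group structure).
import torch

def group_soft_threshold(delta, group=8, tau=0.1):
    """delta: (c, h, w) perturbation, h and w divisible by `group`."""
    c, h, w = delta.shape
    blocks = delta.unfold(1, group, group).unfold(2, group, group)  # (c,nh,nw,g,g)
    norms = blocks.pow(2).sum(dim=(0, 3, 4), keepdim=True).sqrt()
    scale = (1 - tau / norms.clamp_min(1e-12)).clamp_min(0)  # block shrinkage
    blocks = blocks * scale                                  # kill weak groups
    return blocks.permute(0, 1, 3, 2, 4).reshape(c, h, w)    # fold back
```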
Deep neural networks (DNNs), as the basis of object detection, will play a key role in the development of future autonomous systems with full autonomy. Such systems have special requirements for real-time, energy-efficient implementations of DNNs on power-budgeted hardware. Two research thrusts are dedicated to performance and energy-efficiency enhancement of the DNN inference phase: the first is model compression techniques, while the second is efficient hardware implementations. Recent research on extremely-low-bit CNNs, such as binary networks...
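As one concrete point in the extremely-low-bit design space, the snippet below sketches weight binarization with a straight-through estimator, a common trick in binary networks. It is a generic illustration, not a specific design from this work.

```python
# Straight-through estimator (STE) for weight binarization: forward with
# sign(W), backprop as if binarization were the identity (clipped to [-1,1]).
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return w.sign()                              # weights in {-1, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()     # gate grads outside [-1,1]

# usage: y = x @ BinarizeSTE.apply(weight).T  inside a custom layer
```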
Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds on output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense. The majority of LiRPA-based methods focus on simple feed-forward networks and need particular manual derivations and implementations when extended to other architectures. In this paper, we develop an automatic framework to enable perturbation analysis on any network structure, by...
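For intuition, the snippet below propagates interval bounds through an affine/ReLU network. Interval arithmetic is the loosest member of this family, while LiRPA methods compute tighter linear bounds, but the layer-by-layer propagation structure is analogous.

```python
# Interval bound propagation (IBP) through alternating Linear/ReLU layers:
# the simplest sound bound in the perturbation-analysis family.
import torch

def interval_bounds(layers, x, eps):
    """Propagate the box [x - eps, x + eps] and return output bounds."""
    lb, ub = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, torch.nn.Linear):
            center, radius = (lb + ub) / 2, (ub - lb) / 2
            mid = center @ layer.weight.T + layer.bias
            rad = radius @ layer.weight.abs().T      # worst-case spread
            lb, ub = mid - rad, mid + rad
        else:   # ReLU is monotone, so bounds pass through elementwise
            lb, ub = lb.clamp_min(0), ub.clamp_min(0)
    return lb, ub
```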
Robust machine learning is currently one of the most prominent topics, as it could potentially help shape future advanced AI platforms that not only perform well in average cases but also in worst-case or adversarial situations. Despite this long-term vision, however, existing studies on black-box adversarial attacks are still restricted to very specific threat-model settings (e.g., a single distortion metric and a restrictive assumption on the target model's feedback to queries) and/or suffer from prohibitively high...
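The core primitive behind most query-based black-box attacks is zeroth-order gradient estimation: with access only to loss values, the attacker approximates gradients via random finite differences. A minimal sketch, with `loss_fn` standing in for a hypothetical query interface:

```python
# Zeroth-order gradient estimate via random-direction finite differences.
import torch

def zo_gradient(loss_fn, x, num_queries=50, mu=1e-3):
    """Estimate the gradient of loss_fn at x using only function queries."""
    grad = torch.zeros_like(x)
    f0 = loss_fn(x)                        # baseline query
    for _ in range(num_queries):
        u = torch.randn_like(x)            # random probing direction
        grad += (loss_fn(x + mu * u) - f0) / mu * u
    return grad / num_queries              # averaged directional estimates
```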
Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized natural language understanding and generation. They possess deep language comprehension, human-like text generation capabilities, contextual awareness, and robust problem-solving skills, making them invaluable in various domains (e.g., search engines, customer support, translation). In the meantime, LLMs have also gained traction in the security community, revealing security vulnerabilities and showcasing their potential in security-related tasks. This paper...
Recently, many graph-based hashing methods have emerged to tackle large-scale problems. However, there exist two major bottlenecks: (1) directly learning discrete codes is an NP-hard optimization problem; (2) the complexity of both storage and computation time to build an affinity matrix with n data points is O(n^2). To address these problems, in this paper we propose a novel yet simple supervised method, asymmetric hashing, which preserves the pairwise constraints of the affinity matrix while learning compact binary...
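The asymmetry can be illustrated as follows: database items are stored as cheap binary codes, queries keep real-valued projections, and similarity is an asymmetric inner product. This is a simplified stand-in for the paper's learned solver; the random projection `W` below is purely illustrative.

```python
# Asymmetric retrieval sketch: binary database codes vs. real-valued queries.
import numpy as np

rng = np.random.default_rng(0)
X_db = rng.standard_normal((1000, 64))       # database features
X_q = rng.standard_normal((5, 64))           # query features
W = rng.standard_normal((64, 32))            # illustrative 32-bit projection

B = np.sign(X_db @ W)                        # binary codes in {-1, +1}
Q = X_q @ W                                  # real-valued query embeddings
scores = Q @ B.T                             # asymmetric inner-product score
top5 = np.argsort(-scores, axis=1)[:, :5]    # 5 nearest database items/query
```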
It is widely known that convolutional neural networks (CNNs) are vulnerable to adversarial examples: images with imperceptible perturbations crafted to fool classifiers. However, the interpretability of these attacks is less explored in the literature. This work aims to better understand the roles of adversarial perturbations and to provide visual explanations from the pixel, image, and network perspectives. We show that adversaries have a promotion-suppression effect (PSE) on neurons' activations that can be primarily categorized into three types: i)...
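One simple way to probe such promotion and suppression is to compare a layer's activations on a benign image and its adversarial counterpart. A hedged sketch using generic PyTorch hooks, not the paper's exact analysis pipeline:

```python
# Rank neurons by how strongly an adversarial input raised (promoted) or
# lowered (suppressed) their activations relative to the benign input.
import torch

def activation_shift(model, layer, x, x_adv, k=10):
    store = []
    hook = layer.register_forward_hook(lambda m, i, o: store.append(o.detach()))
    with torch.no_grad():
        model(x); model(x_adv)               # two forward passes, same hook
    hook.remove()
    benign, adv = store
    shift = (adv - benign).flatten(1).mean(0)   # + promoted, - suppressed
    return shift.topk(k).indices, (-shift).topk(k).indices
```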
Bound propagation methods, when combined with branch and bound, are among the most effective methods to formally verify properties of deep neural networks, such as correctness, robustness, and safety. However, existing works cannot handle the general form of cutting plane constraints widely accepted in traditional solvers, which are crucial for strengthening verifiers with tightened convex relaxations. In this paper, we generalize the bound propagation procedure to allow the addition of arbitrary constraints, including those involving...
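A toy LP illustrates why cutting planes matter: adding a valid inequality can only tighten the relaxation's optimal bound. This is a generic solver demo, unrelated to the paper's bound-propagation machinery.

```python
# Effect of one cutting plane on an LP relaxation's bound.
from scipy.optimize import linprog

c = [-1.0, -1.0]                          # maximize x + y == minimize -(x + y)
base = linprog(c, bounds=[(0, 1), (0, 1)])
cut = linprog(c, A_ub=[[1.0, 1.0]], b_ub=[1.5], bounds=[(0, 1), (0, 1)])
print(-base.fun, -cut.fun)                # 2.0 without the cut, 1.5 with it
```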
Deep neural networks (DNNs), although achieving human-level performance in many domains, have very large model sizes that hinder their broader applications on edge computing devices. Extensive research has been conducted on DNN model compression and pruning. However, most of the previous work took heuristic approaches. This paper proposes a progressive weight pruning approach based on ADMM (Alternating Direction Method of Multipliers), a powerful technique for dealing with non-convex optimization problems with potentially...
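In the ADMM decomposition, the non-convex sparsity constraint is handled by an auxiliary variable Z whose update is a simple Euclidean projection, while W is trained with a quadratic penalty; the dual variable U is updated as U += W - Z each round. A minimal sketch of those two pieces:

```python
# ADMM pruning building blocks: sparsity projection and quadratic penalty.
import torch

def admm_z_update(W, U, k):
    """Project W + U onto {Z : ||Z||_0 <= k} by keeping the k largest entries."""
    V = W + U
    thresh = V.abs().flatten().kthvalue(V.numel() - k).values
    return V * (V.abs() > thresh).float()

def admm_penalty(W, Z, U, rho=1e-3):
    """Augmented-Lagrangian term added to the task loss in the W-update."""
    return (rho / 2) * (W - Z + U).pow(2).sum()
```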
Although deep neural networks (DNNs) have achieved great success in various computer vision tasks, it was recently found that they are vulnerable to adversarial attacks. In this paper, we focus on the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data (also known as data poisoning) such that the trained DNN induces misclassification when facing examples with this trigger. To be specific, we carefully study the effect of both real and synthetic backdoor attacks on the internal response...
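For concreteness, a generic trigger-injection routine is sketched below: a small patch is stamped on a fraction of the training images and their labels are remapped to the attacker's target class. The square-patch trigger is an assumption for illustration, not the specific real/synthetic triggers studied in the paper.

```python
# Generic backdoor poisoning: stamp a trigger patch and relabel a subset.
import torch

def poison(images, labels, target=0, rate=0.05, patch=3):
    """images: (n, c, h, w) floats in [0, 1]; poisons `rate` of them."""
    idx = torch.randperm(images.size(0))[: int(rate * images.size(0))]
    images, labels = images.clone(), labels.clone()
    images[idx, :, -patch:, -patch:] = 1.0   # trigger: bottom-right square
    labels[idx] = target                     # relabel to the target class
    return images, labels
```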
Formal verification of neural networks (NNs) is a challenging and important problem. Existing efficient complete solvers typically require the branch-and-bound (BaB) process, which splits the problem domain into sub-domains and solves each sub-domain using faster but weaker incomplete verifiers, such as Linear Programming (LP) on linearly relaxed sub-domains. In this paper, we propose to use backward-mode linear relaxation based perturbation analysis (LiRPA) to replace LP during the BaB process, which can be efficiently...
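The surrounding BaB loop itself is simple; the snippet below sketches it with an abstract `lb_fn` standing in for any sound incomplete verifier (LP, or LiRPA-style bounds) and `split_fn` for the branching rule. Both names are assumptions for illustration.

```python
# Generic branch and bound over sub-domains with a sound lower-bound oracle.
def branch_and_bound(lb_fn, split_fn, root, max_domains=10_000):
    """Verify margin > 0 over `root` by recursively splitting sub-domains."""
    queue = [root]
    while queue:
        if len(queue) > max_domains:
            return "unknown"                 # give up: too many sub-domains
        dom = queue.pop()
        if lb_fn(dom) > 0:
            continue                         # verified on this sub-domain
        children = split_fn(dom)
        if not children:
            return "falsified_or_unknown"    # cannot split further
        queue.extend(children)
    return "verified"
```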
Although deep neural networks have achieved great success on numerous large-scale tasks, their poor interpretability remains a notorious obstacle to practical applications. In this paper, we propose a novel and general attention mechanism, loss-based attention, upon which we modify networks to mine the significant image patches that explain which parts determine the decision-making. This is inspired by the fact that some patches contain objects or object parts that determine the image-level decision. Unlike previous attention mechanisms that adopt different layers and parameters...
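A minimal reading of the mechanism, assuming patch-level logits are available: weight each patch by the softmax of its negative classification loss, so patches consistent with the image label dominate the pooled decision. This is a sketch of the idea, not the paper's exact formulation.

```python
# Loss-derived attention over image patches (simplified illustration).
import torch
import torch.nn.functional as F

def loss_based_attention(patch_logits, label):
    """patch_logits: (num_patches, num_classes); label: 0-dim class index."""
    labels = label.expand(patch_logits.size(0))
    patch_loss = F.cross_entropy(patch_logits, labels, reduction="none")
    attn = torch.softmax(-patch_loss, dim=0)          # low loss -> high weight
    image_logits = (attn[:, None] * patch_logits).sum(0)
    return image_logits, attn                         # attn explains patches
```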
Existing domain adaptation methods aim at learning features that can be generalized among domains. These methods commonly require updating the source classifier to adapt to the target domain, and they do not properly handle the trade-off between the two domains. In this work, instead of training a classifier for each domain, we use a separable component called a data calibrator to help the fixed source classifier recover its discrimination power in the target domain while preserving the source domain's performance. When the difference between the two domains is small, the source classifier's representation is sufficient to perform well...
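Structurally, the idea looks like the sketch below: the source classifier is frozen and only a small additive calibrator is trained on target data. Entropy minimization is used here as an assumed stand-in objective, and the loader is assumed to yield unlabeled target image batches.

```python
# Train an additive calibrator against a frozen source classifier.
from itertools import cycle
import torch

def train_calibrator(classifier, calibrator, target_loader, steps=100):
    classifier.eval()
    for p in classifier.parameters():
        p.requires_grad_(False)                   # source model stays fixed
    opt = torch.optim.Adam(calibrator.parameters(), lr=1e-4)
    batches = cycle(target_loader)                # unlabeled target batches
    for _ in range(steps):
        x = next(batches)
        logits = classifier(x + calibrator(x))    # calibrated target input
        probs = torch.softmax(logits, dim=1)
        loss = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()  # entropy
        opt.zero_grad(); loss.backward(); opt.step()
```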
Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose. In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern. Our results indicate that existing MIAs designed for GANs or VAEs are largely ineffective on diffusion models, either due to inapplicable scenarios (e.g., requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer distances between...
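A common attack primitive in this setting is a loss-threshold score: training members tend to incur lower denoising error at a fixed timestep. The sketch below assumes an epsilon-prediction model `model(x_t, t)` and the standard DDPM forward process; thresholding the returned score gives the membership decision.

```python
# Loss-threshold membership score against a diffusion model (sketch).
import torch

def mia_score(model, x0, t, alpha_bar_t):
    """Lower score suggests x0 was a training member."""
    a = torch.as_tensor(alpha_bar_t, dtype=x0.dtype)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise   # diffuse x0 to step t
    with torch.no_grad():
        pred = model(x_t, t)                       # predicted noise
    return (pred - noise).pow(2).mean().item()     # denoising error
```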