- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Video Surveillance and Tracking Methods
- Domain Adaptation and Few-Shot Learning
- Visual Attention and Saliency Detection
- Advanced Image and Video Retrieval Techniques
- Generative Adversarial Networks and Image Synthesis
- Bacillus and Francisella Bacterial Research
- Advanced Neural Network Applications
- Natural Language Processing Techniques
- Human Pose and Action Recognition
- Digital Media Forensic Detection
- Tactile and Sensory Interactions
- Emotion and Mood Recognition
- Multimodal Machine Learning Applications
- Sentiment Analysis and Opinion Mining
- Face Recognition and Analysis
- Advanced Image Processing Techniques
- Machine Learning and Data Classification
- Topic Modeling
- Advanced Image Fusion Techniques
- Model Reduction and Neural Networks
- Computer Graphics and Visualization Techniques
- Biometric Identification and Security
- Image Retrieval and Classification Techniques
Fudan University
2021-2025
Kunming University of Science and Technology
2023-2025
Northwest Normal University
2025
Xinjiang University
2024
Shanghai Sixth People's Hospital
2024
Shanghai Jiao Tong University
2024
Beijing University of Posts and Telecommunications
2024
Shandong University of Traditional Chinese Medicine
2021-2024
Hebei University of Technology
2023
Guangzhou University of Chinese Medicine
2022
Abstract Background: In the urgent campaign to develop therapeutics against SARS-CoV-2, natural products have been an important source of new lead compounds. Results: We herein identified two natural products, ginkgolic acid and anacardic acid, as inhibitors in a high-throughput screen targeting the SARS-CoV-2 papain-like protease (PLpro). Moreover, our study demonstrated that the hit compounds are dual inhibitors of the 3-chymotrypsin-like protease (3CLpro) in addition to PLpro. A mechanism-of-action study and enzyme kinetics further characterized...
Malicious applications of deepfakes (i.e., technologies that generate target facial attributes or entire faces from images) have posed a huge threat to individuals' reputation and security. To mitigate these threats, recent studies have proposed adversarial watermarks to combat deepfake models, leading them to generate distorted outputs. Despite achieving impressive results, these watermarks suffer from low image-level and model-level transferability, meaning that they can protect only a single image against a specific model. To address these issues, we propose...
Patch attacks, one of the most threatening forms of physical attack among adversarial examples, can induce misclassification in networks by arbitrarily modifying pixels within a continuous region. Certifiable patch defenses can guarantee the robustness of a classifier, ensuring it is not affected by patch attacks. Existing certifiable patch defenses sacrifice the clean accuracy of classifiers and obtain only low certified accuracy on toy datasets. Furthermore, the accuracy of these methods is still significantly lower than that of normal classification networks, which limits...
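As a generic illustration of the threat model (not the certified-defense construction from this work), a patch attack overwrites every pixel in one contiguous region outright, unlike L_p-bounded perturbations that only nudge pixel values. The `apply_patch` helper below is a hypothetical NumPy sketch:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste an adversarial patch into a continuous rectangular region.

    All pixels inside the region are replaced arbitrarily; pixels
    outside the region are left untouched.
    """
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

# A 32x32 RGB image and a 5x5 patch of arbitrary (here constant) pixels.
img = np.zeros((32, 32, 3), dtype=np.float32)
patch = np.full((5, 5, 3), 0.5, dtype=np.float32)

attacked = apply_patch(img, patch, top=10, left=12)
changed = np.any(attacked != img, axis=-1)
print(changed.sum())  # 25: exactly the 5x5 region is modified
```

A certifiable defense must guarantee the prediction is unchanged for *any* patch content placed at *any* such location, which is why clean and certified accuracy tend to suffer.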
Context-Aware Emotion Recognition (CAER) is a crucial and challenging task that aims to perceive the emotional states of a target person with contextual information. Recent approaches invariably focus on designing sophisticated architectures or mechanisms to extract seemingly meaningful representations from subjects and contexts. However, a long-overlooked issue is that the context bias in existing datasets leads to a significantly unbalanced distribution of emotional states among different context scenarios. Concretely, the harmful confounder...
Current works on facial expression learning in video consume significant computational resources to learn spatial channel feature representations and temporal relationships. To mitigate this issue, we propose a Dual Path multi-excitation Collaborative Network (DPCNet) that learns the critical information for facial expression representation from fewer keyframes in videos. Specifically, the DPCNet learns the important regions and keyframes from a tuple of four view-grouped frames by multi-excitation modules, and produces dual-path representations of one tuple with consistency under two regularization...
Recently, adversarial attacks have been applied in visual object tracking to deceive deep trackers by injecting imperceptible perturbations into video frames. However, previous work only generates video-specific perturbations, which restricts its application scenarios. In addition, existing attacks are difficult to implement in reality due to the real-time requirement of tracking and the re-initialization mechanism of trackers. To address these issues, we propose an offline universal adversarial attack called the Efficient Universal Shuffle Attack. It takes...
Existing video object segmentation (VOS) benchmarks focus on short-term videos, which last only about 3-5 seconds and in which objects are visible most of the time. These videos are poorly representative of practical applications, and the absence of long-term datasets restricts further investigation of VOS in realistic scenarios. So, in this paper, we present a new benchmark dataset named LVOS, which consists of 220 videos with a total duration of 421 minutes. To the best of our knowledge, LVOS is the first densely annotated long-term VOS dataset. The videos in LVOS last 1.59...
Driver distraction has become a significant cause of severe traffic accidents over the past decade. Despite the growing development of vision-driven driver monitoring systems, the lack of comprehensive perception datasets restricts road safety and security research. In this paper, we present an AssIstive Driving pErception dataset (AIDE) that considers context information both inside and outside the vehicle in naturalistic scenarios. AIDE facilitates holistic driver monitoring through three distinctive characteristics, including multi-view...
Learning a policy with great generalization to unseen environments remains challenging but critical in visual reinforcement learning. Despite the success of augmentation combination in supervised learning for improving generalization, naively applying it to RL algorithms may damage training efficiency, with the agent suffering from severe performance degradation. In this paper, we first conduct a qualitative analysis and illuminate the main causes: (i) high-variance gradient magnitudes and (ii) gradient conflicts that exist among various augmentation methods. To...
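For context, the kind of image augmentation applied in visual RL can be illustrated with the standard pad-and-random-crop ("random shift") transform popularized by methods such as DrQ. The NumPy sketch below is a generic illustration, not this paper's technique:

```python
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """Pad-and-crop augmentation common in visual RL.

    The (C, H, W) observation is edge-padded by `pad` pixels, then a
    random crop of the original size is taken, shifting the content by
    up to +/- pad pixels in each spatial direction.
    """
    rng = rng or np.random.default_rng()
    c, h, w = obs.shape
    padded = np.pad(obs, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    top = int(rng.integers(0, 2 * pad + 1))
    left = int(rng.integers(0, 2 * pad + 1))
    return padded[:, top:top + h, left:left + w]

obs = np.arange(3 * 84 * 84, dtype=np.float32).reshape(3, 84, 84)
aug = random_shift(obs, pad=4, rng=np.random.default_rng(0))
print(aug.shape)  # (3, 84, 84): same shape, spatially shifted content
```

Combining several such transforms naively means each minibatch sees different, possibly conflicting views, which is one way the gradient-variance and gradient-conflict issues described above can arise.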
Nowadays, general object detectors like YOLO and Faster R-CNN, as well as their variants, are widely exploited in many applications. Many works have revealed that these detectors are extremely vulnerable to adversarial patch attacks. The perturbed regions generated by previous patch-based attacks are very large, which is not necessary for attacking and makes them perceptible to human eyes. To generate much less but more efficient perturbation, we propose a novel patch-based method for detectors. Firstly, a patch selection and refining scheme finds the pixels...
Adversarial Robustness Distillation (ARD) is a novel method to boost the robustness of small models. Unlike general adversarial training, its robust knowledge transfer is less easily restricted by model capacity. However, the teacher model that provides the robust knowledge does not always make correct predictions, which interferes with the student's robust performance. Besides, in previous ARD methods, the robustness comes entirely from one-to-one imitation, ignoring the relationship between examples. To this end, we propose a novel structured ARD method called...
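The one-to-one imitation that plain ARD relies on is typically a temperature-scaled KL divergence between the teacher's predictions on clean inputs and the student's predictions on adversarial inputs. A minimal NumPy sketch of that generic loss follows (hypothetical names; not the structured loss this work proposes):

```python
import numpy as np

def softmax(z, t=1.0):
    """Numerically stable temperature-scaled softmax over the last axis."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ard_loss(student_adv_logits, teacher_clean_logits, temperature=2.0):
    """One-to-one imitation loss: KL(teacher(x) || student(x_adv)),
    averaged over the batch."""
    p = softmax(teacher_clean_logits, temperature)  # teacher targets on clean x
    q = softmax(student_adv_logits, temperature)    # student outputs on x_adv
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl))

rng = np.random.default_rng(0)
t_logits = rng.normal(size=(8, 10))
s_logits = rng.normal(size=(8, 10))
print(ard_loss(s_logits, t_logits))  # positive KL for mismatched predictions
print(ard_loss(t_logits, t_logits))  # 0.0 when the student matches the teacher
```

Note that this per-example loss has no term relating different examples to each other, which is exactly the limitation the abstract points at.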
While existing face anti-spoofing (FAS) methods have achieved high performance on in-domain datasets, good generalization is crucial for their real-world application. Previous domain-generalizable FAS methods attempted to identify the common features of live samples from different domains in the spatial domain, but finding such features is challenging. To address this issue, we propose a solution to the problem from the frequency domain. Specifically, a novel Frequency-domain Augmentation Masked Image Model Framework (FAMIM)...
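A frequency-domain manipulation of the general kind alluded to can be sketched with a plain 2D FFT mask; the `mask_low_frequencies` helper below is illustrative only and is not the FAMIM augmentation itself:

```python
import numpy as np

def mask_low_frequencies(image, radius=8):
    """Zero out a centered low-frequency square in the 2D spectrum,
    then invert the FFT: one simple form of frequency-domain masking."""
    spec = np.fft.fftshift(np.fft.fft2(image))  # move DC to the center
    h, w = image.shape
    cy, cx = h // 2, w // 2
    spec[cy - radius:cy + radius, cx - radius:cx + radius] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

img = np.random.default_rng(0).uniform(size=(64, 64))
masked = mask_low_frequencies(img, radius=8)
print(masked.shape)  # (64, 64); low-frequency (smooth) content removed
```

Masking bands of the spectrum lets a model be trained to rely on frequency components that generalize across capture devices and domains, rather than on domain-specific spatial appearance.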
Adversarial Robustness Distillation (ARD) is a promising task to solve the issue of the limited adversarial robustness of small-capacity models while avoiding the expensive computational costs of Adversarial Training (AT). Despite their good robust performance, existing ARD methods are still impractical to deploy in natural high-security scenes, because these methods rely entirely on the original data or publicly available data with a similar distribution. In fact, such data are almost always private, specific, and distinctive for the scenes that require high robustness....
Deep neural networks (DNNs) have been shown to be highly vulnerable to imperceptible adversarial perturbations. As a complementary type of adversary, patch attacks that introduce perceptible perturbations to the images have attracted the interest of researchers. Existing patch attacks rely on the architecture of the model or the probabilities of predictions, and perform poorly in the decision-based setting, which can still construct a perturbation with the minimal information exposed: the top-1 predicted label. In this work, we first explore the decision-based patch attack. To...
Deep neural networks are vulnerable to adversarial examples that exhibit transferability across various models. Numerous approaches have been proposed to enhance the transferability of adversarial examples, including advanced optimization, data augmentation, and model modifications. However, these methods still show limited transferability, particularly in cross-architecture scenarios, such as from CNNs to ViTs. To achieve high transferability, we propose a technique termed Spatial Adversarial Alignment (SAA), which employs an alignment loss and leverages...
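For illustration only (the exact SAA loss is not reproduced here), a generic feature-alignment loss between two architecturally different models can be written as the mean of (1 - cosine similarity) over their per-sample features:

```python
import numpy as np

def cosine_alignment_loss(feat_a, feat_b, eps=1e-12):
    """Generic feature-alignment loss: 1 - cosine similarity per sample,
    averaged over the batch. Lower means the two feature sets agree more."""
    a = feat_a / (np.linalg.norm(feat_a, axis=-1, keepdims=True) + eps)
    b = feat_b / (np.linalg.norm(feat_b, axis=-1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))

rng = np.random.default_rng(0)
cnn_feats = rng.normal(size=(4, 128))  # e.g. features from a CNN surrogate
vit_feats = rng.normal(size=(4, 128))  # e.g. features from a ViT (hypothetical)
print(cosine_alignment_loss(cnn_feats, cnn_feats))  # ~0.0 for identical features
print(cosine_alignment_loss(cnn_feats, vit_feats))  # near 1.0 for independent ones
```

Losses of this shape are a common way to couple two models' intermediate representations during attack crafting, so that the resulting perturbation is not overfitted to one architecture.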
Diffusion Models (DMs) have impressive capabilities among generative models, but are limited by slower inference speeds and higher computational costs. Previous works utilize one-shot structure pruning to derive lightweight DMs from pre-trained ones, but this approach often leads to a significant drop in generation quality and may result in the removal of crucial weights. Thus, we propose an iterative pruning method based on gradient flow, including a gradient-flow process and a pruning criterion. We employ a progressive soft pruning strategy to maintain the continuity...
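Progressive soft pruning, in its generic form, scales down rather than hard-zeroes the smallest-magnitude weights at each step, so the weight distribution stays continuous across iterations. The NumPy sketch below (hypothetical `soft_prune_step`, not the paper's gradient-flow criterion) illustrates the idea:

```python
import numpy as np

def soft_prune_step(weights, sparsity, decay=0.5):
    """One step of progressive *soft* magnitude pruning: the
    smallest-magnitude fraction of weights is scaled by `decay`
    instead of being set to zero outright."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    out = weights.copy()
    out[np.abs(out) <= thresh] *= decay
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))
w1 = soft_prune_step(w, sparsity=0.3)
w2 = soft_prune_step(w1, sparsity=0.3)  # repeated steps shrink small weights further
print(np.abs(w2).min() < np.abs(w).min())  # True: small weights were attenuated
```

Iterating this step drives the targeted weights smoothly toward zero, which is gentler on generation quality than one-shot removal of whole structures.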
Recent work indicates that video recognition models are vulnerable to adversarial examples, posing a serious security risk to downstream applications. However, current research has primarily focused on attacks, with limited work exploring defense mechanisms. Furthermore, due to the spatial-temporal complexity of videos, existing defense methods suffer from high computational cost, overfitting, and limited performance. Recently, diffusion-based purification methods have achieved robust performance in the image domain. Considering the additional temporal...