- Advanced Malware Detection Techniques
- Cryptography and Data Security
- Privacy-Preserving Technologies in Data
- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Stochastic Gradient Optimization Techniques
- Cryptographic Implementations and Security
- Multimedia Communication and Technology
- Network Security and Intrusion Detection
- Bacillus and Francisella bacterial research
- Internet Traffic Analysis and Secure E-voting
- Telecommunications and Broadcasting Technologies
- Ferroelectric and Negative Capacitance Devices
- Advanced Memory and Neural Computing
- Security and Verification in Computing
- Forensic Toxicology and Drug Analysis
- Autonomous Vehicle Technology and Safety
- Advanced Frequency and Time Standards
- Forensic and Genetic Research
- Advanced Neural Network Applications
- Digital Media Forensic Detection
- Power Line Communications and Noise
- Radiation Effects in Electronics
- Multimodal Machine Learning Applications
- Biometric Identification and Security
Nanyang Technological University
2021-2025
Wuhan University of Science and Technology
2024-2025
As an increasingly prevalent technology in intelligent autonomous transportation systems, vehicle platooning has been shown to significantly reduce fuel consumption as well as heighten highway safety and throughput. However, existing efforts rarely focus on protecting the confidentiality and authenticity of data exchanged within platoons. How to ensure secure, high-fidelity platoon-level communication is still in its infancy. This paper makes a first attempt at efficient and secure communication across platoons. Specifically, we present...
Robotic Vehicles (RVs) have gained great popularity over the past few years. Meanwhile, they have also been demonstrated to be vulnerable to sensor spoofing attacks. Although a wealth of research works have presented various attacks, some key questions remain unanswered: are these existing works complete enough to cover all the threats? If not, how many attacks remain unexplored, and how difficult is it to realize them? This paper answers the above questions by comprehensively systematizing the knowledge of sensor spoofing attacks against RVs. Our contributions are threefold. (1) We...
Multi-objective evolutionary algorithms (MOEAs) are widely used for searching optimal solutions in complex multi-component applications. Traditional MOEAs applied to multi-component deep learning (MCDL) systems face challenges in enhancing search efficiency while maintaining diversity. To combat these, this paper proposes $\mu$MOEA, the first LLM-empowered adaptive algorithm to detect safety violations in MCDL systems. Inspired by the context-understanding ability of Large Language Models (LLMs), $\mu$MOEA promotes the LLM...
Multi-objective evolutionary algorithms (MOEAs) are widely used for searching optimal solutions in complex multi-component applications. Traditional MOEAs applied to multi-component deep learning (MCDL) systems face challenges in enhancing search efficiency while maintaining diversity. To combat these, this paper proposes the first LLM-empowered adaptive algorithm to detect safety violations in MCDL systems. Inspired by the context-understanding ability of Large Language Models (LLMs), our approach promotes the LLM to comprehend...
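For context, the following is a minimal sketch of the generic multi-objective evolutionary loop that such approaches build upon; the two toy objectives and the mutation scheme are illustrative assumptions and do not represent the LLM-empowered algorithm described in these papers.

```python
# Minimal multi-objective evolutionary loop (illustrative only).
# Individuals are real-valued vectors; the two hypothetical objectives
# are both minimized and merely stand in for conflicting search goals.
import random

def evaluate(x):
    # Two toy conflicting objectives (assumption, for illustration).
    f1 = sum(v * v for v in x)
    f2 = sum((v - 2.0) ** 2 for v in x)
    return (f1, f2)

def dominates(a, b):
    # a Pareto-dominates b: no worse in every objective, better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mutate(x, sigma=0.3):
    return [v + random.gauss(0.0, sigma) for v in x]

def moea(pop_size=20, dim=3, generations=50):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(random.choice(pop)) for _ in range(pop_size)]
        union = pop + offspring
        scored = [(ind, evaluate(ind)) for ind in union]
        # Keep the non-dominated front first, then fill with the remainder.
        front = [ind for ind, f in scored
                 if not any(dominates(g, f) for _, g in scored if g != f)]
        rest = [ind for ind in union if ind not in front]
        pop = (front + rest)[:pop_size]
    return pop

if __name__ == "__main__":
    candidates = moea()
    print(len(candidates), "candidate solutions retained")
```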
Autonomous Vehicles (AVs) are closely connected in the Cooperative Intelligent Transportation System (C-ITS). They are equipped with various sensors and controlled by Autonomous Driving Systems (ADSs) to provide high-level autonomy. The vehicles exchange different types of real-time data with each other, which can help reduce traffic accidents and congestion, and improve the efficiency of transportation systems. However, when interacting with the environment, AVs suffer from a broad attack surface, with sensory data susceptible to anomalies caused...
Modern autonomous vehicles adopt state-of-the-art DNN models to interpret sensor data and perceive the environment. However, these models are vulnerable to different types of adversarial attacks, which pose significant risks to the security and safety of vehicles and passengers. One prominent threat is the backdoor attack, where an adversary can compromise the model by poisoning the training samples. Although a lot of effort has been devoted to the investigation of backdoor attacks on conventional computer vision tasks, their practicality and applicability to the autonomous driving scenario have rarely...
In this paper, we address the problem of privacy-preserving federated neural network training with $N$ users. We present Hercules, an efficient and high-precision framework that can tolerate collusion of up to $N-1$ users. Hercules follows POSEIDON proposed by Sav et al. (NDSS'21), but makes a qualitative leap in performance with the following contributions: (i) we design a novel parallel homomorphic...
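As background on packed homomorphic evaluation (the abstract is cut off at the packing contribution), the following plaintext sketch shows the standard diagonal (Halevi-Shoup) packing trick for a SIMD matrix-vector product, with np.roll standing in for ciphertext slot rotation; this is generic background, not the scheme actually proposed in Hercules.

```python
# Plaintext illustration of diagonal (Halevi-Shoup) packing for a SIMD
# homomorphic matrix-vector product. np.roll stands in for ciphertext
# slot rotation; no real encryption is performed here.
import numpy as np

def diagonal_pack(W):
    """Pack a square matrix into its generalized diagonals."""
    n = W.shape[0]
    return [np.array([W[i, (i + d) % n] for i in range(n)]) for d in range(n)]

def packed_matvec(diagonals, x):
    """Compute W @ x using only slot-wise multiplies, rotations, and adds."""
    n = len(x)
    acc = np.zeros(n)
    for d, diag in enumerate(diagonals):
        acc += diag * np.roll(x, -d)   # rotate by d, then slot-wise product
    return acc

if __name__ == "__main__":
    W = np.arange(16, dtype=float).reshape(4, 4)
    x = np.array([1.0, 2.0, 3.0, 4.0])
    assert np.allclose(packed_matvec(diagonal_pack(W), x), W @ x)
    print("packed result:", packed_matvec(diagonal_pack(W), x))
```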
In this paper, we study the problem of secure ML inference against a malicious client and a semi-trusted server, such that the client only learns the inference output while the server learns nothing. This problem was first formulated by Lehmkuhl et al. with a solution (MUSE, USENIX Security'21), whose performance was then substantially improved by Chandran et al.'s work (SIMC, USENIX Security'22). However, there still exists a nontrivial gap in these efforts...
In this paper, we present VerifyML, the first secure inference framework to check the fairness degree of a given machine learning (ML) model. VerifyML is generic and immune to any obstruction by a malicious model holder during the verification process. We rely on secure two-party computation (2PC) technology to implement it, and carefully customize a series of optimization methods to boost its performance for both linear and nonlinear layer...
Backdoor attacks against deep neural network (DNN) models have been widely studied. Various attack techniques have been proposed for different domains and paradigms, e.g., image, point cloud, natural language processing, transfer learning, etc. The most widely used way to embed a backdoor into a DNN model is to poison the training data. Existing attacks usually randomly select samples from the benign set for poisoning, without considering the distinct contribution of each sample to the attack effectiveness, making the attack less optimal. A recent work...
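To make the random-selection poisoning step concrete, here is a minimal sketch of stamping a trigger onto a fraction of benign images and relabeling them to the attacker's target class; the array shapes and the 3x3 white-square trigger are assumptions for illustration, not the cited work's trigger design.

```python
# Illustrative data-poisoning step for a backdoor attack with random
# sample selection (the baseline the abstract criticizes).
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)  # random selection
    for i in idx:
        images[i, -3:, -3:] = 1.0          # 3x3 trigger in the bottom-right corner
        labels[i] = target_class           # relabel to the attacker-chosen class
    return images, labels, idx

if __name__ == "__main__":
    imgs = np.random.rand(1000, 28, 28).astype(np.float32)
    lbls = np.random.randint(0, 10, size=1000)
    p_imgs, p_lbls, poisoned_idx = poison_dataset(imgs, lbls, target_class=7)
    print(f"poisoned {len(poisoned_idx)} of {len(imgs)} samples")
```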
Cloud computing is a widely accepted and promising paradigm that offers a substantial amount of storage and data services on demand. To preserve data confidentiality, many cryptosystems have been introduced. However, current solutions are incompatible with resource-constrained end-devices because of a variety of vulnerabilities in terms of practicality and security. In this paper, we propose a practical secure data-sharing system by introducing a new design of attribute-based encryption with verifiable outsourced...
Autonomous Vehicles (AVs) are equipped with various sensors and controlled by Autonomous Driving Systems (ADSs) to provide high-level autonomy. When interacting with the environment, AVs suffer from a broad attack surface, with sensory data susceptible to anomalies caused by faults, sensor malfunctions, or attacks, which may jeopardize traffic safety and result in serious accidents. Most current works focus on anomaly detection for specific threats, such as GPS spoofing or sign attacks. There is no scenario-aware anomaly detection approach for ADSs. In this paper,...
Video anomaly detection (VAD) is an essential but challenging task. Existing prevalent methods focus on analyzing the reconstruction or prediction difference between normal and abnormal patterns through multiple deep features, e.g., optical flow. However, these approaches use each feature independently to characterize attributes and ignore the mutuality among features. Therefore, the constructed representation is limited to indirectly representing patterns from isolated features, which makes it difficult for the network to capture the high-level causes of...
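The reconstruction-difference idea mentioned above can be illustrated with a minimal anomaly-scoring sketch; a PCA projection stands in here for the deep reconstruction model so the example stays dependency-free, and the dimensions are illustrative assumptions rather than the paper's architecture.

```python
# Minimal reconstruction-error anomaly scoring, the basic idea behind
# reconstruction-based VAD: fit a model on normal frames only and flag
# frames that the model reconstructs poorly.
import numpy as np

def fit_normal_model(normal_frames, k=16):
    """Fit a rank-k linear 'autoencoder' (PCA) on normal frames only."""
    mean = normal_frames.mean(axis=0)
    _, _, vt = np.linalg.svd(normal_frames - mean, full_matrices=False)
    return mean, vt[:k]                      # principal components as the codebook

def anomaly_score(frames, mean, components):
    """Per-frame reconstruction error: a large error suggests an anomaly."""
    centered = frames - mean
    recon = centered @ components.T @ components
    return np.linalg.norm(centered - recon, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.normal(size=(500, 64)).astype(np.float32)
    test = np.vstack([rng.normal(size=(5, 64)), rng.normal(loc=4.0, size=(5, 64))])
    mean, comps = fit_normal_model(normal)
    scores = anomaly_score(test.astype(np.float32), mean, comps)
    print("scores:", np.round(scores, 2))   # the shifted frames score higher
```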
The safety of Autonomous Driving Systems (ADSs) is critically important for the deployment of autonomous vehicles (AVs). Therefore, ADSs must be evaluated thoroughly before their release and deployment to the public. How to generate diverse safety-critical test scenarios is a key task in ADS testing. This paper proposes LEADE, an LLM-enhanced scenario generation approach for ADS testing, which adopts an adaptive evolutionary search over scenarios. LEADE leverages the LLM's ability in program understanding to better...
Federated Learning (FL) suffers from severe performance degradation due to data heterogeneity among clients. Some existing work suggests that the fundamental reason is that data heterogeneity can cause local model drift, and therefore proposes to calibrate the direction of local updates to solve this problem. Though effective, these methods generally take the model as a whole, which lacks a deep understanding of how the neurons within classification models evolve during training to form model drift. In this paper, we bridge this gap by performing an intuitive theoretical...
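For readers unfamiliar with the drift phenomenon, the following toy sketch runs one FedAvg round over clients with heterogeneous local objectives and measures how each client's update direction aligns with the averaged global update; the quadratic local objectives are assumptions for illustration and the paper's neuron-level analysis is not reproduced here.

```python
# Toy illustration of local model drift under FedAvg: cosine similarity
# between each client's update and the averaged (global) update.
import numpy as np

def local_update(w_global, client_opt, lr=0.1, steps=5):
    """A few gradient steps on a client-specific quadratic objective."""
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * (w - client_opt)           # gradient of 0.5 * ||w - opt||^2
    return w

def fedavg_round(w_global, client_optima):
    locals_ = [local_update(w_global, opt) for opt in client_optima]
    deltas = [w - w_global for w in locals_]
    global_delta = np.mean(deltas, axis=0)
    drift = [float(np.dot(d, global_delta) /
                   (np.linalg.norm(d) * np.linalg.norm(global_delta) + 1e-12))
             for d in deltas]                 # cosine alignment per client
    return w_global + global_delta, drift

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w = np.zeros(8)
    heterogeneous_optima = [rng.normal(loc=c, size=8) for c in (-2, 0, 3)]
    w, drift = fedavg_round(w, heterogeneous_optima)
    print("per-client cosine alignment with global update:", np.round(drift, 2))
```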
Physical adversarial patches have emerged as a key attack to cause misclassification of traffic sign recognition (TSR) systems in the real world. However, existing patches have poor stealthiness and attack all vehicles indiscriminately once deployed. In this paper, we introduce an invisible triggered physical patch (ITPatch) with a novel attack vector, i.e., fluorescent ink, to advance the state of the art. It applies carefully designed perturbations to a target sign, and the attacker can later trigger the attack effect using ultraviolet light,...
Federated learning (FL) enables the training of deep models on distributed clients to preserve data privacy. However, this paradigm is vulnerable to backdoor attacks, where malicious clients can upload poisoned local models to embed backdoors into the global model, leading to attacker-desired predictions. Existing attacks mainly focus on FL with independently and identically distributed (IID) scenarios, while real-world data distributions are typically non-IID. Current strategies for non-IID settings suffer from limitations in maintaining effectiveness...
Fine-tuning is an essential process to improve the performance of Large Language Models (LLMs) in specific domains, with Parameter-Efficient Fine-Tuning (PEFT) gaining popularity due to its capacity to reduce computational demands through the integration of low-rank adapters. These lightweight adapters, such as LoRA, can be shared and utilized on open-source platforms. However, adversaries could exploit this mechanism to inject backdoors into these adapters, resulting in malicious behaviors like incorrect or harmful...
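The low-rank adapter mechanism referenced above can be sketched as follows: the effective weight is the frozen base weight plus a scaled product of two small matrices, and only those small matrices are trained and shared; the dimensions and scaling below are illustrative assumptions, not a specific released adapter.

```python
# Minimal LoRA-style low-rank adapter on a frozen linear layer:
# effective weight = W + (alpha / r) * B @ A, with only A and B trainable.
import numpy as np

class LoRALinear:
    def __init__(self, W_frozen, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W_frozen.shape
        self.W = W_frozen                                   # frozen base weights
        self.A = rng.normal(scale=0.01, size=(r, d_in))     # trainable
        self.B = np.zeros((d_out, r))                       # zero-init => no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

if __name__ == "__main__":
    W = np.random.default_rng(2).normal(size=(16, 32))
    layer = LoRALinear(W)
    x = np.ones((1, 32))
    print("output shape:", layer.forward(x).shape)   # (1, 16)
```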
Existing defense approaches against sensor spoofing attacks suffer from the limitations of covering only specific attack types, requiring GPU computation, exhibiting considerable detection latency, and struggling with interpretability in corner cases. We developed PhyScout, a holistic framework to overcome the above limitations. Our framework capitalizes on the observation that human drivers can rapidly and accurately identify such attacks by performing spatio-temporal consistency checks on their environment. We commence by defining a generalized...
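To give a flavor of a spatio-temporal consistency check, the toy sketch below predicts an object's next position from its recent track under a constant-velocity assumption and flags observations whose residual exceeds a threshold; the threshold and 2D-position track are assumptions for illustration, not PhyScout's actual consistency rules.

```python
# Toy spatio-temporal consistency check: compare a new observation
# against the position predicted from the object's recent track.
import numpy as np

def consistency_check(track, observation, threshold=1.0):
    """track: (T, 2) recent positions; observation: (2,) new measurement."""
    velocity = track[-1] - track[-2]             # constant-velocity model
    predicted = track[-1] + velocity
    residual = float(np.linalg.norm(observation - predicted))
    return residual <= threshold, residual

if __name__ == "__main__":
    track = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2]])
    ok, r = consistency_check(track, np.array([3.0, 0.3]))        # consistent
    spoofed_ok, r2 = consistency_check(track, np.array([8.0, 5.0]))  # inconsistent
    print(ok, round(r, 2), spoofed_ok, round(r2, 2))
```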
In this paper, we study the problem of secure ML inference against a malicious client and a semi-trusted server, such that the client only learns the inference output while the server learns nothing. This problem was first formulated by Lehmkuhl et al. with a solution (MUSE, USENIX Security'21), whose performance was then substantially improved by Chandran et al.'s work (SIMC, USENIX Security'22). However, there still exists a nontrivial gap in these efforts towards practicality, giving rise to challenges of overhead reduction and acceleration in an all-round way....
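As background on the basic two-party building block such secure-inference protocols rely on, the following sketch shows additive secret sharing of a vector and a linear layer applied locally to each share; for simplicity the weights are treated as public in this toy, which deliberately does not capture the private-model, malicious-client setting of MUSE and SIMC.

```python
# Background sketch of additive secret sharing: a vector is split into
# two random shares that individually reveal nothing, and a public
# linear map can be applied to each share independently.
import numpy as np

Q = 2 ** 16                                    # plaintext modulus (assumption)

def share(x, rng):
    r = rng.integers(0, Q, size=x.shape)       # uniformly random mask
    return (x - r) % Q, r                      # (client share, server share)

def reconstruct(s0, s1):
    return (s0 + s1) % Q

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = np.array([5, 17, 42], dtype=np.int64)
    W = np.array([[1, 0, 2], [0, 3, 1]], dtype=np.int64)   # public in this toy
    s0, s1 = share(x, rng)
    y0, y1 = (W @ s0) % Q, (W @ s1) % Q        # each party works on its own share
    assert np.array_equal(reconstruct(y0, y1), (W @ x) % Q)
    print("reconstructed W @ x:", reconstruct(y0, y1))
```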
The rapidly expanding number of Internet of Things (IoT) devices is generating huge quantities of data, but this raises data privacy and security exposure concerns in IoT devices, especially in automatic driving systems. Federated learning (FL) is a paradigm that addresses data privacy, security, access rights, and heterogeneous message issues by integrating a global model based on distributed nodes. However, poisoning attacks on FL can undermine these benefits, destroying the global model's availability and disrupting model training. To avoid the above issues,...