- Adversarial Robustness in Machine Learning
- Security and Verification in Computing
- Advanced Malware Detection Techniques
- Advanced Neural Network Applications
- Anomaly Detection Techniques and Applications
- Multimodal Machine Learning Applications
- Advanced Image and Video Retrieval Techniques
- Video Surveillance and Tracking Methods
- Network Security and Intrusion Detection
- Radiation Detection and Scintillator Technologies
- Software-Defined Networks and 5G
- Semiconductor Materials and Devices
- Video Analysis and Summarization
- Physical Unclonable Functions (PUFs) and Hardware Security
- Advanced Memory and Neural Computing
- Smart Grid Security and Resilience
Fudan University
2024
Nanjing University of Science and Technology
2022-2024
Due to their low latency and high privacy preservation, there is currently a burgeoning demand for deploying deep learning (DL) models on ubiquitous edge Internet of Things (IoT) devices. However, DL models are often large in size and require large-scale computation, which prevents them from being placed directly onto IoT devices, where resources are constrained and 32-bit floating-point (float-32) operations are unavailable. Commercial-framework (i.e., a set of toolkits) empowered model quantization is a pragmatic...
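As a concrete illustration of the kind of commercial-toolkit quantization the abstract refers to, below is a minimal sketch of post-training float-32 to int-8 conversion. TensorFlow Lite is used only as an illustrative framework, and the model and calibration data are placeholders, not part of the original work.

```python
# Minimal sketch: post-training float-32 -> int-8 quantization with a
# commercial toolkit (TensorFlow Lite, used here purely for illustration).
import numpy as np
import tensorflow as tf

# Any trained float-32 Keras model stands in for the model to be deployed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

def representative_data():
    # A small calibration set is needed to choose int-8 scaling factors;
    # random data keeps this sketch self-contained.
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_int8_model = converter.convert()  # int-8 model suitable for MCU-class devices
open("model_int8.tflite", "wb").write(tflite_int8_model)
```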
Deep learning models with backdoors act maliciously when triggered but seem normal otherwise. This risk, often increased by model outsourcing, challenges their secure use. Although countermeasures exist, defense against adaptive attacks is under-examined, possibly leading to security misjudgments. This study is the first intricate examination illustrating the difficulty of detecting backdoors in outsourced models, especially when attackers adjust their strategies, even if their capabilities are significantly limited. It...
Though deep neural network models exhibit outstanding performance for various applications, their large model size and extensive floating-point operations render deployment on mobile computing platforms, and in particular Internet of Things devices, a major challenge. One appealing solution is model quantization, which reduces the model size and uses integer operations commonly supported by microcontrollers. To this end, a 1-bit quantized DNN model, or deep binary neural network (BNN), maximizes memory efficiency, where each parameter in a BNN model has only 1 bit. In...
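To make the 1-bit quantization concrete, here is a minimal sketch of binarizing weights with a straight-through estimator, which is the usual way BNN-style layers are trained. PyTorch and the specific layer shape are assumptions of this sketch, not details from the abstract.

```python
# Minimal sketch: 1-bit (binary) weight quantization with a
# straight-through estimator, loosely in the spirit of a BNN layer.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)            # each weight becomes +1 or -1 (1 bit)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only where |w| <= 1.
        return grad_out * (w.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return nn.functional.linear(x, w_bin, self.bias)

layer = BinaryLinear(16, 4)
out = layer(torch.randn(2, 16))         # forward pass uses binarized weights
```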
Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models. However, their robustness is currently inadvertently ignored, which can introduce severe consequences, e.g., a countermeasure can be misused and result in a false implication of backdoor detection. For the first time, we critically examine the robustness of existing countermeasures. As an initial study, we identify five potential...
Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models. However, the robustness of these countermeasures is inadvertently ignored, which can introduce severe consequences, e.g., a countermeasure can be misused and result in a false implication of backdoor detection. For the first time, we critically examine existing countermeasures, with an initial focus on three influential model-inspection ones that are...
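For readers unfamiliar with model-inspection countermeasures, the sketch below shows the trigger reverse-engineering step that such defenses commonly perform (a Neural Cleanse style mask-and-pattern optimization). The optimizer settings, input size, and anomaly rule are assumptions of this sketch and are not taken from the abstract.

```python
# Minimal sketch of trigger reverse engineering as used by model-inspection
# defenses (Neural Cleanse style; hyperparameters are illustrative assumptions).
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target_class, steps=200, lam=1e-2):
    mask = torch.zeros(1, 1, 32, 32, requires_grad=True)      # where the trigger sits
    pattern = torch.zeros(1, 3, 32, 32, requires_grad=True)   # what the trigger looks like
    opt = torch.optim.Adam([mask, pattern], lr=0.1)
    for _ in range(steps):
        for x, _ in loader:
            m = torch.sigmoid(mask)
            x_adv = (1 - m) * x + m * torch.tanh(pattern)
            y_t = torch.full((x.size(0),), target_class, dtype=torch.long)
            # Force all inputs to the target class using the smallest possible mask.
            loss = F.cross_entropy(model(x_adv), y_t) + lam * m.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(mask).detach(), torch.tanh(pattern).detach()

# A class whose reversed mask is abnormally small is then flagged as backdoored.
```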
Currently, there is a burgeoning demand for deploying deep learning (DL) models on ubiquitous edge Internet of Things (IoT) devices, attributed to their low latency and high privacy preservation. However, DL models are often large in size and require large-scale computation, which prevents them from being placed directly onto IoT devices, where resources are constrained and 32-bit floating-point (float-32) operations are unavailable. Commercial-framework (i.e., a set of toolkits) empowered model quantization is a pragmatic...
In recent years, text-to-image (T2I) generation models have made significant progress in generating high-quality images that align with text descriptions. However, these models also face the risk of unsafe generation, potentially producing harmful content that violates usage policies, such as explicit material. Existing safe generation methods typically focus on suppressing inappropriate content by erasing undesired concepts from visual representations, while neglecting to sanitize the textual representation. Although these methods help...
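One simple way to sanitize a textual representation, shown purely as an illustrative sketch and not necessarily the method of this work, is to project the unsafe-concept direction out of the prompt's text embedding. The CLIP checkpoint name and the example concept below are assumptions of the sketch.

```python
# Minimal sketch: removing an unsafe concept direction from a text embedding
# by orthogonal projection (illustrative only; not claimed to be this paper's method).
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

def embed(prompt: str) -> torch.Tensor:
    tokens = tokenizer(prompt, return_tensors="pt", padding=True)
    return text_encoder(**tokens).pooler_output.squeeze(0)

concept = embed("nudity")                        # direction of the unsafe concept
concept = concept / concept.norm()

def sanitize(prompt: str) -> torch.Tensor:
    e = embed(prompt)
    # Subtract the component of the prompt embedding along the unsafe direction.
    return e - (e @ concept) * concept

safe_embedding = sanitize("a photo of a person on a beach")
```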
All current backdoor attacks on deep learning (DL) models fall under the category of a vertical class backdoor (VCB). In VCB attacks, any sample from a class activates the implanted backdoor when the secret trigger is present, regardless of whether the attack is of the source-class-agnostic or source-class-specific sub-type. For example, a sunglasses trigger could mislead a facial recognition model when either an arbitrary person (source-class-agnostic) or a specific person (source-class-specific) wears the sunglasses. Existing defense strategies overwhelmingly focus...
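To make the VCB sub-types concrete, the sketch below poisons a training set with a patch trigger either for samples of any source class (source-class-agnostic) or only a chosen source class (source-class-specific). The trigger shape, poisoning rate, and dummy data are assumptions of this sketch.

```python
# Minimal sketch of vertical class backdoor (VCB) poisoning: a patch trigger
# relabels samples to the attacker's target class, either for any source class
# (source-class-agnostic) or only one chosen class (source-class-specific).
import numpy as np

def stamp_trigger(x, size=4):
    x = x.copy()
    x[-size:, -size:, :] = 1.0           # white patch in the bottom-right corner
    return x

def poison(images, labels, target_class, source_class=None, rate=0.1):
    images, labels = images.copy(), labels.copy()
    candidates = (np.arange(len(labels)) if source_class is None
                  else np.where(labels == source_class)[0])
    picked = np.random.choice(candidates, int(rate * len(candidates)), replace=False)
    for i in picked:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class          # trigger presence -> target class
    return images, labels

x = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
x_agnostic, y_agnostic = poison(x, y, target_class=0)                  # any class
x_specific, y_specific = poison(x, y, target_class=0, source_class=3)  # one class
```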
Deep Neural Network (DNN) models are often deployed in resource-sharing clouds as Machine Learning as a Service (MLaaS) to provide inference services. To steal model architectures, which are valuable intellectual properties, a class of attacks has been proposed via different side-channel leakage, posing a serious security challenge to MLaaS. Also targeting MLaaS, we propose a new end-to-end attack, DeepTheft, to accurately recover complex DNN model architectures on general processors via the RAPL-based power side channel. However, an...
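For context on the RAPL-based power side channel, below is a minimal sketch of sampling an energy trace from the Linux powercap RAPL interface. The sysfs path is the usual Intel package-0 counter, typically requires elevated privileges, and the sampling parameters are assumptions of the sketch, not details of the attack.

```python
# Minimal sketch: sampling an energy trace from the Linux powercap RAPL
# interface, illustrating the kind of time series a power side channel observes.
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # Intel package-0 counter

def read_energy_uj() -> int:
    with open(RAPL_ENERGY) as f:
        return int(f.read())

def sample_trace(duration_s=1.0, interval_s=0.001):
    trace, t_end = [], time.time() + duration_s
    prev = read_energy_uj()
    while time.time() < t_end:
        time.sleep(interval_s)
        cur = read_energy_uj()
        trace.append(cur - prev)          # energy consumed in this interval (uJ)
        prev = cur
    return trace

if __name__ == "__main__":
    print(sample_trace()[:10])
```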
Deep neural networks (DNNs) are susceptible to backdoor attacks, where malicious functionality is embedded to allow attackers to trigger incorrect classifications. Old-school attacks use strong trigger features that can easily be learned by victim models. Despite their robustness against input variation, this strength however increases the likelihood of unintentional trigger activations. This leaves traces for existing defenses, which find approximate replacements for the original triggers that can activate the backdoor without being identical, via, e.g., reverse...
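One way to observe the behavior described here is to compare attack success rate under the exact trigger and under a perturbed, approximate one; a strong, easily learned trigger tends to keep firing under perturbation, which is exactly what reverse-engineering defenses exploit. The sketch below is illustrative only; the model, data loader, and noise level are assumptions.

```python
# Minimal sketch: attack success rate (ASR) with the exact trigger versus a
# perturbed approximation of it.
import torch

def stamp(x, trigger, mask):
    return (1 - mask) * x + mask * trigger

@torch.no_grad()
def attack_success_rate(model, loader, trigger, mask, target_class):
    hits = total = 0
    for x, _ in loader:
        preds = model(stamp(x, trigger, mask)).argmax(dim=1)
        hits += (preds == target_class).sum().item()
        total += x.size(0)
    return hits / total

# Hypothetical usage, assuming a backdoored `model` and a `test_loader`:
# asr_exact  = attack_success_rate(model, test_loader, trigger, mask, 0)
# noisy      = (trigger + 0.3 * torch.randn_like(trigger)).clamp(0, 1)
# asr_approx = attack_success_rate(model, test_loader, noisy, mask, 0)
```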
The development of unsupervised hashing is advanced by the recent popular contrastive learning paradigm. However, previous contrastive-learning-based works have been hampered by (1) insufficient data similarity mining based on global-only image representations, and (2) the hash code semantic loss caused by data augmentation. In this paper, we propose a novel method, namely Weighted Contrastive Hashing (WCH), to take a step towards solving these two problems. We introduce a mutual attention module to alleviate the problem...
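As background for the contrastive-hashing setting, here is a minimal sketch of an InfoNCE-style loss over tanh-relaxed hash codes from two augmented views, with an optional pairwise weight as a stand-in for the re-weighting that WCH introduces. The temperature, code length, and weighting hook are assumptions of this sketch, not the paper's exact formulation.

```python
# Minimal sketch: a contrastive loss over tanh-relaxed hash codes for two
# augmented views of the same images; the optional per-pair weights only hint
# at the weighting idea and are an assumption of this sketch.
import torch
import torch.nn.functional as F

def contrastive_hash_loss(codes_a, codes_b, temperature=0.2, weights=None):
    # codes_*: (N, bits) continuous relaxations in [-1, 1], e.g. tanh outputs.
    a = F.normalize(codes_a, dim=1)
    b = F.normalize(codes_b, dim=1)
    logits = a @ b.t() / temperature           # (N, N) view-to-view similarities
    targets = torch.arange(a.size(0))          # matching views are positives
    loss = F.cross_entropy(logits, targets, reduction="none")
    if weights is not None:                    # optional pairwise re-weighting
        loss = loss * weights
    return loss.mean()

codes_a = torch.tanh(torch.randn(8, 64))       # 64-bit codes for view 1
codes_b = torch.tanh(torch.randn(8, 64))       # 64-bit codes for view 2
print(contrastive_hash_loss(codes_a, codes_b))
```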