Huming Qiu

ORCID: 0009-0004-5385-9414
Research Areas
  • Adversarial Robustness in Machine Learning
  • Security and Verification in Computing
  • Advanced Malware Detection Techniques
  • Advanced Neural Network Applications
  • Anomaly Detection Techniques and Applications
  • Multimodal Machine Learning Applications
  • Advanced Image and Video Retrieval Techniques
  • Video Surveillance and Tracking Methods
  • Network Security and Intrusion Detection
  • Radiation Detection and Scintillator Technologies
  • Software-Defined Networks and 5G
  • Semiconductor Materials and Devices
  • Video Analysis and Summarization
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Advanced Memory and Neural Computing
  • Smart Grid Security and Resilience

Fudan University
2024

Nanjing University of Science and Technology
2022-2024

Due to their low latency and high privacy preservation, there is currently a burgeoning demand for deploying deep learning (DL) models on ubiquitous edge Internet of Things (IoT) devices. However, DL models are often large in size and require large-scale computation, which prevents them from being placed directly onto IoT devices, where resources are constrained and 32-bit floating-point (float-32) operations are unavailable. Commercial frameworks (i.e., sets of toolkits) have empowered model quantization as a pragmatic...

10.1109/tdsc.2023.3271956 article EN IEEE Transactions on Dependable and Secure Computing 2023-05-01
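
The quantization this abstract refers to maps float-32 weights to low-bit integers that microcontrollers can execute. As a rough illustration only, not the paper's scheme, a minimal int8 affine quantizer in NumPy might look like the sketch below; `quantize_int8` and `dequantize` are hypothetical helper names:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine post-training quantization of float-32 weights to int8.

    Hypothetical helper for illustration; returns the int8 tensor plus the
    (scale, zero_point) pair needed to dequantize it.
    """
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_int8(w)
print("max reconstruction error:", np.abs(dequantize(q, s, z) - w).max())
```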

Deep learning models with backdoors act maliciously when triggered but seem normal otherwise. This risk, often increased by model outsourcing, challenges their secure use. Although countermeasures exist, defense against adaptive attacks is under-examined, possibly leading to security misjudgments. This study is the first intricate examination illustrating the difficulty of detecting backdoors in outsourced models, especially when attackers adjust their strategies, even if their capabilities are significantly limited. It...

10.1109/tifs.2024.3349869 article EN IEEE Transactions on Information Forensics and Security 2024-01-01

Though deep neural network (DNN) models exhibit outstanding performance for various applications, their large model size and extensive floating-point operations render deployment on mobile computing platforms, and Internet of Things devices in particular, a major challenge. One appealing solution is model quantization, which reduces the model size and uses integer operations commonly supported by microcontrollers. To this end, a 1-bit quantized DNN, or binary neural network (BNN), maximizes memory efficiency, where each parameter of a BNN has only 1 bit. In...

10.1109/tcad.2022.3197499 article EN IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 2022-08-09
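
For context on the 1-bit extreme mentioned above, a common BNN recipe binarizes each weight to {-1, +1} with a per-tensor scaling factor. The sketch below illustrates that generic recipe, not the cited paper's exact method:

```python
import numpy as np

def binarize(w: np.ndarray):
    """Binarize float-32 weights to {-1, +1} with a per-tensor scaling factor.

    The common XNOR-Net-style recipe (sign plus mean of absolute values);
    illustrative only, not the cited paper's exact method.
    """
    alpha = float(np.abs(w).mean())               # scale preserving magnitude
    b = np.where(w >= 0, 1, -1).astype(np.int8)   # one bit per parameter
    return b, alpha

w = np.random.randn(3, 3).astype(np.float32)
b, alpha = binarize(w)                # a forward pass would use alpha * b
```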

Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models. However, their robustness is currently inadvertently ignored, which can introduce severe consequences, e.g., a countermeasure can be misused and result in a false implication of backdoor detection. For the first time, we critically examine the robustness of existing countermeasures. As an initial study, we identify five potential...

10.1109/tifs.2023.3324318 article EN IEEE Transactions on Information Forensics and Security 2023-10-13

Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models. However, the robustness of these countermeasures is inadvertently ignored, which can introduce severe consequences, e.g., a countermeasure can be misused and result in a false implication of backdoor detection. For the first time, we critically examine existing countermeasures, with an initial focus on three influential model-inspection ones that are...

10.48550/arxiv.2204.06273 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Currently, there is a burgeoning demand for deploying deep learning (DL) models on ubiquitous edge Internet of Things (IoT) devices, attributed to their low latency and high privacy preservation. However, DL models are often large in size and require large-scale computation, which prevents them from being placed directly onto IoT devices, where resources are constrained and 32-bit floating-point (float-32) operations are unavailable. Commercial frameworks (i.e., sets of toolkits) have empowered model quantization as a pragmatic...

10.48550/arxiv.2108.09187 preprint EN other-oa arXiv (Cornell University) 2021-01-01

In recent years, text-to-image (T2I) generation models have made significant progress in generating high-quality images that align with text descriptions. However, these models also face the risk of unsafe generation, potentially producing harmful content that violates usage policies, such as explicit material. Existing safe generation methods typically focus on suppressing inappropriate content by erasing undesired concepts from visual representations, while neglecting to sanitize the textual representation. Although these methods help...

10.48550/arxiv.2411.10329 preprint EN arXiv (Cornell University) 2024-11-15
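
To make the idea of sanitizing the textual representation concrete, one naive approach is to project an undesired concept direction out of a text embedding. This toy sketch is an assumption on my part, not the paper's method; real systems operate inside the T2I text encoder, and the function name and shapes here are hypothetical:

```python
import numpy as np

def sanitize_embedding(text_emb: np.ndarray, concept_dir: np.ndarray) -> np.ndarray:
    """Project an undesired concept direction out of a single text embedding.

    Toy illustration: `text_emb` is a (D,) embedding vector and `concept_dir`
    a (D,) direction for the unwanted concept (e.g., estimated from
    embeddings of explicit prompts).
    """
    d = concept_dir / np.linalg.norm(concept_dir)
    return text_emb - (text_emb @ d) * d   # orthogonal projection
```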

All current backdoor attacks on deep learning (DL) models fall under the category of a vertical class backdoor (VCB). In VCB attacks, any sample from the victim class activates the implanted backdoor when the secret trigger is present, regardless of whether it is of the sub-type source-class-agnostic or source-class-specific backdoor. For example, sunglasses as a trigger could mislead a facial recognition model when either an arbitrary (source-class-agnostic) or a specific (source-class-specific) person wears the sunglasses. Existing defense strategies overwhelmingly focus...

10.1145/3658644.3670361 article EN 2024-12-02
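
To illustrate the trigger mechanics the abstract describes, the following toy sketch stamps a trigger patch onto an image and relabels it to the attacker's target class, as in source-class-agnostic data poisoning; all names here are hypothetical:

```python
import numpy as np

def poison(image: np.ndarray, label: int, target: int, patch: np.ndarray):
    """Toy VCB-style poisoning: stamp a trigger patch and relabel to the target.

    In the source-class-agnostic setting shown here, every stamped sample is
    relabeled; a source-class-specific variant would relabel only when
    `label` equals the chosen victim class.
    """
    x = image.copy()
    ph, pw = patch.shape[:2]
    x[-ph:, -pw:] = patch      # overlay the trigger (e.g., a sunglasses patch)
    return x, target           # any stamped sample now maps to the target class
```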

Deep Neural Network (DNN) models are often deployed in resource-sharing clouds as Machine Learning as a Service (MLaaS) to provide inference services. To steal model architectures that are valuable intellectual property, a class of attacks has been proposed via different side-channel leakage, posing a serious security challenge to MLaaS. Also targeting MLaaS, we propose a new end-to-end attack, DeepTheft, to accurately recover complex DNN architectures on general processors via the RAPL-based power side channel. However, an...

10.48550/arxiv.2309.11894 preprint EN other-oa arXiv (Cornell University) 2023-01-01
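
On Linux, RAPL energy counters of the kind DeepTheft exploits are exposed under /sys/class/powercap. A minimal sketch of sampling a coarse power trace from the package-level counter follows; the actual attack's trace processing and architecture-recovery model are far more involved:

```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package energy counter (Intel, Linux)

def read_energy_uj() -> int:
    # Requires read permission; recent kernels restrict this file to root.
    with open(RAPL) as f:
        return int(f.read())

def sample_power_trace(n: int = 1000, dt: float = 0.001) -> list[float]:
    """Turn successive energy readings into a coarse power trace (watts).

    Illustrative only: the counter wraps around, which this sketch ignores,
    and time.sleep gives only a nominal sampling interval.
    """
    trace = []
    prev = read_energy_uj()
    for _ in range(n):
        time.sleep(dt)
        cur = read_energy_uj()
        trace.append((cur - prev) / (dt * 1e6))  # microjoules / seconds -> watts
        prev = cur
    return trace
```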

Deep neural networks (DNNs) are susceptible to backdoor attacks, where malicious functionality is embedded to allow attackers to trigger incorrect classifications. Old-school attacks use strong trigger features that can easily be learned by victim models. Despite robustness against input variation, this however increases the likelihood of unintentional activations. This leaves traces for existing defenses, which find approximate replacements for the original triggers that can activate the backdoor without being identical, via, e.g., reverse...

10.48550/arxiv.2312.04902 preprint EN other-oa arXiv (Cornell University) 2023-01-01
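
The reverse-engineering defenses alluded to at the end of the abstract typically optimize a small mask and pattern that flip clean inputs to a suspected target label, as in Neural Cleanse. Below is a compact PyTorch sketch of that defense family, with assumed shapes and hyperparameters, not the exact procedure of any cited work:

```python
import torch

def reverse_engineer_trigger(model, images, target, steps=500, lam=1e-3):
    """Optimize a mask and pattern that flip `images` to label `target`.

    `model` is any frozen classifier taking NCHW float tensors; a small
    optimized mask norm suggests a backdoor trigger for that label.
    """
    mask = torch.zeros(1, 1, *images.shape[2:], requires_grad=True)
    pattern = torch.zeros(1, *images.shape[1:], requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=0.1)
    y = torch.full((images.shape[0],), target, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask)                       # soft mask in [0, 1]
        x = (1 - m) * images + m * torch.tanh(pattern)
        loss = torch.nn.functional.cross_entropy(model(x), y) + lam * m.abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask).detach(), torch.tanh(pattern).detach()
```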

The development of unsupervised hashing is advanced by the recent popular contrastive learning paradigm. However, previous contrastive-learning-based works have been hampered by (1) insufficient data similarity mining based on global-only image representations, and (2) hash code semantic loss caused by data augmentation. In this paper, we propose a novel method, namely Weighted Contrastive Hashing (WCH), to take a step towards solving these two problems. We introduce a mutual attention module to alleviate the problem...

10.48550/arxiv.2209.14099 preprint EN other-oa arXiv (Cornell University) 2022-01-01
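
As a rough illustration of the contrastive-hashing setup, the toy loss below weights pairwise similarities between relaxed hash codes from two augmented views. It gestures at the WCH idea but is not the paper's objective, and the mutual attention module that would produce `weights` is omitted:

```python
import torch
import torch.nn.functional as F

def weighted_contrastive_hash_loss(h1, h2, weights, tau=0.3):
    """Toy weighted contrastive loss over relaxed hash codes.

    h1, h2: (N, bits) tanh-relaxed codes from two augmented views;
    weights: (N, N) pair weights (e.g., from a mutual attention module).
    """
    z1, z2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    logits = z1 @ z2.t() / tau                 # all-pairs similarity
    log_prob = F.log_softmax(logits, dim=1)
    return -(weights * log_prob).sum(dim=1).mean()
```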