- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Face Recognition and Analysis
- Advanced Malware Detection Techniques
- Biometric Identification and Security
- Cryptography and Data Security
- Digital Media Forensic Detection
- Privacy-Preserving Technologies in Data
- User Authentication and Security Systems
- Coding Theory and Cryptography
- Cryptographic Implementations and Security
- Cryptography and Residue Arithmetic
- Advanced Neural Network Applications
- Explainable Artificial Intelligence (XAI)
- Domain Adaptation and Few-Shot Learning
- Experimental Learning in Engineering
- Machine Learning and Data Classification
- Security and Verification in Computing
- Cinema and Media Studies
- Recommender Systems and Techniques
- Speech Recognition and Synthesis
- Mathematics Education and Programs
- Mathematics Education and Teaching Techniques
- Data Mining Algorithms and Applications
- Aesthetic Perception and Analysis
Duke University
2024
University of Chicago
2020-2023
Berkeley College
2023
University of California, Berkeley
2023
Cornell University
2023
University of Illinois Chicago
2020-2022
Backdoor attacks embed hidden malicious behaviors into deep learning models, which only activate and cause misclassifications on model inputs containing a specific "trigger." Existing work on backdoor attacks and defenses, however, mostly focuses on digital attacks that apply digitally generated patterns as triggers. A critical question remains unanswered: "can backdoor attacks succeed using physical objects as triggers, thus making them a credible threat against deep learning systems in the real world?" We conduct a detailed empirical study to explore...
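To make the trigger mechanism concrete, here is a minimal sketch (NumPy only, with a hypothetical white-square trigger) of how a digitally generated trigger is stamped onto an input; the question the study raises is whether physical objects can play the same role.

```python
import numpy as np

def apply_digital_trigger(image: np.ndarray, trigger: np.ndarray,
                          top: int = 0, left: int = 0) -> np.ndarray:
    """Stamp a small trigger patch onto an image of shape (H, W, C), values in [0, 1]."""
    patched = image.copy()
    h, w = trigger.shape[:2]
    patched[top:top + h, left:left + w] = trigger
    return patched

# A hypothetical 8x8 white-square trigger in the top-left corner.
trigger = np.ones((8, 8, 3), dtype=np.float32)
image = np.random.rand(32, 32, 3).astype(np.float32)
poisoned_input = apply_digital_trigger(image, trigger)
# A backdoored classifier would map poisoned_input to the attacker's target label,
# while behaving normally on the clean image.
```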
Today's proliferation of powerful facial recognition systems poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvass the Internet for data and train highly accurate facial recognition models of individuals without their knowledge. We need tools to protect ourselves from potential misuse by unauthorized systems. Unfortunately, no practical or effective solutions exist. In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models. Fawkes achieves this by helping users...
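As a rough illustration of the cloaking idea (not Fawkes' actual algorithm, which also constrains perceptual distortion), the sketch below perturbs an image so that its features under a hypothetical differentiable face-embedding extractor `phi` drift toward those of a dissimilar target identity, while keeping the perturbation barely perceptible. All names and parameters here are assumptions for illustration.

```python
import torch

def compute_cloak(image, target_image, phi, budget=0.05, steps=200, lr=0.01):
    """Gradient-based sketch: move the features of `image` (under extractor `phi`)
    toward those of a dissimilar `target_image`, under an L-infinity budget."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target_feat = phi(target_image).detach()
    for _ in range(steps):
        feat = phi(torch.clamp(image + delta, 0, 1))
        loss = torch.nn.functional.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation barely perceptible.
        delta.data.clamp_(-budget, budget)
    return torch.clamp(image + delta.detach(), 0, 1)
```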
Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to displace many in the professional artist community. In particular, these models can learn to mimic the artistic style of specific artists after "fine-tuning" on samples of their art. In this paper, we describe the design, implementation and evaluation of Glaze, a tool that enables artists to apply "style cloaks" to their art before sharing it online. These cloaks apply barely perceptible perturbations to images, and when used as training data, mislead generative models that try to mimic the artist...
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or make it difficult or costly to compute adversarial examples that exploit them. In our work, we explore a new "honeypot" approach to protect DNN models. We intentionally inject trapdoors, honeypot weaknesses in the classification manifold that attract attackers searching for adversarial examples. Attackers' optimization algorithms gravitate towards trapdoors, leading them to produce attacks similar to trapdoors in the feature...
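A simplified sketch of the detection side of this idea, with hypothetical helper names: compute a trapdoor "signature" in feature space and flag queries whose internal representation gravitates toward it. The actual defense uses per-label trapdoors and calibrated thresholds; this is only an illustration.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def trapdoor_signature(feature_fn, trapdoored_inputs):
    """Average feature-space representation of inputs carrying the trapdoor pattern."""
    return np.mean([feature_fn(x) for x in trapdoored_inputs], axis=0)

def looks_like_trapdoor(feature_fn, query, signature, threshold=0.9):
    """Flag inputs whose internal representation is unusually close to the trapdoor."""
    return cosine(feature_fn(query), signature) >= threshold
```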
As companies continue to invest heavily in larger, more accurate and robust deep learning models, they are exploring approaches to monetize their models while protecting their intellectual property. Model licensing is a promising approach, but it requires a robust tool for owners to claim ownership of models, i.e. a watermark. Unfortunately, current watermark designs have not been able to address piracy attacks, where third parties falsely claim ownership of a model by embedding their own "pirate watermarks" into an already-watermarked model. We observe that resistance...
Advances in deep learning have introduced a new wave of voice synthesis tools, capable of producing audio that sounds as if spoken by a target speaker. If successful, such tools in the wrong hands will enable a range of powerful attacks against both humans and software systems (aka machines). This paper documents efforts and findings from a comprehensive experimental study on the impact of deep-learning based speech synthesis attacks on human listeners and machines such as speaker recognition and voice-signin systems. We find that both humans and machines can be reliably fooled...
The rapid adoption of facial recognition (FR) technology by both government and commercial entities in recent years has raised concerns about civil liberties and privacy. In response, a broad suite of so-called "anti-facial recognition" (AFR) tools has been developed to help users avoid unwanted facial recognition. The set of AFR tools proposed in the last few years is wide-ranging and rapidly evolving, necessitating a step back to consider the broader design space of AFR systems and long-term challenges. This paper aims to fill that gap and provides the first...
Learning with Errors (LWE) is a hard math problem underpinning many proposed post-quantum cryptographic (PQC) systems. The only PQC Key Encapsulation Mechanism (KEM) standardized by NIST [13] is based on module LWE [2], and current publicly available PQ Homomorphic Encryption (HE) libraries are based on ring LWE. The security of LWE-based cryptosystems is critical, but certain implementation choices could weaken them. One such choice is sparse binary secrets, desirable for HE schemes for efficiency reasons. Prior work...
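For concreteness, a minimal NumPy sketch of LWE sample generation with a sparse binary secret: b = A·s + e mod q, where s has a small Hamming weight. The parameters below are illustrative only, not those of any standard or HE library.

```python
import numpy as np

def lwe_samples(n=64, m=128, q=3329, hamming_weight=8, sigma=3.2, rng=None):
    """Generate m LWE samples (A, b) with b = A @ s + e mod q,
    where s is a sparse binary secret with the given Hamming weight."""
    rng = rng or np.random.default_rng(0)
    A = rng.integers(0, q, size=(m, n))
    s = np.zeros(n, dtype=np.int64)
    s[rng.choice(n, size=hamming_weight, replace=False)] = 1  # sparse binary secret
    e = np.rint(rng.normal(0, sigma, size=m)).astype(np.int64)  # small Gaussian error
    b = (A @ s + e) % q
    return A, b, s
```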
Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, "quantum resistant" cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With Errors (LWE), have emerged as strong contenders for standardization. In this work, we train transformers to perform modular arithmetic and combine half-trained models with statistical cryptanalysis techniques to propose SALSA: a machine learning attack on LWE-based...
Learning with Errors (LWE) is a hard math problem underlying recently standardized post-quantum cryptography (PQC) systems for key exchange and digital signatures. Prior work proposed new machine learning (ML)-based attacks on LWE problems with small, sparse secrets, but these attacks require millions of LWE samples to train on and take days to recover secrets. We propose three key methods -- better preprocessing, angular embeddings and model pre-training -- to improve these attacks, speeding up preprocessing by $25\times$ and improving...
Deep learning systems are known to be vulnerable to adversarial examples. In particular, query-based black-box attacks do not require knowledge of the deep learning model, but can compute adversarial examples over the network by submitting queries and inspecting returns. Recent work largely improves the efficiency of those attacks, demonstrating their practicality on today's ML-as-a-service platforms. We propose Blacklight, a new defense against query-based black-box attacks. The fundamental insight driving our design is that, to compute adversarial examples, these...
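A highly simplified sketch of the fingerprinting idea (the window, quantization, and threshold values here are hypothetical, not Blacklight's exact procedure): hash overlapping windows of a quantized query, keep a small set of hashes as its fingerprint, and flag queries whose fingerprints overlap heavily with earlier ones, since query-based attacks submit many near-identical inputs.

```python
import hashlib
import numpy as np

def fingerprint(image: np.ndarray, window: int = 20, step: int = 10,
                quant: int = 50, top_k: int = 50) -> set:
    """Quantize an image (integer pixels in [0, 255]), hash overlapping windows of
    the flattened pixels, and keep the lexicographically smallest top_k hashes."""
    flat = (image.astype(np.int64).flatten() // quant).astype(np.uint8)
    hashes = []
    for i in range(0, len(flat) - window + 1, step):
        hashes.append(hashlib.sha256(flat[i:i + window].tobytes()).hexdigest())
    return set(sorted(hashes)[:top_k])

def is_attack_query(fp: set, history: list, threshold: int = 25) -> bool:
    """Flag a query whose fingerprint overlaps heavily with any previously seen query."""
    return any(len(fp & prev) >= threshold for prev in history)
```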
Learning with Errors (LWE) is a hard math problem used in post-quantum cryptography. Homomorphic Encryption (HE) schemes rely on the hardness of LWE for their security, and two LWE-based cryptosystems were recently standardized by NIST for digital signatures and key exchange (KEM). Thus, it is critical to continue assessing the security of specific parameter choices. For example, HE uses secrets with small entries, and the community has considered standardizing sparse secrets to improve efficiency and functionality. However, prior work,...
Server breaches are an unfortunate reality on today's Internet. In the context of deep neural network (DNN) models, they are particularly harmful, because a leaked model gives an attacker "white-box" access to generate adversarial examples, a threat that has no practical robust defenses. For practitioners who have invested years and millions into proprietary DNNs, e.g. medical imaging, this seems like an inevitable disaster looming on the horizon. In this paper, we consider the problem of post-breach recovery for DNN models...
Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation. Known attacks on sparse binary LWE secrets include the sparse dual attack and the hybrid dual meet-in-the-middle attack, which requires significant memory. In this paper, we provide a new statistical attack with a low memory requirement. The attack relies on some initial lattice reduction. The key observation is that, after lattice reduction is applied to the rows of a q-ary-like embedded random matrix $\mathbf{A}$, the entries with high variance...
Modular addition is, on its face, a simple operation: given $N$ elements in $\mathbb{Z}_q$, compute their sum modulo $q$. Yet, scalable machine learning solutions to this problem remain elusive: prior work trains ML models that sum $N \le 6$ elements mod $q \le 1000$. Promising applications of ML models for cryptanalysis -- which often involve modular arithmetic with large $N$ and $q$ -- motivate reconsideration of this problem. This work proposes three changes to the model training pipeline: more diverse training data, an angular embedding, and a custom...
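A minimal sketch of an angular embedding for residues mod $q$: map each value onto the unit circle so that $q-1$ and $0$ are neighbors, matching the cyclic structure of $\mathbb{Z}_q$. The exact embedding used in the paper may differ; this is only the underlying idea.

```python
import numpy as np

def angular_embedding(x: np.ndarray, q: int) -> np.ndarray:
    """Map residues mod q onto the unit circle, so that q-1 and 0 end up close together."""
    theta = 2 * np.pi * (x % q) / q
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

# Example with q = 1000: 0 and 999 map to nearby points, while 500 is on the opposite side.
print(angular_embedding(np.array([0, 999, 500]), q=1000))
```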
Lattice cryptography schemes based on the learning with errors (LWE) hardness assumption have been standardized by NIST for use as post-quantum cryptosystems, and by HomomorphicEncryption.org for encrypted compute on sensitive data. Thus, understanding their concrete security is critical. Most work on LWE security focuses on theoretical estimates of attack performance, which are important but may overlook nuances arising in real-world implementations. The sole existing benchmarking effort, the Darmstadt Challenge, does not...
Today, creators of data-hungry deep neural networks (DNNs) scour the Internet for training fodder, leaving users with little control over or knowledge of when their data, and in particular their images, are used to train models. To empower users to counteract unwanted data use, we design, implement and evaluate a practical system that enables users to detect if their data was used to train a DNN model for image classification. We show how users can create special images we call isotopes, which introduce "spurious features" into DNNs during training. With only...
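A toy sketch of the isotope idea, with hypothetical helper names and parameters: blend a faint mark into images before sharing them, then at audit time check whether a suspect model's confidence on the owner's label rises when the mark is added to held-out probe images.

```python
import numpy as np

def add_isotope_mark(image: np.ndarray, mark: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Blend a faint 'spurious feature' into an image (values in [0, 1]) before sharing."""
    return np.clip((1 - alpha) * image + alpha * mark, 0.0, 1.0)

def probe_model(model_probs, probes: np.ndarray, mark: np.ndarray,
                alpha: float = 0.1, label: int = 0) -> float:
    """Audit sketch: `model_probs` returns class probabilities for a batch of images.
    If the model trained on marked images, its confidence on the owner's label should
    rise when the mark is added to held-out probes."""
    marked = np.stack([add_isotope_mark(p, mark, alpha) for p in probes])
    lift = model_probs(marked)[:, label].mean() - model_probs(probes)[:, label].mean()
    return lift  # a large positive lift suggests the owner's data was used in training
```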