Emily Wenger

ORCID: 0009-0006-3346-8226
Research Areas
  • Adversarial Robustness in Machine Learning
  • Anomaly Detection Techniques and Applications
  • Face recognition and analysis
  • Advanced Malware Detection Techniques
  • Biometric Identification and Security
  • Cryptography and Data Security
  • Digital Media Forensic Detection
  • Privacy-Preserving Technologies in Data
  • User Authentication and Security Systems
  • Coding theory and cryptography
  • Cryptographic Implementations and Security
  • Cryptography and Residue Arithmetic
  • Advanced Neural Network Applications
  • Explainable Artificial Intelligence (XAI)
  • Domain Adaptation and Few-Shot Learning
  • Experimental Learning in Engineering
  • Machine Learning and Data Classification
  • Security and Verification in Computing
  • Cinema and Media Studies
  • Recommender Systems and Techniques
  • Speech Recognition and Synthesis
  • Mathematics Education and Programs
  • Mathematics Education and Teaching Techniques
  • Data Mining Algorithms and Applications
  • Aesthetic Perception and Analysis

Duke University
2024

University of Chicago
2020-2023

Berkeley College
2023

University of California, Berkeley
2023

Cornell University
2023

University of Illinois Chicago
2020-2022

Backdoor attacks embed hidden malicious behaviors into deep learning models, which only activate and cause misclassifications on model inputs containing a specific "trigger." Existing works on backdoor attacks and defenses, however, mostly focus on digital attacks that apply digitally generated patterns as triggers. A critical question remains unanswered: "can backdoor attacks succeed using physical objects as triggers, thus making them a credible threat against deep learning systems in the real world?" We conduct a detailed empirical study to explore...

10.1109/cvpr46437.2021.00614 article EN 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021-06-01
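
The abstract above describes backdoors whose hidden behavior activates only on inputs carrying a trigger. A minimal, hypothetical sketch of the data-poisoning step, assuming numpy image arrays (this is an illustration of the general technique, not the paper's code):

```python
# Illustrative sketch: poisoning a training set with a trigger patch.
# Inputs carrying the patch are relabeled to the attacker's target class.
import numpy as np

def apply_trigger(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Paste a small square patch in the bottom-right corner (toy stand-in for a trigger object)."""
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = 255  # white patch as a toy trigger
    return poisoned

def poison_dataset(images, labels, target_label: int, rate: float = 0.05, seed: int = 0):
    """Stamp the trigger onto a small fraction of images and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_label
    return images, labels
```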

Today's proliferation of powerful facial recognition systems poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvass the Internet for data and train highly accurate facial recognition models of individuals without their knowledge. We need tools to protect ourselves from potential misuses of unauthorized facial recognition systems. Unfortunately, no practical or effective solutions exist. In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models. Fawkes achieves this by helping users...

10.48550/arxiv.2002.08327 preprint EN other-oa arXiv (Cornell University) 2020-01-01
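
The "inoculation" idea above rests on small, bounded image perturbations that shift a face's learned features. A minimal sketch under stated assumptions (a differentiable feature_extractor standing in for a real face-embedding model; the actual optimization and perceptual budget in the paper differ):

```python
# Hypothetical cloaking sketch: optimize a small perturbation that pushes the
# image's feature representation toward a decoy identity's features.
import torch

def cloak(image, decoy_image, feature_extractor, eps=0.03, steps=100, lr=0.01):
    delta = torch.zeros_like(image, requires_grad=True)
    decoy_feat = feature_extractor(decoy_image).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = feature_extractor((image + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feat, decoy_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the cloak barely perceptible
    return (image + delta).clamp(0, 1).detach()
```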

Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to displace many in the professional artist community. In particular, these models can learn to mimic the artistic style of specific artists after "fine-tuning" on samples of their art. In this paper, we describe the design, implementation and evaluation of Glaze, a tool that enables artists to apply "style cloaks" to their art before sharing it online. These cloaks apply barely perceptible perturbations to images, and when used as training data, mislead generative models that try to mimic a specific artist...

10.48550/arxiv.2302.04222 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Deep neural networks (DNN) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or make it difficult or costly to compute adversarial examples that exploit them. In our work, we explore a new "honeypot" approach to protect DNN models. We intentionally inject trapdoors, honeypot weaknesses in the classification manifold that attract attackers searching for adversarial examples. Attackers' optimization algorithms gravitate towards trapdoors, leading them to produce attacks similar to trapdoors in the feature...

10.1145/3372297.3417231 article EN Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security 2020-10-30
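
The defense described above detects attacks whose optimization converges toward the injected trapdoor. A simplified, hypothetical sketch (names and threshold are assumptions, not the paper's implementation): record each trapdoor's mean feature-space "signature", then flag inputs whose features are unusually close to it.

```python
# Sketch of trapdoor-based detection via feature similarity to a recorded signature.
import numpy as np

def trapdoor_signature(feature_fn, trapdoored_inputs):
    """Average feature vector of inputs stamped with a trapdoor perturbation."""
    feats = np.stack([feature_fn(x) for x in trapdoored_inputs])
    return feats.mean(axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def is_adversarial(feature_fn, x, signature, threshold=0.9):
    """Adversarial-example search tends to converge near the trapdoor's feature region."""
    return cosine(feature_fn(x), signature) > threshold
```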

As companies continue to invest heavily in larger, more accurate and robust deep learning models, they are exploring approaches to monetize their models while protecting intellectual property. Model licensing is promising, but requires a robust tool for owners to claim ownership of models, i.e. a watermark. Unfortunately, current watermark designs have not been able to address piracy attacks, where third parties falsely claim model ownership by embedding their own "pirate watermarks" into an already-watermarked model. We observe that resistance...

10.48550/arxiv.1910.01226 preprint EN cc-by arXiv (Cornell University) 2019-01-01

Advances in deep learning have introduced a new wave of voice synthesis tools, capable of producing audio that sounds as if spoken by a target speaker. If successful, such tools in the wrong hands will enable a range of powerful attacks against both humans and software systems (aka machines). This paper documents efforts and findings from a comprehensive experimental study on the impact of deep-learning based speech synthesis attacks on human listeners and machines such as speaker recognition and voice-signin systems. We find that both can be reliably fooled...

10.1145/3460120.3484742 article EN public-domain Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security 2021-11-12

The rapid adoption of facial recognition (FR) technology by both government and commercial entities in recent years has raised concerns about civil liberties and privacy. In response, a broad suite of so-called "anti-facial recognition" (AFR) tools has been developed to help users avoid unwanted facial recognition. The set of AFR tools proposed in the last few years is wide-ranging and rapidly evolving, necessitating a step back to consider the broader design space of AFR systems and long-term challenges. This paper aims to fill that gap and provides the first...

10.1109/sp46215.2023.10179445 article EN 2023 IEEE Symposium on Security and Privacy (SP) 2023-05-01

Learning with Errors (LWE) is a hard math problem underpinning many proposed post-quantum cryptographic (PQC) systems. The only PQC Key Encapsulation Mechanism (KEM) standardized by NIST [13] is based on module LWE [2], and current publicly available PQ Homomorphic Encryption (HE) libraries are based on ring LWE. The security of LWE-based cryptosystems is critical, but certain implementation choices could weaken them. One such choice is sparse binary secrets, desirable for HE schemes for efficiency reasons. Prior work...

10.1145/3576915.3623076 article EN Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security 2023-11-15
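
For readers unfamiliar with the problem discussed above: an LWE sample pairs a random vector a with b = <a, s> + e mod q, where s is the secret and e a small error. A toy generator with a sparse binary secret, at parameter sizes far below real cryptographic settings (an illustration, not the paper's setup):

```python
# Toy LWE sample generator with a sparse binary secret.
import numpy as np

def gen_lwe_samples(m=100, n=32, q=3329, hamming_weight=4, sigma=3.0, seed=0):
    rng = np.random.default_rng(seed)
    s = np.zeros(n, dtype=np.int64)
    s[rng.choice(n, size=hamming_weight, replace=False)] = 1     # sparse binary secret
    A = rng.integers(0, q, size=(m, n), dtype=np.int64)
    e = np.rint(rng.normal(0, sigma, size=m)).astype(np.int64)   # small Gaussian error
    b = (A @ s + e) % q                                          # b = A·s + e mod q
    return A, b, s
```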

Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, "quantum resistant" cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With Errors (LWE), have emerged as strong contenders for standardization. In this work, we train transformers to perform modular arithmetic and combine half-trained models with statistical cryptanalysis techniques to propose SALSA: a machine learning attack on LWE-based...

10.48550/arxiv.2207.04785 preprint EN cc-by arXiv (Cornell University) 2022-01-01

Learning with Errors (LWE) is a hard math problem underlying recently standardized post-quantum cryptography (PQC) systems for key exchange and digital signatures. Prior work proposed new machine learning (ML)-based attacks on LWE problems with small, sparse secrets, but these attacks require millions of LWE samples to train on and take days to recover secrets. We propose three key methods -- better preprocessing, angular embeddings and model pre-training -- to improve these attacks, speeding up preprocessing by $25\times$ and improving...

10.48550/arxiv.2402.01082 preprint EN arXiv (Cornell University) 2024-02-01
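
One of the methods named above is an angular embedding. A minimal sketch of the general idea as I understand it (an assumption about the concept, not the paper's exact architecture): residues mod q are mapped onto the unit circle so that the wrap-around at q stops being a discontinuity for the model.

```python
# Map residues in Z_q to (cos, sin) pairs on the unit circle.
import numpy as np

def angular_embedding(x: np.ndarray, q: int) -> np.ndarray:
    theta = 2 * np.pi * (x % q) / q
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

# 0 and q embed identically, and q-1 lands next to 0:
print(angular_embedding(np.array([0, 1, 3328, 3329]), q=3329))
```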

Deep learning systems are known to be vulnerable to adversarial examples. In particular, query-based black-box attacks do not require knowledge of the deep learning model, but can compute adversarial examples over the network by submitting queries and inspecting returns. Recent work largely improves the efficiency of those attacks, demonstrating their practicality on today's ML-as-a-service platforms. We propose Blacklight, a new defense against query-based black-box attacks. The fundamental insight driving our design is that, to compute adversarial examples, these...

10.48550/arxiv.2006.14042 preprint EN other-oa arXiv (Cornell University) 2020-01-01
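
The insight above is that iterative black-box attacks submit many near-duplicate queries. A greatly simplified, hypothetical sketch of that detection idea (the paper's actual fingerprinting scheme differs): hash a coarsely quantized version of each query and flag repeats.

```python
# Simplified near-duplicate query detector.
import hashlib
import numpy as np

class QueryDetector:
    def __init__(self, quant_step: int = 32):
        self.quant_step = quant_step
        self.seen = set()

    def fingerprint(self, image: np.ndarray) -> bytes:
        coarse = (image.astype(np.int64) // self.quant_step).astype(np.uint8)
        return hashlib.sha256(coarse.tobytes()).digest()

    def is_suspicious(self, image: np.ndarray) -> bool:
        fp = self.fingerprint(image)
        repeated = fp in self.seen
        self.seen.add(fp)
        return repeated  # near-duplicate queries collide after quantization
```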

Learning with Errors (LWE) is a hard math problem used in post-quantum cryptography. Homomorphic Encryption (HE) schemes rely on the hardness of LWE for their security, and two LWE-based cryptosystems were recently standardized by NIST for digital signatures and key exchange (KEM). Thus, it is critical to continue assessing the security of LWE instances with specific parameter choices. For example, HE uses secrets with small entries, and the community has considered standardizing sparse secrets to improve efficiency and functionality. However, prior work,...

10.48550/arxiv.2306.11641 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Server breaches are an unfortunate reality on today's Internet. In the context of deep neural network (DNN) models, they are particularly harmful, because a leaked model gives an attacker "white-box" access to generate adversarial examples, a threat that has no practical robust defenses. For practitioners who have invested years and millions into proprietary DNNs, e.g. for medical imaging, this seems like an inevitable disaster looming on the horizon. In this paper, we consider the problem of post-breach recovery for DNN models...

10.1145/3548606.3560561 article EN Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security 2022-11-07

Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation. Known attacks on sparse binary LWE secrets include the dual attack and the hybrid dual meet-in-the-middle attack, which requires significant memory. In this paper, we provide a new statistical attack with a low memory requirement. The attack relies on some initial lattice reduction. The key observation is that, after lattice reduction is applied to the rows of a q-ary-like embedded random matrix $\mathbf A$, the entries with high variance...

10.48550/arxiv.2403.10328 preprint EN arXiv (Cornell University) 2024-03-15

Modular addition is, on its face, a simple operation: given $N$ elements in $\mathbb{Z}_q$, compute their sum modulo $q$. Yet, scalable machine learning solutions to this problem remain elusive: prior work trains ML models that sum only $N \le 6$ elements mod $q \le 1000$. Promising applications of ML models for cryptanalysis-which often involve modular arithmetic with large $N$ and $q$-motivate reconsideration of this problem. This work proposes three changes to the model training pipeline: more diverse training data, an angular embedding, and a custom...

10.48550/arxiv.2410.03569 preprint EN arXiv (Cornell University) 2024-10-04
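
A worked example of the task described above: given $N$ elements of $\mathbb{Z}_q$, predict their sum mod $q$. A toy data generator like this (details assumed, not taken from the paper) is the natural starting point for training an ML model on the problem.

```python
# Generate (inputs, targets) pairs for the modular addition task.
import numpy as np

def modular_addition_dataset(num_samples=10_000, N=6, q=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, q, size=(num_samples, N))
    y = x.sum(axis=1) % q
    return x, y

x, y = modular_addition_dataset()
assert y[0] == x[0].sum() % 1000
```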

Lattice cryptography schemes based on the learning with errors (LWE) hardness assumption have been standardized by NIST for use as post-quantum cryptosystems, and by HomomorphicEncryption.org for encrypted compute on sensitive data. Thus, understanding their concrete security is critical. Most work on LWE security focuses on theoretical estimates of attack performance, which are important but may overlook nuances arising in real-world implementations. The sole existing benchmarking effort, the Darmstadt Challenge, does not...

10.48550/arxiv.2408.00882 preprint EN arXiv (Cornell University) 2024-08-01

Today, creators of data-hungry deep neural networks (DNNs) scour the Internet for training fodder, leaving users with little control over or knowledge of when their data, and in particular their images, are used to train models. To empower users to counteract unwanted data use, we design, implement and evaluate a practical system that enables users to detect if their data was used to train a DNN model for image classification. We show how users can create special images we call isotopes, which introduce ``spurious features'' into DNNs during training. With only...

10.56553/popets-2024-0024 article EN cc-by Proceedings on Privacy Enhancing Technologies 2023-10-22
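
An illustrative sketch of the isotope idea described above (the helper names and blending scheme are assumptions, not the paper's method): a user blends a faint pattern into images before posting, and later a higher model confidence on marked versus unmarked probe images hints that the model trained on the user's data.

```python
# Hypothetical sketch: tag images with a faint spurious pattern, then probe a model.
import numpy as np

def make_pattern(shape, seed=0):
    rng = np.random.default_rng(seed)
    return rng.uniform(0, 255, size=shape)

def tag_image(image: np.ndarray, pattern: np.ndarray, alpha: float = 0.08) -> np.ndarray:
    blended = (1 - alpha) * image.astype(np.float64) + alpha * pattern
    return np.clip(blended, 0, 255).astype(np.uint8)

def isotope_score(predict_proba, probes, pattern, label) -> float:
    """Mean confidence gain on the suspected label when the pattern is present."""
    gains = [predict_proba(tag_image(p, pattern))[label] - predict_proba(p)[label]
             for p in probes]
    return float(np.mean(gains))
```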