Juston Moore

ORCID: 0000-0003-2515-3647
Research Areas
  • Adversarial Robustness in Machine Learning
  • Anomaly Detection Techniques and Applications
  • Network Security and Intrusion Detection
  • Digital Media Forensic Detection
  • Machine Learning in Materials Science
  • Music and Audio Processing
  • Cryptography and Data Security
  • Time Series Analysis and Forecasting
  • Tensor decomposition and applications
  • Advanced Neural Network Applications
  • Image Processing Techniques and Applications
  • Advanced Thermodynamics and Statistical Mechanics
  • Advanced Image Processing Techniques
  • Speech and Audio Processing
  • Robot Manipulation and Learning
  • Machine Learning and Algorithms
  • Face Recognition and Perception
  • Advancements in Semiconductor Devices and Circuit Design
  • Diverse Musicological Studies
  • Cell Image Analysis Techniques
  • Advanced Memory and Neural Computing
  • Machine Learning and Data Classification
  • Green IT and Sustainability
  • Cryptographic Implementations and Security
  • Complex Network Analysis Techniques

Los Alamos National Laboratory
2016-2025

ABSTRACT There are a number of hypotheses underlying the existence of adversarial examples for classification problems. These include the high-dimensionality of the data, high codimension in the ambient space of the data manifolds of interest, and that the structure of machine learning models may encourage classifiers to develop decision boundaries close to data points. This article proposes a new framework for studying adversarial examples that does not depend directly on the distance to the decision boundary. Similarly to the smoothed classifier literature, we define a (natural or...

10.1002/sam.11716 article EN cc-by Statistical Analysis and Data Mining: The ASA Data Science Journal 2025-01-21
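As a concrete illustration of the smoothed-classifier perspective the abstract builds on, the sketch below estimates how stable a classifier's prediction is under Gaussian perturbations of a point. The classifier, noise scale, and sample count are all toy stand-ins, not the paper's construction.

```python
# Hypothetical sketch: estimate prediction stability under Gaussian noise,
# in the spirit of the smoothed-classifier literature cited above.
import numpy as np

def stability_estimate(classify, x, sigma=0.1, n_samples=1000, seed=None):
    """Fraction of noisy copies of x that keep the clean prediction."""
    rng = np.random.default_rng(seed)
    base = classify(x)
    noisy = x + sigma * rng.standard_normal((n_samples, x.size))
    return sum(classify(z) == base for z in noisy) / n_samples

# Toy linear classifier: sign of the first coordinate.
clf = lambda v: int(v[0] > 0)
print(stability_estimate(clf, np.array([0.05, 1.0]), sigma=0.1, seed=0))
```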

Anomaly detection systems are a promising tool to identify compromised user credentials and malicious insiders in enterprise networks. Most existing approaches for modelling user behaviour rely on either independent observations for each user or on pre-defined user peer groups. A method is proposed based on recommender system algorithms to learn overlapping user peer groups and to use this learned structure to detect anomalous activity. Results analysing the authentication and process-running activities of thousands of users show that the method can...

10.1109/isi.2016.7745472 article EN 2016-09-01
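The paper's algorithm is only summarized above, so this is a minimal sketch of the general idea under stated assumptions: non-negative matrix factorization learns overlapping latent groups from a user-by-event-type count matrix, and activity poorly explained by the learned structure is flagged. The data, rank, and residual score are toy choices.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(50, 30))     # users x event types (toy data)

model = NMF(n_components=5, init="nndsvda", max_iter=500)
W = model.fit_transform(counts)              # overlapping group memberships
H = model.components_                        # group-level event profiles

recon = W @ H
anomaly_score = ((counts - recon) ** 2).sum(axis=1)   # per-user residual
print("most anomalous users:", anomaly_score.argsort()[-3:])
```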

Bayesian optimization over the latent spaces of deep autoencoder models (DAEs) has recently emerged as a promising new approach for optimizing challenging black-box functions over structured, discrete, hard-to-enumerate search spaces (e.g., molecules). Here the DAE dramatically simplifies the search space by mapping inputs into a continuous latent space where familiar Bayesian optimization tools can be more readily applied. Despite this simplification, the latent space typically remains high-dimensional. Thus, even with a well-suited latent space, these approaches do not...

10.48550/arxiv.2201.11872 preprint EN cc-by-sa arXiv (Cornell University) 2022-01-01
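A stripped-down view of latent-space Bayesian optimization, assuming a trained decoder is already available; here `decode` is a hypothetical stand-in for the DAE decoder, and an upper-confidence-bound rule over random candidates replaces a full acquisition optimizer.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

decode = lambda z: np.tanh(z)                    # hypothetical DAE decoder
objective = lambda x: -np.sum((x - 0.5) ** 2)    # black-box score to maximize

rng = np.random.default_rng(1)
Z = rng.uniform(-2, 2, size=(5, 2))              # initial latent designs
y = np.array([objective(decode(z)) for z in Z])

for _ in range(20):
    gp = GaussianProcessRegressor().fit(Z, y)
    cand = rng.uniform(-2, 2, size=(256, 2))     # random latent candidates
    mu, sd = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(mu + sd)]            # UCB acquisition
    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(decode(z_next)))

print("best latent point:", Z[np.argmax(y)], "score:", y.max())
```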

Distinguishing malicious anomalous activities from unusual but benign activities is a fundamental challenge for cyber defenders. Prior studies have shown that statistical user behavior analysis yields accurate detections by learning behavior profiles from observed user activity. These unsupervised models are able to generalize to unseen types of attacks by detecting deviations from normal behavior without knowledge of specific attack signatures. However, the approaches proposed to date based on probabilistic matrix factorization are limited by the...

10.1145/3519602 article EN Digital Threats Research and Practice 2022-04-12
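One way to make the probabilistic-matrix-factorization baseline concrete: score each cell of a new activity window by its surprise (negative log-likelihood) under a factorized Poisson rate model. The factors below are random stand-ins for a fitted model.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
W = rng.gamma(1.0, 1.0, size=(20, 4))     # user factors (stand-ins for a fit)
H = rng.gamma(1.0, 1.0, size=(4, 15))     # resource factors
rates = W @ H                             # expected event counts

new_counts = rng.poisson(rates)           # a new window of activity
new_counts[3, 7] += 25                    # inject an unusual burst

surprise = -poisson.logpmf(new_counts, rates)   # per-cell anomaly score
u, v = np.unravel_index(surprise.argmax(), surprise.shape)
print(f"most surprising cell: user {u}, resource {v}")
```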

As the scale of high performance computing facilities approaches the exascale era, gaining a detailed understanding of hardware failures becomes increasingly important. In particular, the extreme memory capacity of modern supercomputers means that data corruption errors which were statistically negligible at smaller scales will become more prevalent. In order to understand these faults and mitigate their adverse effects on workloads, we must learn from the behavior of current hardware. In this work, we investigate the predictability of DRAM faults using...

10.1109/dft.2018.8602983 article EN 2018-10-01
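The prediction task can be pictured as supervised learning over per-module histories. This toy sketch trains a random forest on synthetic features; the feature set and label rule are invented for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
prior_errors = rng.poisson(0.5, n)        # corrected errors seen so far
temperature = rng.normal(55, 5, n)        # average module temperature (C)
age_months = rng.uniform(0, 60, n)
X = np.column_stack([prior_errors, temperature, age_months])
# Synthetic label: fault in the next window (toy generative rule).
y = (prior_errors + 0.1 * (temperature - 55) + rng.normal(0, 1, n) > 2).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```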

As the attack surfaces of large enterprise networks grow, anomaly detection systems based on statistical user behavior analysis play a crucial role in identifying malicious activities. Previous work has shown that link prediction algorithms based on non-negative matrix factorization learn highly accurate predictive models of user actions. However, most such models have been constructed over bipartite graphs, and fail to capture the nuanced, multi-faceted details of a user's activity profile. This paper establishes a new benchmark for...

10.1109/isi49825.2020.9280524 article EN 2020-11-09
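For reference, the bipartite baseline the paper improves on can be sketched as follows: hold out some cells of a user-host adjacency matrix, factorize the rest, and rank held-out edges by their reconstructed scores. Sizes and densities are arbitrary.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
A = (rng.random((40, 25)) < 0.15).astype(float)   # observed user-host edges

mask = rng.random(A.shape) < 0.1                  # hold out ~10% of cells
train = A * ~mask

model = NMF(n_components=6, init="nndsvda", max_iter=500)
scores = model.fit_transform(train) @ model.components_

held = scores[mask]
print("mean held-out score, positives:", held[A[mask] == 1].mean(),
      "negatives:", held[A[mask] == 0].mean())
```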

Neural network verification aims at providing formal guarantees on the output of trained neural networks, to ensure their robustness against adversarial examples and enable their deployment in safety-critical applications. This paper introduces a new approach to neural network verification using a novel mixed-integer programming rolling-horizon decomposition method. The algorithm leverages the layered structure of neural networks by employing optimization-based bound-tightening on smaller sub-graphs of the original network in a rolling-horizon fashion. This strategy strikes a balance between...

10.48550/arxiv.2401.05280 preprint EN other-oa arXiv (Cornell University) 2024-01-01
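The core building block of MIP-based verification is the big-M encoding of a single ReLU unit, shown below with PuLP. The weights, input box, and bound M are illustrative; the paper's contribution is tightening such bounds over sub-graphs, which this sketch does not attempt.

```python
import pulp

M = 10.0                                    # assumed pre-activation bound
prob = pulp.LpProblem("relu_bound", pulp.LpMaximize)

x = [pulp.LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(2)]
pre = 0.7 * x[0] - 0.3 * x[1] + 0.1         # affine pre-activation
y = pulp.LpVariable("y", lowBound=0)        # post-activation output
z = pulp.LpVariable("z", cat="Binary")      # 1 if the unit is active

prob += y >= pre                            # y >= pre-activation
prob += y <= pre + M * (1 - z)              # tight when active
prob += y <= M * z                          # zero when inactive
prob += y                                   # objective: maximize the output

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("upper bound on ReLU output:", pulp.value(y))
```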

There are a number of hypotheses underlying the existence of adversarial examples for classification problems. These include the high-dimensionality of the data, high codimension in the ambient space of the data manifolds of interest, and that the structure of machine learning models may encourage classifiers to develop decision boundaries close to data points. This article proposes a new framework for studying adversarial examples that does not depend directly on the distance to the decision boundary. Similarly to the smoothed classifier literature, we define a (natural or adversarial) point...

10.48550/arxiv.2404.08069 preprint EN arXiv (Cornell University) 2024-04-11

Industry 4.0 introduced AI as a transformative solution for modernizing manufacturing processes. Its successor, Industry 5.0, envisions humans as collaborators and experts guiding these AI-driven solutions. Developing such techniques necessitates algorithms capable of safe, real-time identification of human positions in a scene, particularly their hands, during collaborative assembly. Although substantial efforts have curated datasets for hand segmentation, most focus on residential or commercial domains. Existing...

10.48550/arxiv.2407.14649 preprint EN arXiv (Cornell University) 2024-07-19

Deep neural networks (DNNs) are easily fooled by adversarial perturbations that are imperceptible to humans. Adversarial training, a process where adversarial examples are added to the training set, is the current state-of-the-art defense against such attacks, but it lowers the model's accuracy on clean inputs, is computationally expensive, and offers less robustness to natural noise. In contrast, energy-based models (EBMs), which were designed for efficient implementation in neuromorphic hardware and physical systems, incorporate...

10.48550/arxiv.2401.11543 preprint EN other-oa arXiv (Cornell University) 2024-01-01
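The adversarial-training baseline the abstract contrasts with rests on attacks such as the fast gradient sign method (FGSM); here is a minimal PyTorch version on a stand-in model and data.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Linear(10, 2))   # toy classifier
x = torch.randn(4, 10, requires_grad=True)
y = torch.tensor([0, 1, 0, 1])

loss = F.cross_entropy(model(x), y)
loss.backward()

eps = 0.1
x_adv = (x + eps * x.grad.sign()).detach()   # one-step FGSM perturbation
print("clean loss:", loss.item(),
      "adversarial loss:", F.cross_entropy(model(x_adv), y).item())
```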

Recent model inversion attack algorithms permit adversaries to reconstruct a neural network's private training data just by repeatedly querying the network and inspecting its outputs. In this work, we develop a novel network architecture that leverages sparse-coding layers to obtain superior robustness to this class of attacks. Three decades of computer science research has studied sparse coding in the context of image denoising, object recognition, and adversarial misclassification settings, but to the best of our knowledge,...

10.48550/arxiv.2403.14772 preprint EN arXiv (Cornell University) 2024-03-21
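A skeleton of the attack class in question, under toy assumptions: gradient ascent on a free input to maximize a target-class score, using nothing but the network's outputs and gradients. The network below stands in for a trained model.

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10)
)
target = 3
x = torch.zeros(1, 64, requires_grad=True)   # reconstruction variable
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = -model(x)[0, target]              # ascend the target-class logit
    loss.backward()
    opt.step()

print("target logit after inversion:", model(x)[0, target].item())
```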

In this work we utilize generative adversarial networks (GANs) to synthesize realistic transformations for remote sensing imagery in the multispectral domain. Despite the apparent perceptual realism of the transformed images at a first glance, we show that a deep learning classifier can very easily be trained to differentiate between real and GAN-generated images, likely due to subtle but pervasive artifacts introduced by the GAN during the synthesis process. We also show that a low-amplitude attack can fool the aforementioned...

10.1117/12.2587753 article EN 2021-04-08
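A toy stand-in for the detection experiment: separate "real" from "GAN-like" images with a linear classifier over high-frequency energy, a crude proxy for the synthesis artifacts described above. The data is synthetic and the feature is an assumption, not the paper's detector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def hf_energy(img):
    """Mean spectral magnitude outside a low-frequency square."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    r = min(h, w) // 4
    f[h//2 - r:h//2 + r, w//2 - r:w//2 + r] = 0
    return np.abs(f).mean()

real = rng.normal(0, 1, (100, 32, 32))
fake = real + 0.3 * rng.normal(0, 1, (100, 32, 32))  # extra high-freq noise

X = np.array([[hf_energy(im)] for im in np.concatenate([real, fake])])
y = np.array([0] * 100 + [1] * 100)
print("training accuracy:", LogisticRegression().fit(X, y).score(X, y))
```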

Audio classification aims at recognizing audio signals, including speech commands or sound events. However, current audio classifiers are susceptible to perturbations and adversarial attacks. In addition, real-world audio classification tasks often suffer from limited labeled data. To help bridge these gaps, previous work developed neuro-inspired convolutional neural networks (CNNs) with sparse coding via the Locally Competitive Algorithm (LCA) in the first layer (i.e., LCANets) for computer vision. LCANets learn a...

10.48550/arxiv.2308.12882 preprint EN cc-by-nc-nd arXiv (Cornell University) 2023-01-01
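The LCA layer at the heart of LCANets can be written in a few lines: membrane potentials evolve under feed-forward drive and lateral inhibition, and a soft threshold produces the sparse code. The dictionary below is random rather than learned, and the constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
Phi = rng.normal(size=(100, 64))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm dictionary atoms

x = rng.normal(size=100)                  # input signal
b = Phi.T @ x                             # feed-forward drive
G = Phi.T @ Phi - np.eye(64)              # lateral inhibition (competition)

u = np.zeros(64)                          # membrane potentials
lam, tau = 0.5, 0.1
soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

for _ in range(300):
    a = soft(u)                           # sparse code via soft threshold
    u += tau * (b - u - G @ a)            # LCA membrane dynamics

print("active coefficients:", int((soft(u) != 0).sum()), "of 64")
```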

We compare the robustness of image classifiers based on state-of-the-art Deep Neural Networks (DNNs) with a model of cortical development using single-layer sparse coding. The comparison is based on the ability of the two distinct types of models to distinguish between faces of celebrities from the CelebA dataset and synthetic faces created by ProGAN, a multi-scale GAN trained on the same images. We examine the DNNs compared to sparse coding after the addition of universal adversarial perturbations (UAPs), which fool most or all of the DNN models we examined. Our results show that...

10.1109/aipr50011.2020.9425143 article EN 2020-10-13
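For orientation, a heavily simplified way to build a universal perturbation is to accumulate signed gradients of a single shared delta across many images; the actual UAP algorithm is more involved, and the model and data here are stand-ins.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
images = torch.randn(64, 1, 28, 28)                # stand-in image batch
labels = torch.randint(0, 10, (64,))

delta = torch.zeros(1, 1, 28, 28)                  # one perturbation for all
eps, step = 0.1, 0.01
for _ in range(10):
    d = delta.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images + d), labels)
    loss.backward()                                # gradient w.r.t. shared delta
    delta = (delta + step * d.grad.sign()).clamp(-eps, eps)

print("perturbation L_inf norm:", delta.abs().max().item())
```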