Gavin Taylor

ORCID: 0000-0002-3455-9430
Research Areas
  • Adversarial Robustness in Machine Learning
  • Stochastic Gradient Optimization Techniques
  • Neural Networks and Applications
  • Advanced Neural Network Applications
  • Reinforcement Learning in Robotics
  • Domain Adaptation and Few-Shot Learning
  • Adaptive Dynamic Programming Control
  • Machine Learning and ELM
  • Sparse and Compressive Sensing Techniques
  • Anomaly Detection Techniques and Applications
  • COVID-19 diagnosis using AI
  • Water resources management and optimization
  • Advanced Graph Neural Networks
  • Advanced Control Systems Optimization
  • Control Systems and Identification
  • Gaussian Processes and Bayesian Inference
  • Smart Grid Energy Management
  • Target Tracking and Data Fusion in Sensor Networks
  • Energy Harvesting in Wireless Networks
  • Advanced Multi-Objective Optimization Algorithms
  • Complex Network Analysis Techniques
  • Model Reduction and Neural Networks
  • Complex Systems and Decision Making
  • Privacy-Preserving Technologies in Data
  • Risk and Safety Analysis

United States Naval Academy
2015-2024

University of Maryland, College Park
2021

William Paterson University
2018

Duke University
2008-2010

Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effects on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization,...

10.48550/arxiv.1712.09913 preprint EN other-oa arXiv (Cornell University) 2017-01-01

Neural network training relies on our ability to find good minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range...

10.3929/ethz-b-000461393 article EN Neural Information Processing Systems 2018-02-15
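
A minimal sketch of the filter-normalized, random-direction visualization idea these two entries describe; the toy model, data, and function names below are assumptions for illustration, not the paper's released code.

```python
# Hypothetical sketch: evaluate the loss along one filter-normalized random
# direction around the trained weights, as described in the abstract above.
import torch
import torch.nn as nn

def filter_normalized_direction(model):
    """Random direction with each filter rescaled to match the norm of the
    corresponding filter in the trained weights (filter normalization)."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:  # conv / linear weights: normalize per output filter
            for i in range(p.size(0)):
                d[i] *= p[i].norm() / (d[i].norm() + 1e-10)
        else:            # biases / 1-D parameters: match the overall norm
            d *= p.norm() / (d.norm() + 1e-10)
        direction.append(d)
    return direction

def loss_along_direction(model, direction, loss_fn, data, targets, alphas):
    """Evaluate the loss at theta + alpha * direction for each alpha."""
    base = [p.detach().clone() for p in model.parameters()]
    losses = []
    with torch.no_grad():
        for alpha in alphas:
            for p, b, d in zip(model.parameters(), base, direction):
                p.copy_(b + alpha * d)
            losses.append(loss_fn(model(data), targets).item())
        for p, b in zip(model.parameters(), base):  # restore trained weights
            p.copy_(b)
    return losses

# Toy usage with a small fully connected net and random data.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
data, targets = torch.randn(64, 10), torch.randint(0, 2, (64,))
direction = filter_normalized_direction(model)
alphas = torch.linspace(-1.0, 1.0, steps=21)
print(loss_along_direction(model, direction, nn.CrossEntropyLoss(), data, targets, alphas))
```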

Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high cost of generating strong adversarial examples makes standard adversarial training impractical on large-scale problems like ImageNet. We present an algorithm that eliminates this overhead by recycling the gradient information computed when updating model parameters. Our "free" adversarial training algorithm achieves comparable robustness to PGD adversarial training on the CIFAR-10 and CIFAR-100 datasets at negligible additional...

10.48550/arxiv.1904.12843 preprint EN other-oa arXiv (Cornell University) 2019-01-01
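
A compact sketch of the gradient-recycling idea in the abstract above: each minibatch is replayed a few times, and the same backward pass that updates the weights also updates the adversarial perturbation. The replay count, step sizes, toy model, and function names are illustrative assumptions.

```python
# Hypothetical sketch of "free" adversarial training: the backward pass that
# produces parameter gradients is reused to update the input perturbation.
import torch
import torch.nn as nn

def free_adversarial_train(model, loader, epochs=10, replays=4, epsilon=8/255, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    delta = None  # perturbation carried across minibatches
    for _ in range(epochs):
        for x, y in loader:
            if delta is None or delta.shape != x.shape:
                delta = torch.zeros_like(x)
            for _ in range(replays):          # replay the same minibatch
                delta.requires_grad_(True)
                loss = loss_fn(model(x + delta), y)
                opt.zero_grad()
                loss.backward()               # one backward pass...
                opt.step()                    # ...updates the weights
                grad = delta.grad.detach()    # ...and is recycled for the attack
                delta = (delta.detach() + epsilon * grad.sign()).clamp(-epsilon, epsilon)
    return model

# Toy usage on random data standing in for CIFAR-10 images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
data = [(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))) for _ in range(5)]
free_adversarial_train(model, data, epochs=1)
```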

We show that linear value-function approximation is equivalent to a form of linear model approximation. We then derive a relationship between the model-approximation error and the Bellman error, and show how this relationship can guide feature selection for model improvement and/or value-function improvement. We also show how these results give insight into the behavior of existing feature-selection algorithms.

10.1145/1390156.1390251 article EN 2008-01-01
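
A hedged reconstruction of the kind of relationship the abstract refers to, in assumed notation rather than the paper's own: for the linear fixed-point solution, the Bellman error splits into a reward-model error and a per-feature transition-model error.

```latex
% Illustrative decomposition (symbols assumed): Phi is the feature matrix,
% Pi the orthogonal projection onto span(Phi), R the reward vector, P the
% transition matrix, T the Bellman operator, and w the linear fixed-point
% weights satisfying Phi w = Pi (R + gamma P Phi w).
\[
  \underbrace{T(\Phi w) - \Phi w}_{\text{Bellman error of } \Phi w}
  \;=\;
  \underbrace{(R - \Pi R)}_{\text{reward-model error}}
  \;+\;
  \gamma \underbrace{\bigl(P\Phi - \Pi P\Phi\bigr)}_{\text{per-feature transition error}} \, w ,
\]
% so features that shrink the model-approximation errors on the right also
% shrink the Bellman error, which is how model error can guide feature selection.
```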

Clean-label poisoning attacks inject innocuous-looking (and "correctly" labeled) poison images into training data, causing a model to misclassify a targeted image after being trained on this data. We consider transferable poisoning attacks that succeed without access to the victim network's outputs, architecture, or (in some cases) training data. To achieve this, we propose a new "polytope attack" in which poison images are designed to surround the targeted image in feature space. We also demonstrate that using Dropout during poison creation helps to enhance the transferability of this attack....

10.48550/arxiv.1905.05897 preprint EN public-domain arXiv (Cornell University) 2019-01-01

Data poisoning -- the process by which an attacker takes control of a model by making imperceptible changes to a subset of the training data -- is an emerging threat in the context of neural networks. Existing attacks for poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought of as intractable for deep models. We propose MetaPoison, a first-order method that approximates the bilevel problem via meta-learning and crafts poisons that fool neural networks. MetaPoison is effective: it outperforms previous...

10.48550/arxiv.2004.00225 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Data augmentation helps neural networks generalize better by enlarging the training set, but it remains an open question how to effectively augment graph data to enhance the performance of GNNs (Graph Neural Networks). While most existing graph regularizers focus on manipulating topological structures by adding/removing edges, we offer a method that augments node features for better performance. We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial...

10.1109/cvpr52688.2022.00016 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01
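
A rough sketch of gradient-based node-feature augmentation as described above, written against a generic PyTorch-style GNN; the ascent steps, step size, toy model, and function names are illustrative assumptions rather than the released FLAG implementation.

```python
# Hypothetical sketch of adversarial feature augmentation for a GNN: node
# features are perturbed by a few gradient-ascent steps on the loss while
# the weight gradients from each step are averaged into one update.
import torch
import torch.nn as nn

def flag_style_step(gnn, optimizer, x, edge_index, y, loss_fn,
                    ascent_steps=3, step_size=1e-3):
    """One training step with iterative adversarial perturbation of node features."""
    perturb = torch.zeros_like(x).uniform_(-step_size, step_size)
    perturb.requires_grad_(True)

    optimizer.zero_grad()
    loss = loss_fn(gnn(x + perturb, edge_index), y) / ascent_steps
    loss.backward()

    for _ in range(ascent_steps - 1):
        # ascend on the perturbation while weight gradients keep accumulating
        perturb = perturb.detach() + step_size * perturb.grad.detach().sign()
        perturb.requires_grad_(True)
        loss = loss_fn(gnn(x + perturb, edge_index), y) / ascent_steps
        loss.backward()

    optimizer.step()
    return loss.item()

# Toy usage: a "GNN" that ignores edges, plus random node features and labels.
class ToyGNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(16, 4)
    def forward(self, x, edge_index):
        return self.lin(x)

gnn = ToyGNN()
opt = torch.optim.Adam(gnn.parameters(), lr=1e-2)
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 300))
y = torch.randint(0, 4, (100,))
print(flag_style_step(gnn, opt, x, edge_index, y, nn.CrossEntropyLoss()))
```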

A recent surge in research into kernelized approaches to reinforcement learning has sought to bring the benefits of kernelized machine learning techniques to reinforcement learning. Kernelized reinforcement learning techniques are fairly new, and different authors have approached the topic with different assumptions and goals. Neither a unifying view nor an understanding of the pros and cons of these approaches has yet emerged. In this paper, we offer a unifying view of kernelized value function approximation for reinforcement learning. We show that, except for regularization, kernelized LSTD (KLSTD) is equivalent to a model-based approach that uses kernelized regression to find approximate reward and transition...

10.1145/1553374.1553504 article EN 2009-06-14

With the growing importance of large network models and enormous training datasets, GPUs have become increasingly necessary to train neural networks. This is largely because conventional optimization algorithms rely on stochastic gradient methods that don't scale well to large numbers of cores in a cluster setting. Furthermore, the convergence of all gradient methods, including batch methods, suffers from common problems like saturation effects, poor conditioning, and saddle points. This paper explores an unconventional training method that uses...

10.48550/arxiv.1605.02026 preprint EN other-oa arXiv (Cornell University) 2016-01-01
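
The abstract is cut off where it would name the method; as a hedged sketch in assumed notation, an ADMM-style approach to this problem typically splits the network into per-layer variables so that no subproblem requires backpropagating gradients through the whole network:

```latex
% Illustrative variable splitting (notation assumed): the activations a_l and
% pre-activations z_l become separate optimization variables, so each update
% is a small least-squares or pointwise problem rather than a gradient step.
\[
  \min_{\{W_l\},\{a_l\},\{z_l\}} \; \ell(z_L, y)
  \quad \text{s.t.} \quad
  z_l = W_l\, a_{l-1}, \;\; l = 1,\dots,L, \qquad
  a_l = h_l(z_l), \;\; l = 1,\dots,L-1,
\]
% with a_0 the input data and h_l the layer-l activation; the constraints are
% relaxed into penalty/Lagrangian terms that are minimized one block at a time.
```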

Data poisoning attacks modify training data to maliciously control a model trained on such data. In this work, we focus on targeted poisoning attacks, which cause the reclassification of an unmodified test image and as such breach model integrity. We consider a particularly malicious poisoning attack that is both "from scratch" and "clean label", meaning we analyze an attack that successfully works against new, randomly initialized models, is nearly imperceptible to humans, all while perturbing only a small fraction of the training data. Previous poisoning attacks against deep neural networks in...

10.48550/arxiv.2009.02276 preprint EN other-oa arXiv (Cornell University) 2020-01-01

The increasing complexity of deep learning architectures is resulting in training times requiring weeks or even months. This slowdown is due in part to "vanishing gradients," in which the gradients used by back-propagation are extremely large for weights connecting deep layers (layers near the output layer) and extremely small for shallow layers (near the input layer); this results in slow learning in the shallow layers. Additionally, it has also been shown that in highly non-convex problems, such as deep neural networks, there is a proliferation of high-error, low-curvature saddle points,...

10.1109/icmla.2015.113 article EN 2015-12-01

The alternating direction method of multipliers (ADMM) is commonly used for distributed model fitting problems, but its performance and reliability depend strongly on user-defined penalty parameters. We study distributed ADMM methods that boost performance by using different fine-tuned algorithm parameters on each worker node. We present a O(1/k) convergence rate for adaptive ADMM methods with node-specific parameters, and propose adaptive consensus ADMM (ACADMM), which automatically tunes parameters without user oversight.

10.48550/arxiv.1706.02869 preprint EN other-oa arXiv (Cornell University) 2017-01-01
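
For concreteness, a sketch of consensus ADMM with node-specific penalties on a distributed least-squares problem; the residual-balancing adaptation rule used here is a generic illustration, not the specific ACADMM tuning scheme, and all names are assumptions.

```python
# Hypothetical sketch of consensus ADMM with per-node adaptive penalties for
# a distributed least-squares problem: min_x sum_i 0.5 * ||A_i x - b_i||^2.
import numpy as np

def consensus_admm_adaptive(A_list, b_list, iters=100, rho0=1.0):
    n, N = A_list[0].shape[1], len(A_list)
    rho = [rho0] * N                      # node-specific penalty parameters
    x = [np.zeros(n) for _ in range(N)]   # local primal variables
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
    z = np.zeros(n)                       # consensus variable
    for _ in range(iters):
        z_old = z.copy()
        # local x-updates (closed form for least squares)
        for i, (A, b) in enumerate(zip(A_list, b_list)):
            x[i] = np.linalg.solve(A.T @ A + rho[i] * np.eye(n),
                                   A.T @ b + rho[i] * (z - u[i]))
        # consensus update is a penalty-weighted average when rho differs per node
        z = sum(rho[i] * (x[i] + u[i]) for i in range(N)) / sum(rho)
        for i in range(N):
            u[i] = u[i] + x[i] - z
            # adapt each node's penalty by balancing primal/dual residuals;
            # the scaled dual must be rescaled whenever rho changes
            primal = np.linalg.norm(x[i] - z)
            dual = rho[i] * np.linalg.norm(z - z_old)
            if primal > 10 * dual:
                rho[i] *= 2.0; u[i] /= 2.0
            elif dual > 10 * primal:
                rho[i] /= 2.0; u[i] *= 2.0
    return z

# Toy usage: three workers, each holding a slice of a random regression problem.
rng = np.random.default_rng(0)
A_list = [rng.standard_normal((20, 5)) for _ in range(3)]
x_true = rng.standard_normal(5)
b_list = [A @ x_true + 0.01 * rng.standard_normal(20) for A in A_list]
print(consensus_admm_adaptive(A_list, b_list))
```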

In order to make better use of deep reinforcement learning in the creation of sensing policies for resource-constrained IoT devices, we present and study a novel reward function based on the Fisher information value. This reward function enables sensor devices to learn to spend their available energy on measurements at otherwise unpredictable moments, while conserving energy at times when measurements would provide little new information. This is a highly general approach, which allows for a wide range of use cases without significant human design effort or...

10.1145/3410992.3411001 preprint EN 2020-10-06
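
A small sketch of the idea of rewarding measurements by their Fisher information, made concrete with a scalar random-walk signal tracked by a Kalman filter; the environment, the uncertainty weighting in the reward, and all names are illustrative assumptions rather than the paper's exact formulation.

```python
# Hypothetical sketch: reward an energy-constrained sensing agent in proportion
# to the information a measurement contributes about a drifting quantity.
import numpy as np

class SensingEnv:
    def __init__(self, drift_var=1.0, meas_var=0.5, energy=20, horizon=100):
        self.drift_var, self.meas_var = drift_var, meas_var
        self.energy, self.horizon = energy, horizon

    def reset(self):
        self.t, self.battery = 0, self.energy
        self.belief_var = 1.0          # uncertainty of the tracked estimate
        return np.array([self.belief_var, self.battery], dtype=np.float32)

    def step(self, measure: bool):
        # the hidden signal drifts, so the belief variance grows each step
        self.belief_var += self.drift_var
        reward = 0.0
        if measure and self.battery > 0:
            self.battery -= 1
            # Fisher information of one Gaussian measurement is 1 / meas_var;
            # weighting it by the prior uncertainty makes measurements taken
            # when the belief is stale worth more than redundant ones.
            reward = self.belief_var / self.meas_var
            # Kalman-style variance update after incorporating the measurement
            self.belief_var = 1.0 / (1.0 / self.belief_var + 1.0 / self.meas_var)
        self.t += 1
        done = self.t >= self.horizon
        obs = np.array([self.belief_var, self.battery], dtype=np.float32)
        return obs, reward, done

# Toy usage: a fixed policy that measures whenever uncertainty passes a threshold.
env = SensingEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    obs, r, done = env.step(measure=obs[0] > 3.0)
    total += r
print("return:", total)
```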

Reinforcement learning (RL) is capable of managing wireless, energy-harvesting IoT nodes by solving the problem of autonomous management in non-stationary, resource-constrained settings. We show that state-of-the-art policy-gradient approaches to RL are appropriate for this domain and that they outperform previous approaches. Due to their ability to model continuous observation and action spaces, as well as their improved function approximation capability, the new approaches are able to solve harder problems, permitting reward functions that are better...

10.1109/saso.2019.00015 preprint EN 2019-06-01

Approximate dynamic programming has been used successfully in a large variety of domains, but it relies on a small set of provided approximation features to calculate solutions reliably. Large and rich sets of features can cause existing algorithms to overfit because of a limited number of samples. We address this shortcoming using $L_1$ regularization in approximate linear programming. Because the proposed method can automatically select the appropriate richness of features, its performance does not degrade with an increasing...

10.48550/arxiv.1005.1860 preprint EN other-oa arXiv (Cornell University) 2010-01-01
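
A hedged reconstruction of the formulation the abstract points to, with assumed symbols rather than the paper's own notation: an approximate linear program over feature weights, with an L1 budget that limits how much of a rich feature set the solution can use.

```latex
% Illustrative L1-regularized approximate linear program (notation assumed):
% rho is a state-relevance weighting, Phi the feature matrix, R_a and P_a the
% reward and transition model for action a, and psi the regularization budget.
\[
  \min_{w} \;\; \rho^{\top} \Phi w
  \quad \text{s.t.} \quad
  \Phi w \;\ge\; R_a + \gamma P_a \Phi w \;\; \text{for every action } a,
  \qquad \|w\|_1 \le \psi,
\]
% so the value estimate Phi w stays above the Bellman backup for every action
% while the L1 budget psi controls how many features effectively enter the
% solution, which is what keeps rich feature sets from causing overfitting.
```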