Taylor J. Carpenter

ORCID: 0000-0002-2397-871X
Research Areas
  • Adversarial Robustness in Machine Learning
  • Fault Detection and Control Systems
  • Formal Methods in Verification
  • Anomaly Detection Techniques and Applications
  • Software Testing and Debugging Techniques
  • Context-Aware Activity Recognition Systems
  • Human-Automation Interaction and Safety
  • Real-Time Simulation and Control Systems
  • Robotics and Automated Systems
  • AI-based Problem Solving and Planning
  • Model Reduction and Neural Networks
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Intelligent Tutoring Systems and Adaptive Learning
  • Innovative Teaching and Learning Methods
  • Software Reliability and Analysis Research
  • Risk and Safety Analysis
  • Semantic Web and Ontologies
  • Bacillus and Francisella bacterial research
  • Advanced Neural Network Applications
  • Occupational Health and Safety Research
  • Cultural Heritage Management and Preservation
  • Social Robot Interaction and HRI
  • Online Learning and Analytics
  • Safety Systems Engineering in Autonomy
  • Explainable Artificial Intelligence (XAI)

University of Pennsylvania
2019-2022

California University of Pennsylvania
2020-2021

Institute of Electrical and Electronics Engineers
2020

Gorgias Press (United States)
2020

Vrije Universiteit Brussel
2020

CHI Systems (United States)
2017-2018

Rochester Institute of Technology
2015-2017

This paper describes a verification case study on an autonomous racing car with a neural network (NN) controller. Although several approaches have been proposed recently, they have only been evaluated on low-dimensional systems or in constrained environments. To explore the limits of existing approaches, we present a challenging benchmark in which the NN takes raw LiDAR measurements as input and outputs steering for the car. We train a dozen NNs using reinforcement learning (RL) and show that the state of the art can handle around 40...

10.1145/3365365.3382216 article EN 2020-04-22

This article addresses the problem of verifying the safety of autonomous systems with neural network (NN) controllers. We focus on NNs with sigmoid/tanh activations and use the fact that the sigmoid is the solution to a quadratic differential equation. This allows us to convert the NN into an equivalent hybrid system and cast the problem as a hybrid-system verification problem, which can be solved by existing tools. Furthermore, we improve the scalability of the proposed method by approximating the sigmoid with a Taylor series with worst-case error bounds. Finally, we provide an evaluation over four...

10.1145/3419742 article EN ACM Transactions on Embedded Computing Systems 2020-12-07
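The quadratic differential equation referenced in the abstract is the identity s'(x) = s(x)(1 − s(x)) satisfied by the sigmoid (tanh satisfies the analogous t'(x) = 1 − t(x)²). A minimal numerical check of that identity — an illustrative sketch, not code from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The sigmoid satisfies the quadratic ODE s'(x) = s(x) * (1 - s(x)).
# Verify with a central finite difference at several points.
h = 1e-6
for x in [-3.0, -1.0, 0.0, 1.0, 3.0]:
    numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
    analytic = sigmoid(x) * (1 - sigmoid(x))
    assert abs(numeric - analytic) < 1e-8
```

It is this closed-form derivative relation that lets the activation be modeled exactly by continuous dynamics inside a hybrid system.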

This report presents the results of a friendly competition for the formal verification of continuous and hybrid systems with artificial intelligence (AI) components. Specifically, machine learning (ML) components in cyber-physical systems (CPS) are considered, such as feedforward neural networks used as feedback controllers in closed loop — a class of systems classically known as intelligent control systems or, in more modern and specific terms, neural network control systems (NNCS). We broadly refer to this category as AI NNCS (AINNCS). The competition took place...

10.29007/kfk9 article EN EPiC series in computing 2021-12-06

Autonomous systems operating in uncertain environments under the effects of disturbances and noise can reach unsafe states even while using fine-tuned controllers and precise sensors and actuators. To provide safety guarantees on such systems during motion-planning operations, reachability analysis (RA) has been demonstrated to be a powerful tool. RA, however, suffers from computational complexity, especially when dealing with intricate systems characterized by high-order dynamics, making it hard to deploy for runtime...

10.1109/mra.2020.2981114 article EN publisher-specific-oa IEEE Robotics & Automation Magazine 2020-04-15

This report presents the results of a friendly competition for the formal verification of continuous and hybrid systems with artificial intelligence (AI) components. Specifically, machine learning (ML) components in cyber-physical systems (CPS) are considered, such as feedforward neural networks used as feedback controllers in closed loop — a class of systems classically known as intelligent control systems or, in more modern and specific terms, neural network control systems (NNCS). For future iterations, we broadly refer to this category as AI NNCS...

10.29007/rgv8 article EN EPiC series in computing 2019-06-05

This paper describes a verification case study on an autonomous racing car with a neural network (NN) controller. Although several approaches have been proposed over the last year, they have only been evaluated on low-dimensional systems or in constrained environments. To explore the limits of existing approaches, we present a challenging benchmark in which the NN takes raw LiDAR measurements as input and outputs steering for the car. We train a dozen NNs using two reinforcement learning algorithms and show that the state of the art can...

10.48550/arxiv.1910.11309 preprint EN other-oa arXiv (Cornell University) 2019-01-01

Closed-loop verification of cyber-physical systems with neural network controllers offers strong safety guarantees under certain assumptions. It is, however, difficult to determine whether these guarantees apply at run time because the assumptions may be violated. To predict violations in a verified system, we propose a three-step confidence composition (CoCo) framework for monitoring. First, we represent the sufficient condition as a propositional logical formula over the assumptions. Second, we build calibrated monitors...

10.1109/iccps54341.2022.00007 preprint EN 2022-05-01
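The composition step can be pictured as propagating per-monitor confidences through the propositional formula over the assumptions. The sketch below is illustrative only — the aggregation rules (product for conjunction, noisy-OR for disjunction, which assume monitor independence) are stand-ins, not CoCo's actual composition operators:

```python
# Hypothetical confidence composition over a propositional formula
# of assumptions; aggregation rules are illustrative, not the paper's.
def conf_and(confidences):
    # Probability that all assumptions hold, assuming independence.
    p = 1.0
    for c in confidences:
        p *= c
    return p

def conf_or(confidences):
    # Probability that at least one assumption holds (noisy-OR).
    p = 1.0
    for c in confidences:
        p *= (1.0 - c)
    return 1.0 - p

# Two calibrated monitors report confidence that their assumption holds;
# the verified guarantee requires both, so compose with conjunction.
composed = conf_and([0.9, 0.8])
```

A low composed confidence then serves as an early warning that the verification guarantee may no longer apply.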

Deep neural network (DNN) models have proven to be vulnerable to adversarial digital and physical attacks. In this paper, we propose a novel attack- and dataset-agnostic real-time detector for both types of adversarial inputs to DNN-based perception systems. In particular, the proposed detector relies on the observation that adversarial images are sensitive to certain label-invariant transformations. Specifically, to determine if an image has been adversarially manipulated, the detector checks whether the output of the target classifier on a given input changes significantly after...

10.1145/3450267.3450535 article EN 2021-04-01
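The detection idea — flag an input if the classifier's output shifts too much under a transformation that should not change the label — can be sketched as follows. The classifier, the transformation, and the threshold here are all illustrative stand-ins, not the paper's components:

```python
import numpy as np

def classify(image):
    # Stand-in classifier: a position-weighted score (illustrative only).
    w = np.linspace(0.0, 1.0, image.shape[1])
    score = float((image * w).sum() / image.size)
    logits = np.array([score, 1.0 - score])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def shift(image, pixels=2):
    # A label-invariant transformation: small horizontal translation.
    return np.roll(image, pixels, axis=1)

def is_adversarial(image, threshold=0.3):
    before = classify(image)
    after = classify(shift(image))
    # A large output change under a benign transformation is suspicious.
    return float(np.abs(before - after).sum()) > threshold
```

Because the check only queries the classifier's outputs, it needs no knowledge of the attack or the training set, matching the attack- and dataset-agnostic claim.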

Introduction: ICR has been used for the treatment of patients recovering from a myocardial infarction, cardiac surgery, percutaneous intervention, and/or stable angina. Pritikin is a comprehensive program whereby patients follow a structured diet and exercise program. The first outpatient program was implemented at Barnes-Jewish Hospital/Washington University School of Medicine. While patients domiciled at the Longevity Center show marked improvements in several cardiovascular disease (CVD) risk factors, the effects of the outpatient program are unknown....

10.1161/circ.139.suppl_1.p325 article EN Circulation 2019-03-05

While efforts to develop cognitive abilities for robots have made progress from the perspective of goal-directed task performance, research has shown that additional capabilities are needed to enable robots to interact, cooperate, and act as teammates with humans. In particular, robots need teamwork coordination knowledge and an ability to apply this knowledge within a model of context that is at least homologous to the models people use in reasoning about environmental interactions. The Context-Augmented Robotic Interface Layer (CARIL) provides...

10.1109/cogsima.2017.7929596 article EN 2017-03-01

This paper presents ModelGuard, a sampling-based approach to runtime model validation for Lipschitz-continuous models. Although techniques exist for the validation of many classes of models, the majority of these methods cannot be applied to the whole class, which includes neural network models. Additionally, existing techniques generally consider only white-box models. By taking a sampling-based approach, we can address black-box models represented by an input-output relationship and a Lipschitz constant. We show that by randomly sampling from the parameter space and evaluating the model, it is...

10.1016/j.ifacol.2021.08.471 article EN IFAC-PapersOnLine 2021-01-01
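The black-box setting described here — a model known only through an input-output oracle plus a Lipschitz constant — admits a simple consistency test: an observation can only have come from the model if it respects the Lipschitz bound against every sampled evaluation. The sketch below illustrates that idea; the names and the rejection rule are assumptions for illustration, not ModelGuard's actual algorithm:

```python
import random

def consistent(observation, samples, L):
    x_obs, y_obs = observation
    # Lipschitz continuity: |f(a) - f(b)| <= L * |a - b| must hold
    # between the observation and every sampled input-output pair.
    return all(abs(y_obs - y) <= L * abs(x_obs - x) for x, y in samples)

def validate(model, observation, L, n_samples=200, lo=-1.0, hi=1.0):
    random.seed(0)  # deterministic sampling for reproducibility
    samples = []
    for _ in range(n_samples):
        x = random.uniform(lo, hi)
        samples.append((x, model(x)))
    return consistent(observation, samples, L)

# A 1-Lipschitz model, one consistent and one inconsistent observation:
model = lambda x: 0.5 * x
ok = validate(model, (0.0, 0.0), L=1.0)
bad = validate(model, (0.0, 5.0), L=1.0)
```

Note that sampling can only ever refute consistency, never prove it — more samples tighten the test but a pass remains probabilistic.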

Deep neural network (DNN) models have proven to be vulnerable to adversarial digital and physical attacks. In this paper, we propose a novel attack- and dataset-agnostic real-time detector for both types of adversarial inputs to DNN-based perception systems. In particular, the proposed detector relies on the observation that adversarial images are sensitive to certain label-invariant transformations. Specifically, to determine if an image has been adversarially manipulated, the detector checks whether the output of the target classifier on a given input changes significantly after...

10.48550/arxiv.2002.09792 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Closed-loop verification of cyber-physical systems with neural network controllers offers strong safety guarantees under certain assumptions. It is, however, difficult to determine whether these guarantees apply at run time because the assumptions may be violated. To predict violations in a verified system, we propose a three-step confidence composition (CoCo) framework for monitoring. First, we represent the sufficient condition as a propositional logical formula over the assumptions. Second, we build calibrated monitors that evaluate...

10.48550/arxiv.2111.03782 preprint EN other-oa arXiv (Cornell University) 2021-01-01

This paper presents ModelGuard, a sampling-based approach to runtime model validation for Lipschitz-continuous models. Although techniques exist for the validation of many classes of models, the majority of these methods cannot be applied to the whole class of models, which includes neural network models. Additionally, existing techniques generally consider only white-box models. By taking a sampling-based approach, we can address black-box models represented by an input-output relationship and a Lipschitz constant. We show that by randomly sampling from the parameter space and evaluating the model,...

10.48550/arxiv.2104.15006 preprint EN other-oa arXiv (Cornell University) 2021-01-01