Frank L. Lewis

ORCID: 0000-0003-4074-1615
Research Areas
  • Adaptive Control of Nonlinear Systems
  • Adaptive Dynamic Programming Control
  • Distributed Control Multi-Agent Systems
  • Reinforcement Learning in Robotics
  • Neural Networks Stability and Synchronization
  • Advanced Control Systems Optimization
  • Stability and Control of Uncertain Systems
  • Neural Networks and Applications
  • Iterative Learning Control Systems
  • Control Systems and Identification
  • Petri Nets in System Modeling
  • Microgrid Control and Optimization
  • Nonlinear Dynamics and Pattern Formation
  • Fault Detection and Control Systems
  • Mechanical Circulatory Support Devices
  • Smart Grid Energy Management
  • Frequency Control in Power Systems
  • Control and Dynamics of Mobile Robots
  • Fuzzy Logic and Control Systems
  • Scheduling and Optimization Algorithms
  • Dynamics and Control of Mechanical Systems
  • Control and Stability of Dynamical Systems
  • Robotic Path Planning Algorithms
  • Numerical Methods for Differential Equations
  • Mathematical and Theoretical Epidemiology and Ecology Models

The University of Texas at Arlington
2016-2025

Robotics Research (United States)
2012-2022

Botanical Research Institute of Texas
2014-2022

University of California, Berkeley
2021

National University of Singapore
2021

Northeastern University
2014-2020

Mitsubishi Electric (Japan)
2020

New York University
2018-2019

Yan'an University
2018-2019

Guangdong University of Technology
2018-2019

Contents: Equations of Motion · Building the Aircraft Model · Basic Analytical and Computational Tools · Dynamics · Classical Design Techniques · Modern Robustness Multivariable Frequency-Domain · Digital Control · Appendices · Index.

10.1108/aeat.2004.12776eae.001 article EN Aircraft Engineering and Aerospace Technology 2004-10-01

Living organisms learn by acting on their environment, observing the resulting reward stimulus, and adjusting their actions accordingly to improve the reward. This action-based or reinforcement learning can capture notions of optimal behavior occurring in natural systems. We describe mathematical formulations for a practical implementation method known as adaptive dynamic programming. These give us insight into the design of controllers for man-made engineered systems that both learn and exhibit optimal behavior.

10.1109/mcas.2009.933854 article EN IEEE Circuits and Systems Magazine 2009-01-01
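
As background for this article's topic, a minimal statement of the discrete-time Bellman equations that adaptive dynamic programming approximates is sketched below; the dynamics f, stage cost r, and discount factor γ are generic symbols assumed for illustration, not quantities defined in the article.

```latex
% Discrete-time optimal control setting assumed for illustration:
% dynamics x_{k+1} = f(x_k, u_k), stage cost r(x_k, u_k), discount factor \gamma.
\begin{align}
V^{\mu}(x_k) &= r\bigl(x_k, \mu(x_k)\bigr) + \gamma\, V^{\mu}(x_{k+1}), \\
V^{*}(x_k)   &= \min_{u_k}\bigl[\, r(x_k, u_k) + \gamma\, V^{*}\!\bigl(f(x_k, u_k)\bigr)\bigr], \\
\mu^{*}(x_k) &= \arg\min_{u_k}\bigl[\, r(x_k, u_k) + \gamma\, V^{*}\!\bigl(f(x_k, u_k)\bigr)\bigr].
\end{align}
```

Adaptive dynamic programming approximates V and μ online, typically with critic and actor approximators, rather than solving these equations in closed form.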

10.1007/bf01600184 article EN Circuits Systems and Signal Processing 1986-03-01

A multilayer neural-net (NN) controller for a general serial-link rigid robot arm is developed. The structure of the NN controller is derived using a filtered error/passivity approach. No off-line learning phase is needed for the proposed controller, and the weights are easily initialized. The nonlinear nature of the NN, plus functional reconstruction inaccuracies and disturbances, mean that standard delta-rule backpropagation tuning does not suffice for closed-loop dynamic control. Novel online weight tuning algorithms, including correction terms to the delta rule plus an added...

10.1109/72.485674 article EN IEEE Transactions on Neural Networks 1996-03-01
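
A representative structure for this class of NN arm controllers is sketched below in standard notation; the specific gains and tuning law are illustrative assumptions, not the exact algorithm of the paper.

```latex
% Filtered tracking error r = \dot{e} + \Lambda e, with e = q_d - q.
\begin{align}
\tau &= \hat{W}^{T}\sigma\!\bigl(\hat{V}^{T}x\bigr) + K_{v}\, r - v(t), \\
\dot{\hat{W}} &= F\,\hat{\sigma}\, r^{T} - \kappa F \lVert r\rVert\, \hat{W},
\end{align}
% where \hat{W}, \hat{V} are the NN weight estimates, v(t) is a robustifying
% signal, and the -\kappa F \lVert r\rVert \hat{W} term is the correction to
% delta-rule/backpropagation tuning that keeps the weights bounded without an
% off-line learning phase.
```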

Convergence of the value-iteration-based heuristic dynamic programming (HDP) algorithm is proven in the case of general nonlinear systems. That is, it is shown that HDP converges to the optimal control and the optimal value function that solves the Hamilton-Jacobi-Bellman equation appearing in infinite-horizon discrete-time (DT) nonlinear optimal control. It is assumed that, at each iteration, the value and action update equations can be exactly solved. The following two standard neural networks (NN) are used: a critic NN is used to approximate the value function, whereas an...

10.1109/tsmcb.2008.926614 article EN IEEE Transactions on Systems Man and Cybernetics Part B (Cybernetics) 2008-07-24
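
To make the value-iteration idea concrete, the sketch below specializes it to a discrete-time LQR problem, where the value function is exactly quadratic and no NN approximation is needed; the system matrices are assumed toy values, not taken from the paper.

```python
import numpy as np

# Value-iteration HDP specialized to a discrete-time LQR problem, where the
# value function is exactly quadratic, V_i(x) = x' P_i x.  This is an
# illustrative special case, not the paper's general nonlinear algorithm.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed example dynamics x_{k+1} = A x_k + B u_k
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

P = np.zeros((2, 2))                     # V_0 = 0, as in value iteration
for i in range(500):
    # Policy update: u = -K x minimizes x'Qx + u'Ru + V_i(Ax + Bu)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    # Value update: V_{i+1}(x) = x'Qx + (Kx)'R(Kx) + V_i((A - BK)x)
    P = Q + K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K)

print("Converged value kernel P:\n", P)  # approaches the DT algebraic Riccati solution
print("Feedback gain K:\n", K)
```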

This article describes the use of principles of reinforcement learning to design feedback controllers for discrete- and continuous-time dynamical systems that combine features of adaptive control and optimal control. Adaptive control [1], [2] and optimal control [3] represent different philosophies for designing feedback controllers. Optimal controllers are normally designed offline by solving Hamilton-Jacobi-Bellman (HJB) equations, for example, the Riccati equation, using complete knowledge of the system dynamics. Determining optimal control policies for nonlinear systems requires the offline solution...

10.1109/mcs.2012.2214134 article EN IEEE Control Systems 2012-11-16
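
The offline design route mentioned here can be summarized by the continuous-time HJB equation and its linear-quadratic special case; the notation below is the standard one and is assumed for illustration.

```latex
% Continuous-time optimal control problem assumed for illustration:
% \dot{x} = f(x) + g(x)u,  cost  J = \int_0^\infty \bigl( Q(x) + u^{T}Ru \bigr)\,dt.
\begin{align}
0 &= Q(x) + \nabla V^{*T}(x)\, f(x)
    - \tfrac{1}{4}\,\nabla V^{*T}(x)\, g(x) R^{-1} g^{T}(x)\, \nabla V^{*}(x), \\
u^{*}(x) &= -\tfrac{1}{2} R^{-1} g^{T}(x)\, \nabla V^{*}(x).
\end{align}
% For linear dynamics \dot{x} = Ax + Bu and Q(x) = x^{T}Qx this reduces to the
% algebraic Riccati equation  A^{T}P + PA + Q - PBR^{-1}B^{T}P = 0,  with
% V^{*}(x) = x^{T}Px and u^{*} = -R^{-1}B^{T}Px.
```

Reinforcement learning methods instead learn such solutions online from measured data, without full knowledge of the dynamics.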

This technical note studies the synchronization of identical general linear systems on a digraph containing a spanning tree. A leader node or command generator is considered, which generates the desired tracking trajectory. A framework for cooperative tracking control is proposed, including full state feedback control, observer design, and dynamic output feedback control. The classical system theory notion of duality is extended to networked systems. It is shown that unbounded synchronization regions that achieve synchronization on arbitrary digraphs containing a spanning tree can be...

10.1109/tac.2011.2139510 article EN IEEE Transactions on Automatic Control 2011-04-08
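
A minimal numerical sketch of cooperative tracking of a leader over a directed spanning tree is given below; the agent model, graph, gains, and feedback matrix K are illustrative assumptions (the note itself designs K via a Riccati-based construction).

```python
import numpy as np

# Minimal sketch (assumed example, not the note's construction): cooperative
# tracking of a leader, node 0, by followers 1..4 over a directed spanning
# tree, with identical linear agents  x_i' = A x_i + B u_i  and the distributed
# feedback  u_i = c K [ sum_j a_ij (x_j - x_i) + g_i (x_0 - x_i) ].
A = np.array([[0.0, 1.0], [0.0, 0.0]])     # double-integrator agents (assumed)
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.5]])                 # assumed stabilizing gain
c = 2.0                                    # assumed coupling gain

a = np.zeros((5, 5))                       # directed chain 0 -> 1 -> 2 -> 3 -> 4
for i in range(2, 5):
    a[i, i - 1] = 1.0                      # follower i listens to follower i-1
g = np.array([0.0, 1.0, 0.0, 0.0, 0.0])    # only follower 1 observes the leader

dt, steps = 0.01, 3000
x = np.random.randn(5, 2)                  # row i is agent i's state; leader input is zero
for _ in range(steps):
    u = np.zeros((5, 1))
    for i in range(1, 5):
        err = sum(a[i, j] * (x[j] - x[i]) for j in range(5)) + g[i] * (x[0] - x[i])
        u[i] = c * (K @ err)
    x = x + dt * (x @ A.T + u @ B.T)       # forward-Euler integration

print("leader-tracking errors:", np.linalg.norm(x[1:] - x[0], axis=1))  # near zero
```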

A cooperative control paradigm is used to establish a distributed secondary/primary control framework for dc microgrids. The conventional secondary control, which adjusts the voltage set point for the local droop mechanism, is replaced by a voltage regulator and a current regulator. A noise-resilient observer is introduced that uses neighbors' data to estimate the average voltage across the microgrid. The voltage regulator processes this estimation and generates a correction term to adjust the local voltage set point. This adjustment maintains the microgrid voltage at the level desired by the tertiary control. The current regulator compares...

10.1108/tpel.2014.2324579 article EN IEEE Transactions on Power Electronics 2014-05-16
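
The averaging idea behind the observer can be illustrated with a small consensus sketch; the graph, gains, and voltage values below are assumed, and the static-measurement case is shown rather than the paper's dynamic, noise-resilient estimator.

```python
import numpy as np

# Minimal sketch of a distributed averaging observer of the kind described:
# each converter estimates the microgrid-wide average voltage using only its
# own measurement and its neighbors' estimates (assumed graph and gains).
adj = np.array([[0, 1, 0, 1],              # assumed ring communication graph
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)

v_meas = np.array([47.8, 48.2, 48.5, 47.9])  # local bus voltage measurements (V)
v_avg_est = v_meas.copy()                    # each node seeds its estimate locally

dt, gain = 0.001, 50.0
for _ in range(20000):
    # consensus term pulls each estimate toward its neighbors' estimates
    consensus = adj @ v_avg_est - adj.sum(axis=1) * v_avg_est
    v_avg_est = v_avg_est + dt * gain * consensus

print("true average :", v_meas.mean())
print("estimates    :", v_avg_est)           # all nodes converge to the average
```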

This paper proposes a secondary voltage control of microgrids based on the distributed cooperative control of multi-agent systems. The proposed control is fully distributed; each distributed generator only requires its own information and the information of some neighbors. The distributed structure obviates the requirements for a central controller and a complex communication network which, in turn, improves the system reliability. Input-output feedback linearization is used to convert the secondary voltage control problem to a linear second-order tracker synchronization problem. The control parameters can be tuned to obtain...

10.1109/tpwrs.2013.2247071 article EN IEEE Transactions on Power Systems 2013-03-06
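
In outline, the design turns each distributed generator's voltage dynamics into a double integrator via input-output feedback linearization and then applies a distributed second-order tracking synchronization law; a representative form (standard notation, assumed here rather than quoted from the paper) is:

```latex
% After input--output feedback linearization each DG's output voltage obeys
% \ddot{y}_i = v_i.  A distributed second-order synchronization law is then
\begin{equation}
v_i = -c\Bigl[\sum_{j \in N_i} a_{ij}\,(y_i - y_j) + g_i\,(y_i - y_{\mathrm{ref}})\Bigr]
      - c\Bigl[\sum_{j \in N_i} a_{ij}\,(\dot{y}_i - \dot{y}_j) + g_i\,\dot{y}_i\Bigr],
\end{equation}
% with a_{ij} the communication weights, g_i > 0 only for DGs that receive the
% reference, and c a coupling gain chosen to guarantee synchronization.
```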

This paper reviews the current state of the art on reinforcement learning (RL)-based feedback control solutions to optimal regulation and tracking of single and multiagent systems. Existing RL solutions to both problems, as well as to graphical games, will be reviewed. RL methods learn the solution to optimal control and game problems online, using measured data along the system trajectories. We discuss Q-learning and the integral RL algorithm as core algorithms for discrete-time (DT) and continuous-time (CT) systems, respectively. Moreover, we discuss a new direction of off-policy...

10.1109/tnnls.2017.2773458 article EN IEEE Transactions on Neural Networks and Learning Systems 2017-12-07
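
A compact model-free Q-learning example of the kind surveyed is sketched below for a discrete-time LQR problem; the plant, noise level, and basis are assumed toy choices used only to illustrate how the Q-function kernel is identified from data and used to improve the policy.

```python
import numpy as np

# Model-free Q-learning sketch for a discrete-time LQR problem (assumed toy
# example, not code from the paper).  For a linear policy u = -Kx the
# Q-function is quadratic, Q_K(x, u) = z' H z with z = [x; u], so H can be
# identified from data by least squares without knowing A or B.
np.random.seed(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # used only to generate data
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
n, m = 2, 1

def phi(z):
    """Quadratic basis: products z_i z_j for i <= j, off-diagonals doubled."""
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(len(z)) for j in range(i, len(z))])

K = np.zeros((m, n))                     # initial policy (stabilizing here since A is stable)
for _ in range(10):                      # policy-iteration sweeps
    Phi_rows, targets = [], []
    x = np.random.randn(n)
    for k in range(200):
        u = -K @ x + 0.5 * np.random.randn(m)        # probing noise for excitation
        x_next = A @ x + B @ u
        u_next = -K @ x_next                         # action the current policy would take
        cost = x @ Q @ x + u @ R @ u
        # Bellman equation for Q_K:  Q_K(x, u) - Q_K(x', -Kx') = cost
        Phi_rows.append(phi(np.concatenate([x, u])) - phi(np.concatenate([x_next, u_next])))
        targets.append(cost)
        x = np.random.randn(n) if (k + 1) % 50 == 0 else x_next
    theta, *_ = np.linalg.lstsq(np.array(Phi_rows), np.array(targets), rcond=None)
    H = np.zeros((n + m, n + m))                     # rebuild the symmetric kernel
    H[np.triu_indices(n + m)] = theta
    H = H + H.T - np.diag(np.diag(H))
    K = np.linalg.solve(H[n:, n:], H[n:, :n])        # policy improvement: K = H_uu^{-1} H_ux

print("learned feedback gain K:", K)                 # approaches the LQR-optimal gain
```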

A control structure that makes possible the integration of a kinematic controller and a neural network (NN) computed-torque controller for nonholonomic mobile robots is presented. A combined kinematic/torque control law is developed using backstepping, and stability is guaranteed by Lyapunov theory. This control algorithm can be applied to the three basic navigation problems: tracking a reference trajectory, path following, and stabilization about a desired posture. Moreover, the NN controller proposed in this work can deal with unmodeled bounded disturbances...

10.1109/72.701173 article EN IEEE Transactions on Neural Networks 1998-07-01
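
The outer kinematic loop in this architecture is typically of the following standard steering-law form (notation assumed here); the NN torque loop then forces the actual velocities to track v_c.

```latex
% Reference pose q_r = (x_r, y_r, \theta_r), reference velocities (v_r, \omega_r),
% tracking error expressed in the robot frame:
\begin{equation}
\begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix}
=
\begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_r - x \\ y_r - y \\ \theta_r - \theta \end{bmatrix},
\qquad
v_c =
\begin{bmatrix} v_r\cos e_3 + k_1 e_1 \\ \omega_r + k_2 v_r e_2 + k_3 v_r \sin e_3 \end{bmatrix},
\end{equation}
% with design gains k_1, k_2, k_3 > 0.  Backstepping then designs the torque
% input so that the actual velocities converge to v_c, with Lyapunov-based
% stability.
```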

A dynamical extension that makes possible the integration of a kinematic controller and a torque controller for nonholonomic mobile robots is presented. A combined kinematic/torque control law is developed using backstepping, and asymptotic stability is guaranteed by Lyapunov theory. Moreover, this control algorithm can be applied to the three basic navigation problems: tracking a reference trajectory, path following, and stabilization about a desired posture. The general structure for controlling a mobile robot that results can accommodate different control techniques...

10.1109/cdc.1995.479190 article EN 2002-11-19

A neural net (NN) controller for a general serial-link robot arm is developed. The NN has two layers so that linearity in the parameters holds, but the "net functional reconstruction error" and the disturbance input are taken as nonzero. The structure of the NN controller is derived using a filtered error/passivity approach, leading to new passivity properties. Online weight tuning algorithms, including a correction term to backpropagation plus an added robustifying signal, guarantee tracking as well as bounded weights. The outer loop...

10.1109/72.377975 article EN IEEE Transactions on Neural Networks 1995-05-01