- Probabilistic and Robust Engineering Design
- Model Reduction and Neural Networks
- Advanced Control Systems Optimization
- Stochastic Processes and Financial Applications
- Markov Chains and Monte Carlo Methods
- Reinforcement Learning in Robotics
- Mathematical Biology Tumor Growth
- Robotic Path Planning Algorithms
- Gaussian Processes and Bayesian Inference
- Metaheuristic Optimization Algorithms Research
- Numerical Methods for Differential Equations
- Control Systems and Identification
- Adaptive Dynamic Programming Control
- Advanced Multi-Objective Optimization Algorithms
- Robot Manipulation and Learning
- Advanced Thermodynamics and Statistical Mechanics
- Gene Regulatory Network Analysis
- Soft Robotics and Applications
- Modular Robots and Swarm Intelligence
- Optimization and Search Problems
- Machine Learning and Algorithms
- Control and Stability of Dynamical Systems
- Iterative Learning Control Systems
- Optimization and Variational Analysis
- Hand Gesture Recognition Systems
GE Global Research (United States)
2023
Georgia Institute of Technology
2016-2021
General Electric (United States)
2020-2021
National Technical University of Athens
2014
Differential Dynamic Programming (DDP) has become a well-established method for unconstrained trajectory optimization. Despite its several applications in robotics and controls, however, a widely successful constrained version of the algorithm has yet to be developed. This paper builds upon penalty methods and active-set approaches towards designing a Dynamic Programming-based methodology for constrained optimal control. Regarding the former, our derivation employs Bellman's principle of optimality, by introducing a set of auxiliary slack...
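
A common way to combine the two ingredients mentioned above (a sketch of the general pattern, not necessarily the exact formulation used in the paper) is to convert each inequality constraint into an equality with a nonnegative slack variable and penalize its violation inside the running cost that the Bellman backup acts on:

```latex
g_j(x_k, u_k) \le 0
\;\;\Longrightarrow\;\;
g_j(x_k, u_k) + s_{k,j} = 0, \quad s_{k,j} \ge 0,
\qquad
\tilde{\ell}(x_k, u_k, s_k) = \ell(x_k, u_k)
  + \frac{\mu}{2} \sum_j \big( g_j(x_k, u_k) + s_{k,j} \big)^2
```

with the penalty weight \mu typically increased across outer iterations.
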
We present a trajectory optimization approach to reinforcement learning in continuous state and action spaces, called probabilistic differential dynamic programming (PDDP). Our method represents system dynamics using Gaussian processes (GPs) and performs local Dynamic Programming iteratively around nominal trajectories in belief spaces. Different from model-based policy search methods, PDDP does not require policy parameterization and learns time-varying control policies via successive forward-backward sweeps. A convergence analysis of the...
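
As a rough illustration of the modelling step described above (not the paper's implementation), the sketch below fits a Gaussian process with an RBF kernel to one-step transition data of a toy pendulum and returns a predictive mean and variance; the dynamics, function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, ell=0.5, sf=1.0):
    """Squared-exponential kernel between row-wise inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def pendulum_step(x, u, dt=0.05):
    """Toy pendulum dynamics, used here only to generate training data."""
    th, thd = x
    thdd = -9.81 * np.sin(th) + u
    return np.array([th + dt * thd, thd + dt * thdd])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 3))                       # inputs: (theta, theta_dot, u)
Y = np.array([pendulum_step(z[:2], z[2])[1] for z in X])   # target: next theta_dot
Y += 1e-2 * rng.standard_normal(len(Y))                    # observation noise

# Standard GP regression: posterior mean and variance at a test input.
sn2 = 1e-4
K = rbf_kernel(X, X) + sn2 * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))

x_star = np.array([[0.3, -0.2, 0.1]])
k_star = rbf_kernel(X, x_star)
mean = k_star.T @ alpha
v = np.linalg.solve(L, k_star)
var = rbf_kernel(x_star, x_star) - v.T @ v
print("predictive mean:", mean.ravel(), "variance:", var.ravel())
```
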
In this paper, we propose a complete methodology for deriving task-specific force closure grasps for multifingered robot hands under a wide range of uncertainties. Given a finite set of external disturbances representing the task to be executed, the concept of the Q distance is introduced in a novel way to determine an efficient grasp with a compatible hand posture (i.e., configuration and contact points). Our approach also takes into consideration the mechanical and geometric limitations imposed by the robotic design and the object...
In this paper we develop a novel optimal control framework for uncertain mechanical systems. Our work extends differential dynamic programming and handles uncertainty through generalized polynomial chaos (gPC) theory. The obtained scheme is able to influence the probabilistic evolution of nonlinear systems with stochastic model parameters. Its scalable, fast-converging nature plays a key role when dealing with gPC expansions in high-dimensional problems. Based on Lagrangian principles, we also prove that...
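
A minimal, self-contained illustration of the gPC idea referenced above (not the paper's algorithm): a quantity depending on a single Gaussian parameter is expanded in probabilists' Hermite polynomials, and the expansion coefficients immediately give its mean and variance. All names and constants below are illustrative.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Uncertain parameter: theta = mu + sigma * xi with xi ~ N(0, 1); values are illustrative.
mu, sigma = 1.0, 0.2
f = lambda theta: np.exp(theta)           # stand-in for a system response depending on theta

# Projection coefficients c_k = E[f(theta) He_k(xi)] / E[He_k(xi)^2], using probabilists'
# Hermite polynomials He_k and Gauss-HermiteE quadrature (weight exp(-xi^2 / 2)).
order = 6
nodes, weights = hermegauss(30)
weights = weights / np.sqrt(2.0 * np.pi)  # normalize quadrature to the standard normal law
coeffs = []
for k in range(order + 1):
    e_k = np.zeros(k + 1)
    e_k[k] = 1.0
    He_k = hermeval(nodes, e_k)
    coeffs.append(np.sum(weights * f(mu + sigma * nodes) * He_k) / math.factorial(k))

# Mean and variance follow directly from the expansion coefficients.
mean_gpc = coeffs[0]
var_gpc = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print("gPC   mean / variance:", mean_gpc, var_gpc)
print("exact mean / variance:", np.exp(mu + sigma**2 / 2),
      (np.exp(sigma**2) - 1.0) * np.exp(2 * mu + sigma**2))
```
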
We propose a sampling-based trajectory optimization methodology for constrained problems. We extend recent works on stochastic search to deal with box control constraints, as well as nonlinear state constraints, for discrete dynamical systems. Regarding the former, our strategy is to optimize over truncated, parameterized distributions of the control inputs. Furthermore, we show how non-smooth penalty functions can be incorporated into our framework to handle state constraints. Simulations on cartpole and quadcopter systems show that the approach...
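
The sketch below gives a generic flavour of such a sampling-based search, on a double integrator rather than the cartpole or quadcopter examples from the paper: control sequences are sampled from a Gaussian, clipped to the box limits (a simpler stand-in for the truncated distributions mentioned above), and a non-smooth penalty is added for a state constraint. Dynamics, limits and penalty weights are illustrative assumptions.

```python
import numpy as np

T, dt = 40, 0.1
u_min, u_max = -1.0, 1.0          # box control constraints
x_limit = 1.2                     # illustrative state constraint: |position| <= x_limit

def rollout_cost(U):
    """Quadratic tracking cost plus a non-smooth penalty for the state constraint."""
    x = np.zeros(2)               # double-integrator state (position, velocity)
    cost = 0.0
    for u in U:
        x = x + dt * np.array([x[1], u])
        cost += 0.1 * u**2 + 50.0 * max(0.0, abs(x[0]) - x_limit)
    cost += 100.0 * (x[0] - 1.0) ** 2 + 10.0 * x[1] ** 2   # reach position 1 and stop
    return cost

rng = np.random.default_rng(1)
mean, std = np.zeros(T), np.ones(T)
for it in range(50):
    U = rng.normal(mean, std, size=(256, T))
    U = np.clip(U, u_min, u_max)                 # respect the box constraints
    costs = np.array([rollout_cost(u) for u in U])
    elite = U[np.argsort(costs)[:32]]            # keep the best samples
    mean, std = elite.mean(0), elite.std(0) + 1e-3

print("final cost:", rollout_cost(np.clip(mean, u_min, u_max)))
```
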
In this paper a novel method towards solving the Stochastic Optimal Control problem is proposed, which is based on the combination of generalized Polynomial Chaos theory and the Differential Dynamic Programming framework. Utilizing the former allows us to handle a wide range of uncertainties, without having to rely on limiting assumptions regarding the form of stochasticity. In addition, the framework provides an iterative algorithm for finding optimal controls, which attains scalability and, under mild assumptions, fast convergence. Last but...
Differential Dynamic Programming (DDP) has become a well-established method for unconstrained trajectory optimization. Despite its several applications in robotics and controls, however, a widely successful constrained version of the algorithm has yet to be developed. This paper builds upon penalty methods and active-set approaches, towards designing a Dynamic Programming-based methodology for constrained optimal control. Regarding the former, our derivation employs Bellman's principle of optimality, by introducing a set of auxiliary slack...
We develop a discrete-time optimal control framework for systems evolving on Lie groups. Our article generalizes the original differential dynamic programming method, by employing a coordinate-free, Lie-theoretic approach for its derivation. A key element lies, specifically, in the use of quadratic expansion schemes for cost functions and dynamics defined on manifolds. The obtained algorithm iteratively optimizes local approximations of the control problem, until reaching a (sub)optimal solution. On the theoretical side, we also study...
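
One concrete ingredient of such Lie-theoretic derivations is updating an iterate on the group through the exponential map of a tangent (Lie algebra) perturbation, rather than adding vectors in local coordinates. The sketch below shows this retraction on SO(3); it illustrates the machinery only and is not the paper's algorithm.

```python
import numpy as np

def hat(w):
    """Map a 3-vector to the corresponding so(3) skew-symmetric matrix."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

# A DDP-like update on the group: perturb the nominal rotation R by a small
# tangent step delta (which a backward pass would normally compute).
R_nominal = exp_so3(np.array([0.3, -0.1, 0.2]))
delta = np.array([0.05, 0.0, -0.02])
R_new = R_nominal @ exp_so3(delta)

# The result stays on the manifold: R^T R = I and det(R) = 1.
print(np.allclose(R_new.T @ R_new, np.eye(3)), np.isclose(np.linalg.det(R_new), 1.0))
```
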
The majority of works on grasping consider both the object and the robot hand parameters to be accurately known and do not take into account the constraints imposed by the hand. In this paper, a complete methodology is proposed that handles the grasping problem under a wide range of uncertainties. Initially, we search for an acceptable hand posture that provides robustness against positioning inaccuracies and maximizes the ability to exert forces on the object. Subsequently, in order to secure grasp stability, we also deal with the determination...
In this paper we investigate whether the linearly solvable stochastic optimal control framework generalizes to the case of differential equations in infinite dimensional spaces. In particular, we show that the connection between the relative entropy-free energy relation and dynamic programming principles carries over to this setting. Our analysis is based on a generalization of the Feynman-Kac lemma for certain classes of diffusions with Hilbert space-valued Q-Wiener processes. We observe that the utilized information theoretic representation allows...
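
For context, the finite-dimensional version of the relation referred to above is the standard free energy-relative entropy duality; in the notation below, J is a path cost, \lambda > 0 a temperature, and \mathbb{P}, \mathbb{Q} the uncontrolled and controlled path measures:

```latex
-\lambda \log \mathbb{E}_{\mathbb{P}}\!\left[ e^{-J/\lambda} \right]
  \;=\; \inf_{\mathbb{Q} \ll \mathbb{P}}
  \Big\{ \mathbb{E}_{\mathbb{Q}}[J] \;+\; \lambda\, D_{\mathrm{KL}}\!\big(\mathbb{Q}\,\|\,\mathbb{P}\big) \Big\},
\qquad
d\mathbb{Q}^{*} \propto e^{-J/\lambda}\, d\mathbb{P}.
```

The paper studies how this duality carries over when the underlying dynamics take values in a Hilbert space.
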
Path Integral control theory yields a sampling-based methodology for solving stochastic optimal control problems. Motivated by its computational efficiency, we extend this framework to account for systems evolving on Lie groups. Our derivation relies on recursive mappings between system poses and the corresponding Lie algebra elements. This allows us to apply standard facts from stochastic calculus and obtain expressions analogous to those of the Euclidean case. The results imply that the method can be applied in a parameterization-free manner, even when...
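
The core sampling step shared by path integral methods is shown below in its plain Euclidean form for a scalar system (the paper's contribution is the extension to Lie groups, which this sketch does not attempt): trajectories are sampled around a nominal control, weighted by the exponentiated negative cost, and averaged. All constants are illustrative.

```python
import numpy as np

T, dt, lam = 30, 0.05, 1.0        # horizon, time step, temperature
rng = np.random.default_rng(2)

def cost(x_traj, u_seq):
    """Drive a scalar state to 1.0 with a small control-effort penalty."""
    return np.sum((x_traj - 1.0) ** 2) * dt + 0.01 * np.sum(u_seq ** 2) * dt

u_nom = np.zeros(T)
for it in range(20):
    eps = rng.standard_normal((512, T))              # control perturbations
    U = u_nom + eps
    # Roll out the scalar dynamics x_{k+1} = x_k + dt * u_k (x_0 = 0) for all samples.
    X = np.cumsum(dt * U, axis=1)
    S = np.array([cost(X[i], U[i]) for i in range(len(U))])
    w = np.exp(-(S - S.min()) / lam)                 # exponentiated-cost weights
    w /= w.sum()
    u_nom = u_nom + w @ eps                          # path-integral style update

print("terminal state with updated controls:", np.cumsum(dt * u_nom)[-1])
```
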
In this paper we develop a novel, discrete-time optimal control framework for mechanical systems with uncertain model parameters. We consider finite-horizon problems where the performance index depends on the statistical moments of the stochastic system. Our approach constitutes an extension of the original Differential Dynamic Programming method and handles uncertainty through generalized Polynomial Chaos (gPC) theory. The developed iterative scheme is capable of controlling the probabilistic evolution of the dynamic...
This paper develops a variational inference framework for the control of infinite dimensional stochastic systems. We employ a measure theoretic approach which relies on a generalization of Girsanov's theorem, as well as the relation between relative entropy and free energy. The derived scheme is applicable to a large class of stochastic systems and can be used for trajectory optimization and model predictive control. Our work opens up new research avenues at the intersection of control, information theory and dynamical systems...
We propose a novel methodology for stochastic trajectory optimization which is based on merging the theory of spectral expansions with Differential Dynamic Programming. Specifically, we employ polynomial chaos to handle parametric uncertainties and utilize the Karhunen-Loève transformation to represent stochastic forces. This allows us to build a generic framework and avoid relying on restrictive assumptions regarding the form of stochasticity. In addition, Differential Dynamic Programming provides an iterative algorithm for finding optimal controls...
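
For context on the Karhunen-Loève ingredient, the sketch below builds the classical KL expansion of Brownian motion on [0, 1], W(t) ≈ Σ_n Z_n √2 sin((n − ½)πt) / ((n − ½)π) with Z_n i.i.d. standard normal, and checks the sample variance against the exact Var[W(t)] = t. This is textbook material used only to illustrate the transformation, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
n_terms, n_samples = 50, 2000

# KL basis for Brownian motion on [0, 1]:
#   phi_n(t) = sqrt(2) sin((n - 1/2) pi t), eigenvalue lambda_n = 1 / ((n - 1/2) pi)^2.
n = np.arange(1, n_terms + 1)
freqs = (n - 0.5) * np.pi
phi = np.sqrt(2.0) * np.sin(np.outer(t, freqs))        # shape (len(t), n_terms)
sqrt_lam = 1.0 / freqs

Z = rng.standard_normal((n_samples, n_terms))          # i.i.d. N(0, 1) coefficients
W = (Z * sqrt_lam) @ phi.T                             # truncated KL samples, (n_samples, len(t))

# Sanity check: the sample variance of W(t) should be close to t across the grid.
print("max |Var[W(t)] - t| over grid:", np.max(np.abs(W.var(axis=0) - t)))
```
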
We develop a discrete-time optimal control framework for systems evolving on Lie groups. Our work generalizes the original Differential Dynamic Programming method, by employing a coordinate-free, Lie-theoretic approach for its derivation. A key element lies, specifically, in the use of quadratic expansion schemes for cost functions and dynamics defined on manifolds. The obtained algorithm iteratively optimizes local approximations of the control problem, until reaching a (sub)optimal solution. On the theoretical side, we also...
Systems involving Partial Differential Equations (PDEs) have recently become more popular among the machine learning community. However, prior methods usually treat infinite dimensional problems in finite dimensions with Reduced Order Models. This leads to committing to specific approximation schemes and the subsequent derivation of control laws. Additionally, prior work does not consider spatio-temporal descriptions of noise that realistically represent the stochastic nature of physical systems. In this paper we...
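
As a minimal picture of the Reduced Order Model route that the snippet above argues against committing to prematurely, the sketch below projects the 1D heat equation with homogeneous Dirichlet conditions onto a handful of sine modes, whose coefficients then evolve independently. Grid size, viscosity and mode count are illustrative assumptions.

```python
import numpy as np

nu, L_dom, n_grid, n_modes = 0.1, 1.0, 128, 5
x = np.linspace(0.0, L_dom, n_grid)
u0 = x * (1.0 - x)                                   # initial profile with u(0) = u(L) = 0

# Reduced basis: sine modes phi_k(x) = sqrt(2/L) sin(k pi x / L), eigenfunctions of d^2/dx^2.
k = np.arange(1, n_modes + 1)
phi = np.sqrt(2.0 / L_dom) * np.sin(np.outer(x, k) * np.pi / L_dom)   # (n_grid, n_modes)
dx = x[1] - x[0]

# Galerkin projection: a_k(0) = <u0, phi_k>; each mode decays as exp(-nu (k pi / L)^2 t).
a0 = (phi.T @ u0) * dx
t_final = 0.5
a_t = a0 * np.exp(-nu * (k * np.pi / L_dom) ** 2 * t_final)
u_rom = phi @ a_t                                    # reduced-order solution at t_final

print("ROM solution at mid-domain, t = 0.5:", u_rom[n_grid // 2])
```
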
Wake control in wind farms has evolved significantly over the last twenty years, motivated mainly by its potential to increase annual energy production (AEP) through the reduction of wake losses. Engineering models that characterize wakes within a farm have enhanced fidelity and computational efficiency. Computational environments have been developed to adjust turbine control settings based on these models and reduce the impact of wakes. Several experimental campaigns have been carried out to validate the predictions. Yet, results are typically shown...