Differentiable MPC for End-to-end Planning and Control
SUBJECTS
Machine Learning (cs.LG)
Artificial Intelligence (cs.AI)
Machine Learning (stat.ML)
Optimization and Control (math.OC)
DOI:
10.48550/arxiv.1810.13400
Publication Date:
2018-10
AUTHORS (5)
ABSTRACT
We present foundations for using Model Predictive Control (MPC) as a differentiable policy class for reinforcement learning in continuous state and action spaces. This provides one way of leveraging and combining the advantages of model-free and model-based approaches. Specifically, we differentiate through MPC by using the KKT conditions of the convex approximation at a fixed point of the controller. Using this strategy, we are able to learn the cost and dynamics of a controller via end-to-end learning. Our experiments focus on imitation learning in the pendulum and cartpole domains, where we learn the cost and dynamics terms of an MPC policy class. We show that our MPC policies are significantly more data-efficient than a generic neural network and that our method is superior to traditional system identification in a setting where the expert is unrealizable.
Venue: NeurIPS 2018
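The core idea in the abstract, differentiating through a controller via the optimality (KKT) conditions of its convex subproblem rather than by unrolling the solver, can be illustrated on a toy case. The sketch below (not the authors' code; all names are illustrative) uses a one-step quadratic "control" problem, whose stationarity condition gives a closed-form implicit gradient, and checks it against finite differences:

```python
# Hedged sketch of implicit differentiation through an optimizer.
# "Controller": u* = argmin_u 0.5 u'Qu + p'u, with stationarity Q u* + p = 0,
# so u* = -Q^{-1} p and, by the implicit function theorem, du*/dp = -Q^{-1}.
# Gradients of an imitation loss L(u*) thus flow to cost parameters p
# without backpropagating through solver iterations.
import numpy as np

def solve(Q, p):
    """Forward pass: solve the stationarity system Q u + p = 0."""
    return -np.linalg.solve(Q, p)

def grad_loss_wrt_p(Q, dL_du):
    """Backward pass via the KKT system: dL/dp = -Q^{-T} dL/du."""
    return -np.linalg.solve(Q.T, dL_du)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Q = A @ A.T + 3 * np.eye(3)        # positive-definite cost Hessian
p = rng.standard_normal(3)
u_star = solve(Q, p)

# Imitation-style loss 0.5 * ||u* - u_exp||^2 against an "expert" action.
u_exp = rng.standard_normal(3)
dL_du = u_star - u_exp
g = grad_loss_wrt_p(Q, dL_du)

# Sanity check against central finite differences.
eps = 1e-6
g_fd = np.zeros_like(p)
for i in range(3):
    dp = np.zeros(3); dp[i] = eps
    Lp = 0.5 * np.sum((solve(Q, p + dp) - u_exp) ** 2)
    Lm = 0.5 * np.sum((solve(Q, p - dp) - u_exp) ** 2)
    g_fd[i] = (Lp - Lm) / (2 * eps)
assert np.allclose(g, g_fd, atol=1e-5)
```

The paper applies this same implicit-gradient principle to the full MPC problem, differentiating through the KKT conditions of the convex (LQR-like) approximation at the controller's fixed point rather than through a single quadratic as above.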