Planning with Abstract Learned Models While Learning Transferable Subtasks

DOI: 10.1609/aaai.v34i06.6555 · Published: 2020-06-29
ABSTRACT
We introduce an algorithm for model-based hierarchical reinforcement learning to acquire self-contained transition and reward models suitable for probabilistic planning at multiple levels of abstraction. We call this framework Planning with Abstract Learned Models (PALM). By representing subtasks symbolically using a new formal structure, the lifted abstract Markov decision process (L-AMDP), PALM learns models that are independent and modular. Through our experiments, we show how PALM integrates planning and execution, facilitating rapid and efficient learning of abstract, hierarchical models. We also demonstrate the increased potential for learned models to be transferred to related tasks.
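The abstract describes PALM learning a self-contained, modular transition and reward model for each subtask, which is what makes those models reusable across related tasks. A minimal sketch of what such a tabular per-subtask model might look like is shown below; the class and method names (`SubtaskModel`, `update`, `transition_probs`, `expected_reward`) are our own illustrative assumptions, not the paper's actual API.

```python
from collections import defaultdict

# Illustrative sketch only: PALM (per the abstract) learns a self-contained
# transition and reward model per subtask. Names here are our assumptions.

class SubtaskModel:
    """Tabular maximum-likelihood model of one subtask's abstract dynamics."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': count}
        self.reward_sum = defaultdict(float)                 # (s, a) -> summed reward
        self.visits = defaultdict(int)                       # (s, a) -> visit count

    def update(self, s, a, r, s_next):
        """Record one abstract transition observed during execution."""
        self.counts[(s, a)][s_next] += 1
        self.reward_sum[(s, a)] += r
        self.visits[(s, a)] += 1

    def transition_probs(self, s, a):
        """Empirical next-state distribution for (s, a)."""
        n = self.visits[(s, a)]
        return {s2: c / n for s2, c in self.counts[(s, a)].items()} if n else {}

    def expected_reward(self, s, a):
        """Empirical mean reward for (s, a)."""
        n = self.visits[(s, a)]
        return self.reward_sum[(s, a)] / n if n else 0.0


# Because each subtask owns its own model, a model learned once can be
# reused wherever that subtask reappears in a related task.
model = SubtaskModel()
model.update("at_door", "open", -1.0, "door_open")
model.update("at_door", "open", -1.0, "door_open")
model.update("at_door", "open", -1.0, "at_door")  # occasional failure
print(model.transition_probs("at_door", "open"))
print(model.expected_reward("at_door", "open"))
```

A planner can then run value iteration over these empirical distributions at the abstract level, while lower-level policies handle execution; the separation is what lets planning and learning interleave.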