Model-based Bayesian Reinforcement Learning in Factored Markov Decision Process
Subject classification: 0101 Mathematics (01 Natural Sciences)
DOI: 10.4304/jcp.9.4.845-850
Publication Date: 2014-04-20
AUTHORS (3)
ABSTRACT
Learning the enormous number of parameters is a challenging problem in model-based Bayesian reinforcement learning. To address this problem, we propose a model-based factored Bayesian reinforcement learning (F-BRL) approach. F-BRL exploits a factored state representation to reduce the number of parameters. Representing the conditional independence relationships among state features with dynamic Bayesian networks, F-BRL uses Bayesian inference to learn the unknown structure and parameters of the networks simultaneously. A point-based online value iteration approach is then used for online planning and learning. Experimental and simulation results show that the proposed approach effectively reduces the number of learned parameters and enables online learning for dynamic systems with thousands of states.
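To illustrate the core idea behind the factored representation, the sketch below shows how per-feature Dirichlet posteriors can parameterize a DBN-style transition model. This is a simplified, hypothetical illustration, not the paper's actual algorithm: it assumes the DBN structure (each feature's parent set) is already known, whereas F-BRL learns structure and parameters jointly, and all names (`FactoredDirichletModel`, `parents`, etc.) are invented for this example.

```python
from collections import defaultdict


class FactoredDirichletModel:
    """Per-feature transition model for a factored MDP.

    Each next-state feature X_i' gets a Dirichlet posterior over its
    values, conditioned on the action and the values of its parent
    features in the DBN. Counting observed transitions updates the
    posterior in closed form, so the parameter count scales with the
    parent-set sizes rather than with the full joint state space.
    """

    def __init__(self, n_features, n_values, prior=1.0):
        self.n_features = n_features
        self.n_values = n_values
        # counts[(feature, action, parent_values)] -> Dirichlet counts
        self.counts = defaultdict(lambda: [prior] * n_values)

    def update(self, state, action, next_state, parents):
        # parents[i] lists the state-feature indices that X_i' depends on
        for i in range(self.n_features):
            pa = tuple(state[j] for j in parents[i])
            self.counts[(i, action, pa)][next_state[i]] += 1.0

    def predict(self, state, action, parents):
        # Posterior-mean transition probabilities: one categorical
        # distribution per next-state feature.
        dists = []
        for i in range(self.n_features):
            pa = tuple(state[j] for j in parents[i])
            c = self.counts[(i, action, pa)]
            total = sum(c)
            dists.append([x / total for x in c])
        return dists


# Usage: two binary features; feature 0 depends on itself,
# feature 1 depends on both features.
parents = [[0], [0, 1]]
model = FactoredDirichletModel(n_features=2, n_values=2)
model.update((0, 1), action=0, next_state=(1, 1), parents=parents)
dists = model.predict((0, 1), action=0, parents=parents)
```

With a uniform Dirichlet prior of 1.0, a single observed transition shifts the posterior mean for the observed value from 1/2 to 2/3, which shows how the model learns from sparse data without enumerating the joint state space.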
REFERENCES (22)
CITATIONS (2)