Symbolic State Partitioning for Reinforcement Learning
FOS: Computer and information sciences
Machine Learning (cs.LG)
Artificial Intelligence (cs.AI)
DOI:
10.48550/arxiv.2409.16791
Publication Date:
2024-09-25
AUTHORS (4)
ABSTRACT
Tabular reinforcement learning methods cannot operate directly on continuous state spaces. One solution to this problem is to partition the state space. A good partitioning enables generalization during learning and more efficient exploitation of prior experiences. Consequently, the learning process becomes faster and produces more reliable policies. However, partitioning introduces approximation, which is particularly harmful in the presence of nonlinear relations between state components. An ideal partition should be as coarse as possible, while capturing the key structure of the state space for the given problem. This work extracts partitions from the environment dynamics by symbolic execution. We show that symbolic partitioning improves state space coverage with respect to environmental behavior and allows the agent to perform better with sparse rewards. We evaluate the precision, scalability, and agent performance of the learnt partitions.
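The idea in the abstract can be illustrated with a minimal sketch: a set of predicates (standing in for path conditions that symbolic execution might extract from the environment dynamics) maps each continuous state to a discrete partition cell, and a tabular Q-learning update is then indexed by cells instead of raw states. The predicates, state layout, and hyperparameters below are illustrative assumptions, not the paper's actual partitions.

```python
# Hypothetical sketch: discretising a continuous state with a
# predicate-based partition for tabular Q-learning. The predicates are
# placeholders for conditions a symbolic executor might extract.
from collections import defaultdict

# Each predicate splits the state space; together they induce a
# partition whose cells are identified by tuples of truth values.
predicates = [
    lambda s: s[0] > 0.0,          # e.g. position right of centre
    lambda s: s[1] > 0.0,          # e.g. velocity pointing right
    lambda s: s[0] * s[1] > 0.0,   # a nonlinear relation between components
]

def cell(state):
    """Map a continuous state to its partition cell (a tuple of bools)."""
    return tuple(p(state) for p in predicates)

# Q-values are stored per (cell, action) pair, as in tabular methods.
Q = defaultdict(float)

def q_update(state, action, reward, next_state,
             alpha=0.1, gamma=0.99, actions=(0, 1)):
    """One tabular Q-learning step over the partitioned state space."""
    best_next = max(Q[(cell(next_state), a)] for a in actions)
    key = (cell(state), action)
    Q[key] += alpha * (reward + gamma * best_next - Q[key])

# States falling in the same cell share a Q-entry, which is what
# enables generalization across similar experiences.
print(cell((0.3, -1.2)))  # → (True, False, False)
print(cell((0.5, -0.7)))  # same cell, so experience is shared
```

A coarser predicate set yields fewer cells and faster learning but more approximation error, which is the trade-off the abstract describes.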