Hierarchical Reinforcement Learning in Complex 3D Environments
DOI:
10.48550/arxiv.2302.14451
Publication Date:
2023-01-01
AUTHORS (6)
ABSTRACT
Hierarchical Reinforcement Learning (HRL) agents have the potential to demonstrate appealing capabilities such as planning and exploration with abstraction, transfer, and skill reuse. Recent successes with HRL across different domains provide evidence that practical, effective HRL agents are possible, even if existing agents do not yet fully realize the potential of HRL. Despite these successes, visually complex partially observable 3D environments have remained a challenge for HRL agents. We address this issue with Hierarchical Hybrid Offline-Online (H2O2), a hierarchical deep reinforcement learning agent that discovers and learns to use options from scratch using its own experience. We show that H2O2 is competitive with a strong non-hierarchical Muesli baseline in the DeepMind Hard Eight tasks, and we shed new light on the problem of learning hierarchical agents in complex environments. Our empirical study reveals previously unnoticed practical challenges and brings a new perspective to the current understanding of hierarchical agents in complex domains.
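The abstract's central concept is the "option": a temporally extended action consisting of an intra-option policy and a termination condition, which a high-level policy selects and then runs to completion. A minimal sketch of that control loop is below. This is not the H2O2 implementation; the toy corridor environment, the hand-coded options, and all names are hypothetical, for illustration only.

```python
class Option:
    """A temporally extended action: an intra-option policy plus a
    termination condition (illustrative sketch, not H2O2)."""
    def __init__(self, name, policy, termination):
        self.name = name
        self.policy = policy            # state -> primitive action
        self.termination = termination  # state -> bool (stop the option?)

def run_option(env_step, state, option, max_steps=100):
    """Execute an option until it terminates (or a step budget runs out),
    returning the final state and the number of primitive steps taken."""
    steps = 0
    while steps < max_steps and not option.termination(state):
        action = option.policy(state)
        state = env_step(state, action)
        steps += 1
    return state, steps

# Toy 1-D corridor: the state is an integer position, actions are -1/+1.
def env_step(state, action):
    return state + action

# Two hand-coded options; H2O2 instead discovers options from experience.
go_right = Option("go_to_5", policy=lambda s: 1, termination=lambda s: s >= 5)
go_left = Option("go_to_0", policy=lambda s: -1, termination=lambda s: s <= 0)

# A trivial high-level policy choosing among options, not primitive actions.
def high_level_policy(state):
    return go_right if state < 5 else go_left

state = 0
option = high_level_policy(state)
state, steps = run_option(env_step, state, option)
print(state, steps)  # the option runs until its termination condition holds
```

The point of the hierarchy is visible even in this toy: the high-level policy makes one decision while the option absorbs several primitive steps, which is what enables planning and exploration at a coarser time scale.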