Deep Reinforcement Learning with Decorrelation
DOI: 10.48550/arxiv.1903.07765
Publication Date: 2019-01-01
AUTHORS (3)
ABSTRACT
Learning an effective representation for high-dimensional data is a challenging problem in reinforcement learning (RL). Deep reinforcement learning (DRL) methods such as Deep Q Networks (DQN) achieve remarkable success in computer games by learning deeply encoded representations with convolution networks. In this paper, we propose a simple yet very effective method for representation learning with DRL algorithms. Our key insight is that the features learned by DRL algorithms are highly correlated, which interferes with learning. By adding a regularized loss that penalizes correlation among the latent features (with only a slight increase in computation), we decorrelate the features represented by deep neural networks incrementally. On 49 Atari games, with the same regularization factor, our decorrelation algorithms reach $70\%$ in terms of human-normalized scores, which is $40\%$ better than DQN. In particular, ours performs better than DQN on 39 games, with 4 close ties and only slight losses on $6$ games. Empirical results also show that the method applies to Quantile Regression DQN (QR-DQN) and significantly boosts its performance. Further experiments show that on the losing games our decorrelation method can win over QR-DQN with a fine-tuned regularization factor.
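The abstract describes adding a regularized loss that penalizes correlation among the latent features of a DQN-style network. Below is a minimal sketch of how such a penalty could be attached to a TD loss in PyTorch; the toy network, the covariance-based form of the penalty, and the hyperparameter names (`lam`, `hidden`) are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code): a decorrelation penalty on the
# latent features of a DQN, assuming a PyTorch setup where the penultimate
# layer activations are exposed. The penalty is the mean squared off-diagonal
# entry of the batch feature covariance, added to the usual TD loss with a
# regularization factor `lam` (an assumed hyperparameter name).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallDQN(nn.Module):
    """Toy Q-network that returns both Q-values and latent features."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs):
        phi = self.encoder(obs)          # latent features to be decorrelated
        return self.head(phi), phi


def decorrelation_penalty(phi: torch.Tensor) -> torch.Tensor:
    """Mean squared off-diagonal covariance of a batch of latent features."""
    phi = phi - phi.mean(dim=0, keepdim=True)
    cov = phi.t() @ phi / max(phi.shape[0] - 1, 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).mean()


def loss_fn(net, target_net, batch, gamma: float = 0.99, lam: float = 0.01):
    """Standard DQN TD loss plus the decorrelation regularizer."""
    obs, actions, rewards, next_obs, dones = batch
    q, phi = net(obs)
    q_taken = q.gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next, _ = target_net(next_obs)
        target = rewards + gamma * (1.0 - dones) * q_next.max(dim=1).values
    td_loss = F.smooth_l1_loss(q_taken, target)
    return td_loss + lam * decorrelation_penalty(phi)
```

The same penalty term could in principle be added to a QR-DQN loss, since it only depends on the shared latent features, which is consistent with the abstract's claim that the regularization also boosts QR-DQN.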