DM-DQN: Dueling Munchausen deep Q network for robot path planning

DOI: 10.1007/s40747-022-00948-7
Published: 2022-12-30
ABSTRACT
In order to achieve collision-free path planning in complex environments, the Munchausen deep Q-learning network (M-DQN) is applied so that a mobile robot learns the best decision. Building on Soft-DQN, M-DQN adds a scaled log-policy term to the immediate reward, which allows the agent to do more exploration. However, the algorithm suffers from slow convergence. A new and improved algorithm, DM-DQN, is proposed in this paper to address this problem. First, its network structure builds on that of M-DQN by decomposing the network into a value function and an advantage function, thus decoupling action selection from action evaluation, speeding up convergence, giving the network better generalization performance, and enabling it to make decisions faster. Second, because the robot's trajectory tends to run too close to the edges of obstacles, a reward function is designed using the artificial potential field method to drive the robot away from the vicinity of obstacles. Simulation results show that DM-DQN learns more efficiently and converges faster than DQN, Dueling DQN, and M-DQN in both static and dynamic environments, and that it is able to plan paths around obstacles.
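The Munchausen modification mentioned above can be written out explicitly. As a sketch following the general Munchausen RL formulation (the scaling factor \alpha, temperature \tau, and clipping bound l_0 are standard symbols from that literature, not values taken from this paper), the regression target augments the immediate reward with a scaled, clipped log-policy term:

    \hat{q}(s_t, a_t) = r_t + \alpha\,\bigl[\tau \ln \pi_{\bar\theta}(a_t \mid s_t)\bigr]_{l_0}^{0}
        + \gamma \sum_{a'} \pi_{\bar\theta}(a' \mid s_{t+1})\,\bigl( q_{\bar\theta}(s_{t+1}, a') - \tau \ln \pi_{\bar\theta}(a' \mid s_{t+1}) \bigr),
    \qquad \pi_{\bar\theta}(\cdot \mid s) = \operatorname{softmax}\bigl( q_{\bar\theta}(s, \cdot)/\tau \bigr).

The clipping [\,\cdot\,]_{l_0}^{0} keeps the log-policy bonus bounded, since \ln \pi tends to negative infinity for near-deterministic policies.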
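The value/advantage decomposition is the standard dueling architecture. A minimal PyTorch sketch of such a dueling Q-network head (the layer sizes and the use of fully connected layers are illustrative assumptions; the abstract does not give the paper's exact architecture):

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Shared feature trunk (illustrative size).
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.feature(x)
        v = self.value(h)      # shape (batch, 1)
        a = self.advantage(h)  # shape (batch, n_actions)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

Decoupling the two streams lets the network learn which states are valuable independently of which action is best in them, which is the source of the faster convergence and better generalization claimed above.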
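The artificial-potential-field reward shaping can likewise be sketched. A minimal example of a repulsive penalty in the classic APF form (the gain eta and influence radius d0 are hypothetical parameters; the paper's actual reward design is not detailed in the abstract):

def repulsive_penalty(dist: float, d0: float = 1.0, eta: float = 0.5) -> float:
    """Negative reward shaped like the APF repulsive potential.

    Zero outside the influence radius d0; grows sharply as dist -> 0,
    penalizing trajectories that hug obstacle edges.
    """
    if dist >= d0:
        return 0.0
    d = max(dist, 1e-6)  # avoid division by zero at contact
    return -0.5 * eta * (1.0 / d - 1.0 / d0) ** 2

At each step this penalty would simply be added to the ordinary goal-directed reward, so the agent is pushed away from the vicinity of obstacles without any change to the underlying Q-learning update.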