Mobile Robot Path Planning Based on Improved Reinforcement Learning Optimization

DOI: 10.1145/3366715.3366717 Publication Date: 2019-12-26T21:08:02Z
ABSTRACT
In traditional mobile robot path planning, the parameters of the adaptive (reward) function are usually set as constants. Q-learning, a form of reinforcement learning, has recently gained popularity for autonomous mobile robot path planning. To solve the path planning problem effectively in obstacle-avoidance environments, a path planning model and search algorithm based on improved reinforcement learning are proposed. The incentive model of the reinforcement learning mechanism is introduced together with a search selection strategy, and the parameter settings of the dynamic reward function are modified. A swarm-intelligence iterative search over global and local position selection combines particle behavior with the Q-learning algorithm, dynamically adjusting the empirical parameters of the reward function through repeated Q-learning training. To set the constant parameters for the simulation experiment, whenever the distance between the robot and an obstacle falls below a given threshold, a random number in [0, 1] is used to perturb the moving direction, preventing the robot's path matching from reaching a deadlock. The case study shows that the proposed algorithm is more efficient and effective, improving the search intensity and accuracy of mobile robot path planning. The simulation experiments further show that the proposed model and algorithm overcome a limitation of traditional path planning, namely that parameter selection cannot adapt to the actual scene in real time.
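As a rough illustration of the mechanism described in the abstract (not the authors' implementation), the sketch below shows tabular Q-learning on a grid world in which the obstacle-penalty term of the reward is scaled dynamically by the distance to the nearest obstacle, and the moving direction is re-drawn at random (via a draw in [0, 1]) whenever the robot comes within a threshold distance of an obstacle, to avoid deadlock. The grid layout, parameter values, and all function names are assumptions chosen for illustration.

```python
# Minimal sketch (assumed parameters and layout, not values from the paper):
# Q-learning path planning on a grid with a distance-dependent (dynamic)
# reward and random direction perturbation near obstacles.
import random

ROWS, COLS = 10, 10
START, GOAL = (0, 0), (9, 9)
OBSTACLES = {(3, 3), (3, 4), (4, 4), (6, 2), (6, 3), (7, 7)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # assumed learning parameters
SAFE_DIST = 1.5                                # assumed obstacle-proximity threshold

def obstacle_distance(state):
    """Euclidean distance from a cell to the nearest obstacle."""
    return min(((state[0] - o[0])**2 + (state[1] - o[1])**2) ** 0.5 for o in OBSTACLES)

def dynamic_reward(state):
    """Reward whose obstacle penalty is scaled by proximity (dynamic parameter)."""
    if state == GOAL:
        return 100.0
    d = obstacle_distance(state)
    penalty = 5.0 / (d + 1e-6) if d < SAFE_DIST else 0.0
    return -1.0 - penalty                      # step cost plus proximity penalty

def step(state, action):
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) in OBSTACLES:
        return state                           # blocked moves leave the robot in place
    return (r, c)

Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(len(ACTIONS))}

def choose_action(state):
    # Random perturbation of the moving direction near obstacles (0-1 random
    # draw) to avoid deadlock, otherwise epsilon-greedy selection from Q.
    if obstacle_distance(state) < SAFE_DIST and random.random() < 0.5:
        return random.randrange(len(ACTIONS))
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

for episode in range(2000):
    state = START
    for _ in range(200):                       # cap on steps per episode
        a = choose_action(state)
        nxt = step(state, ACTIONS[a])
        reward = dynamic_reward(nxt)
        best_next = max(Q[(nxt, b)] for b in range(len(ACTIONS)))
        Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
        state = nxt
        if state == GOAL:
            break

# Greedy rollout of the learned policy (for inspection only).
state, path = START, [START]
while state != GOAL and len(path) < 100:
    a = max(range(len(ACTIONS)), key=lambda b: Q[(state, b)])
    state = step(state, ACTIONS[a])
    path.append(state)
print(path)
```

In the paper's approach the reward parameters are additionally tuned by a particle-swarm-style iteration over global and local position selection; here the scaling is simply a fixed function of obstacle distance to keep the sketch self-contained.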