Human and environmental feature-driven neural network for path-constrained robot navigation using deep reinforcement learning
Keywords:
Motion and path planning; Reinforcement learning; Path constraint; Autonomous robot navigation
Subject Classification:
TA1-2040 (Engineering (General). Civil engineering (General))
DOI:
10.1016/j.jestch.2025.101993
Publication Date:
2025-02-25
AUTHORS (9)
ABSTRACT
This paper introduces a neural network model designed for autonomous navigation in complex environments. It uses deep reinforcement learning (DRL) to capture critical environmental features in the neural network, encompassing data about the robot, humans, static obstacles, and path constraints. This representation, combined with weighted features from humans and environmental limitations, is processed through three multi-layer perceptrons (MLPs) to compute the value function and optimal policy, thereby enhancing navigation. A novel reward function is proposed to accommodate path constraints and steer the robot's navigation policy during neural network training. Common metrics such as success rate, collision avoidance, and time to reach the goal, together with new comprehensive log information, provide an overview of the robot's performance. The model's efficacy is demonstrated in simulated scenarios involving curved and cross pathways, with agents placed at random positions and moving at velocities that occasionally exceed the robot's maximum speed, as well as in real experiments in confined spaces. A GitHub repository accompanies the paper with videos comparing performance against state-of-the-art models in path-constrained scenarios, along with the reward-function strategies. Link: https://github.com/nabihandres/Wallproximity_DRL.
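As a rough illustration of the pipeline the abstract describes (weighted human and constraint features fed through three MLPs to obtain the value function, plus a reward that accounts for path constraints), the sketch below shows one plausible attention-style realization. All layer sizes, feature layouts, the weighting scheme, and the reward coefficients are assumptions made for this sketch only; they are not taken from the paper or its repository.

```python
# Minimal sketch of an attention-style, value-based crowd-navigation network
# and a path-constrained reward. Every dimension and coefficient below is an
# illustrative assumption, not the paper's actual configuration.
import torch
import torch.nn as nn


def mlp(sizes):
    """Fully connected network with ReLU activations between hidden layers."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)


class FeatureWeightedValueNet(nn.Module):
    """Three MLPs: (1) embeds joint robot/feature states, (2) assigns a scalar
    weight to each human or constraint feature, (3) maps the robot state plus
    the weighted feature summary to a state value V(s). A value-based policy
    can then pick the action maximizing reward + gamma * V(next state)."""

    def __init__(self, robot_dim=9, feat_dim=7, embed_dim=64):
        super().__init__()
        self.embed_mlp = mlp([robot_dim + feat_dim, 128, embed_dim])   # MLP 1
        self.weight_mlp = mlp([embed_dim, 64, 1])                      # MLP 2
        self.value_mlp = mlp([robot_dim + embed_dim, 128, 64, 1])      # MLP 3

    def forward(self, robot_state, features):
        # robot_state: (B, robot_dim); features: (B, N, feat_dim) covering
        # humans, static obstacles, and path-constraint descriptors.
        B, N, _ = features.shape
        joint = torch.cat(
            [robot_state.unsqueeze(1).expand(B, N, -1), features], dim=-1)
        e = self.embed_mlp(joint)                        # (B, N, embed_dim)
        w = torch.softmax(self.weight_mlp(e), dim=1)     # (B, N, 1) weights
        summary = (w * e).sum(dim=1)                     # weighted feature summary
        return self.value_mlp(torch.cat([robot_state, summary], dim=-1))


def path_constrained_reward(min_human_dist, path_deviation,
                            reached_goal, collided,
                            w_goal=1.0, w_discomfort=0.25, w_path=0.1,
                            comfort_radius=0.2, max_deviation=0.5):
    """Illustrative reward: reach the goal, avoid collisions and discomfort,
    and penalize leaving the allowed corridor around the reference path."""
    if reached_goal:
        return w_goal
    if collided:
        return -0.25
    r = 0.0
    if min_human_dist < comfort_radius:                 # too close to a human
        r -= w_discomfort * (comfort_radius - min_human_dist)
    if path_deviation > max_deviation:                  # outside the corridor
        r -= w_path * (path_deviation - max_deviation)
    return r


# Example usage with batch size 2 and five surrounding features:
# net = FeatureWeightedValueNet()
# v = net(torch.zeros(2, 9), torch.zeros(2, 5, 7))      # -> shape (2, 1)
```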