- Aerospace and Aviation Technology
- Air Traffic Management and Optimization
- Autonomous Vehicle Technology and Safety
- Fault Detection and Control Systems
- Robotic Path Planning Algorithms
- Reinforcement Learning in Robotics
- Target Tracking and Data Fusion in Sensor Networks
- Smart Grid Security and Resilience
- Human-Automation Interaction and Safety
- Adaptive Control of Nonlinear Systems
Harbin Institute of Technology
2021-2025
A fixed-wing aircraft can find itself in the final phase of a potential collision with a non-cooperative dynamic obstacle (e.g., a drone) because of its limited sensing range. In such collisions, the performance of existing avoidance approaches that do not take into account the bounded and non-isotropic maneuver capability imposed by aerodynamic characteristics is limited. To enhance avoidance performance, this study develops a hierarchical Reinforcement Learning (RL)-based strategy. The RL-based strategy learns a high-level navigator that provides a velocity vector to...
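A minimal sketch of the hierarchical split described above, assuming a high-level policy that proposes a velocity-vector command and an envelope that keeps the command within a bounded, non-isotropic maneuver capability. The speed limits, climb-angle limit, and `avoidance_step`/`clip_to_envelope` names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

V_MIN, V_MAX = 60.0, 120.0          # assumed airspeed limits [m/s]
GAMMA_MAX = np.radians(15.0)        # assumed max flight-path-angle magnitude [rad]

def clip_to_envelope(v_cmd):
    """Project a commanded velocity vector onto a non-isotropic envelope:
    airspeed within [V_MIN, V_MAX], climb angle within +/- GAMMA_MAX (z-up assumed)."""
    speed = np.linalg.norm(v_cmd)
    if speed < 1e-6:
        return np.array([V_MIN, 0.0, 0.0])
    gamma = np.arcsin(np.clip(v_cmd[2] / speed, -1.0, 1.0))
    gamma = np.clip(gamma, -GAMMA_MAX, GAMMA_MAX)
    heading = np.arctan2(v_cmd[1], v_cmd[0])
    speed = np.clip(speed, V_MIN, V_MAX)
    return speed * np.array([np.cos(gamma) * np.cos(heading),
                             np.cos(gamma) * np.sin(heading),
                             np.sin(gamma)])

def avoidance_step(policy, obs):
    """One decision step: the RL navigator outputs a raw velocity command;
    the envelope keeps it consistent with the aircraft's maneuver capability."""
    raw_cmd = policy(obs)                                     # e.g. a trained actor network
    return clip_to_envelope(np.asarray(raw_cmd, dtype=float))

# toy usage with a placeholder "policy" that asks for an infeasibly steep climb
if __name__ == "__main__":
    dummy_policy = lambda obs: np.array([80.0, 40.0, 60.0])
    print(avoidance_step(dummy_policy, obs=None))             # climb angle gets clipped
```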
An increasing number of Autonomous Mobile Robots (AMRs) have been used in warehouses and factories in recent years. The risk of some of the AMRs getting out of control is surging. Although Reinforcement Learning (RL)-based approaches have achieved dramatic success in motion planning for a large number of AMRs, available RL-based approaches cannot provide a safety guarantee for the remaining functional AMRs if some get out of control. To this end, this paper develops a scalable Multi-Agent RL (MARL) algorithm with Control Barrier Function (CBF)-based shields. The MARL...
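A minimal sketch of a CBF-based shield, assuming each functional AMR is modeled as a single integrator (x_dot = u) and must keep clear of an uncontrolled robot at `x_obs`. With barrier h(x) = ||x - x_obs||^2 - d_safe^2, enforcing dh/dt + alpha*h >= 0 is a single linear constraint on u, so the shield can apply the minimum-norm correction in closed form. Constants and names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

ALPHA = 1.0       # class-K gain (assumed)
D_SAFE = 1.0      # required clearance from the out-of-control AMR [m] (assumed)

def cbf_shield(x, x_obs, u_rl):
    """Filter the RL action u_rl so the shielded AMR keeps h(x) >= 0."""
    diff = x - x_obs
    h = diff @ diff - D_SAFE**2
    a = 2.0 * diff                 # dh/dt = a . u for a single integrator
    b = -ALPHA * h                 # safety constraint: a . u >= b
    if a @ u_rl >= b:
        return u_rl                # the RL action is already safe
    # closed-form solution of  min ||u - u_rl||  s.t.  a . u >= b
    return u_rl + ((b - a @ u_rl) / (a @ a)) * a

# toy usage: the RL action drives straight at the faulty robot; the shield deflects it
if __name__ == "__main__":
    x = np.array([0.0, 0.0])
    x_obs = np.array([1.5, 0.0])
    u_rl = np.array([1.0, 0.0])
    print(cbf_shield(x, x_obs, u_rl))
```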
Aircraft upset situations pose the highest risk to civil aviation. Thus, a reliable recovery policy is necessary for aircraft. In this paper, a two-stage reinforcement learning (RL)-based strategy that takes recovery time and loss of altitude into account is proposed to help an aircraft recover from an arbitrary upset situation to level flight. Based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, a TD3-based recovery algorithm is developed. Experiments are conducted based on X-Plane 11 to evaluate the effectiveness...
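A minimal sketch of a per-step reward that trades off recovery time against altitude loss, in the spirit of the strategy above. All weights, tolerances, and state fields are illustrative assumptions, not values from the paper.

```python
W_TIME, W_ALT = 0.1, 0.01      # assumed penalty weights on elapsed time and altitude loss
LEVEL_BONUS = 100.0            # assumed terminal bonus for reaching level flight

def is_level(state, roll_tol=5.0, pitch_tol=5.0, rate_tol=2.0):
    """Level-flight check on attitude in degrees and deg/s (assumed tolerances)."""
    return (abs(state["roll"]) < roll_tol
            and abs(state["pitch"]) < pitch_tol
            and abs(state["roll_rate"]) < rate_tol)

def recovery_reward(state, prev_state, dt):
    """Penalize elapsed time and altitude lost each step; reward reaching level flight."""
    altitude_loss = max(0.0, prev_state["alt"] - state["alt"])
    reward = -W_TIME * dt - W_ALT * altitude_loss
    if is_level(state):
        reward += LEVEL_BONUS
    return reward

# toy usage: one step in which the aircraft loses 20 m of altitude while still inverted
if __name__ == "__main__":
    prev = {"alt": 3000.0, "roll": 170.0, "pitch": -30.0, "roll_rate": 40.0}
    curr = {"alt": 2980.0, "roll": 150.0, "pitch": -25.0, "roll_rate": 35.0}
    print(recovery_reward(curr, prev, dt=0.1))
```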
Researchers have made many attempts to apply reinforcement learning (RL) to learn to fly aircraft in recent years. However, existing RL strategies are usually not safe (e.g., they can lead to a crash) in the initial stage of training an RL-based policy. For increasingly complex piloting tasks whose representative models are hard to establish, it is necessary to learn the policy by interacting with the aircraft. To enhance the safety and feasibility of applying RL to aircraft, this study develops an offline–online strategy. The strategy learns an effective...
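A minimal sketch of the offline–online idea: pretrain a policy on logged flight data before any interaction, then continue training online against the simulator. Behavior cloning is used here as an assumed stand-in for the offline stage; the network sizes, data, and interfaces are illustrative, not the paper's method.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),   # actions normalized to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

def pretrain_offline(policy, obs_data, act_data, epochs=50, lr=1e-3):
    """Offline stage: supervised regression onto logged (observation, action) pairs."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(obs_data), act_data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy

# toy usage: random tensors standing in for recorded flights
if __name__ == "__main__":
    obs_dim, act_dim = 12, 4
    policy = Policy(obs_dim, act_dim)
    logged_obs = torch.randn(256, obs_dim)
    logged_act = torch.tanh(torch.randn(256, act_dim))
    pretrain_offline(policy, logged_obs, logged_act)
    # Online stage (not shown): warm-start an RL algorithm such as TD3 from this
    # pretrained actor and keep improving it by interacting with the flight simulator.
```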
Obstacle avoidance is a crucial issue for enhancing the safety of aircraft. Aircraft usually need to avoid a drop in altitude and keep a set course while avoiding an obstacle. In this paper, a hierarchical obstacle avoidance strategy is proposed to address obstacle avoidance, altitude holding, and course keeping simultaneously. The strategy integrates a high-level reinforcement learning-based navigator with a low-level attitude controller. Experiments are conducted in X-Plane, a flight simulator, to evaluate the strategy.
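A minimal sketch of the hierarchy's low-level side: simple PID loops that track the pitch and roll commands produced by the high-level navigator while it handles avoidance, altitude holding, and course keeping. The gains, signal names, and control-surface mapping are illustrative assumptions, not the controller from the paper.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

class AttitudeController:
    """Maps (pitch_cmd, roll_cmd) from the navigator to elevator/aileron deflections."""
    def __init__(self):
        self.pitch_pid = PID(kp=0.05, ki=0.01, kd=0.02)   # assumed gains
        self.roll_pid = PID(kp=0.04, ki=0.005, kd=0.01)

    def update(self, pitch_cmd, roll_cmd, pitch, roll, dt):
        elevator = self.pitch_pid.update(pitch_cmd - pitch, dt)
        aileron = self.roll_pid.update(roll_cmd - roll, dt)
        return elevator, aileron

# toy usage: track a 5-degree pitch-up command issued by the navigator
if __name__ == "__main__":
    ctrl = AttitudeController()
    print(ctrl.update(pitch_cmd=5.0, roll_cmd=0.0, pitch=0.0, roll=0.0, dt=0.05))
```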