- Autonomous Vehicle Technology and Safety
- Advanced Neural Network Applications
- Network Packet Processing and Optimization
- Video Surveillance and Tracking Methods
- Network Security and Intrusion Detection
- Remote Sensing and LiDAR Applications
- Robotics and Sensor-Based Localization
- Advanced Image and Video Retrieval Techniques
- Internet Traffic Analysis and Secure E-voting
- Industrial Vision Systems and Defect Detection
- Robotic Path Planning Algorithms
- Algorithms and Data Compression
- Advanced Optical Sensing Technologies
- Traffic Control and Management
- Traffic and Road Safety
- Visual Attention and Saliency Detection
- 3D Surveying and Cultural Heritage
- Human-Automation Interaction and Safety
- Vehicle Dynamics and Control Systems
- Traffic Prediction and Management Techniques
- Advanced Vision and Imaging
- 3D Shape Modeling and Analysis
- Infrastructure Maintenance and Monitoring
- Industrial Technology and Control Systems
- Wildlife-Road Interactions and Conservation
Tsinghua University
2016-2025
State Key Laboratory of Vehicle NVH and Safety Technology
2024
China University of Geosciences
2023
Chengdu University
2022
Sun Yat-sen University
2022
University of Oxford
2013
Xinyang College of Agriculture and Forestry
2011
Jiujiang University
2009
Dalian University of Technology
2009
Zhejiang Wanli University
2007
Autonomous vehicles require constant environmental perception to obtain the distribution of obstacles and achieve safe driving. Specifically, 3D object detection is a vital functional module as it can simultaneously predict surrounding objects' categories, locations, and sizes. Generally, autonomous vehicles are equipped with multiple sensors, including cameras and LiDARs. The fact that single-modal methods suffer from unsatisfactory performance motivates utilizing multi-modal inputs to compensate for single...
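A minimal sketch of the per-point camera-LiDAR feature fusion that such multi-modal detectors build on; the module, dimensions, and the assumption that image features are pre-sampled at each point's camera projection are illustrative choices, not the paper's architecture:

```python
import torch
import torch.nn as nn

class PointImageFusion(nn.Module):
    """Toy per-point fusion of LiDAR and camera features (illustrative only)."""
    def __init__(self, lidar_dim=64, image_dim=256, out_dim=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(lidar_dim + image_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, lidar_feats, image_feats):
        # lidar_feats: (N, lidar_dim) per-point LiDAR features
        # image_feats: (N, image_dim) image features sampled at each
        #              point's camera projection (assumed precomputed)
        return self.fuse(torch.cat([lidar_feats, image_feats], dim=-1))

# usage
fusion = PointImageFusion()
fused = fusion(torch.randn(1000, 64), torch.randn(1000, 256))  # (1000, 128)
```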
Defined traffic laws must be respected by all vehicles when driving on the road, including self-driving vehicles without human drivers. Nevertheless, the ambiguity of human-oriented laws, particularly their compliance thresholds, poses a significant challenge to implementing these regulations in vehicles, especially in detecting illegal behaviors. To address these challenges, here we present a trigger-based hierarchical online monitor for self-assessment of driving behavior, which aims to improve the rationality and real-time...
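A toy illustration of the trigger-then-check idea behind a trigger-based hierarchical monitor; the rule, state fields, and tolerance value below are hypothetical, not the thresholds proposed in the paper:

```python
from dataclasses import dataclass

@dataclass
class EgoState:
    speed_mps: float
    speed_limit_mps: float

def trigger_speeding(state: EgoState) -> bool:
    # Coarse trigger: only run the detailed check when the limit is exceeded at all
    return state.speed_mps > state.speed_limit_mps

def check_speeding(state: EgoState, tolerance_mps: float = 0.5) -> bool:
    # Fine-grained check with an assumed compliance tolerance
    return state.speed_mps > state.speed_limit_mps + tolerance_mps

MONITORS = [("speeding", trigger_speeding, check_speeding)]

def run_monitor(state: EgoState):
    """Return the names of all rules flagged as violated for this state."""
    return [name for name, trigger, check in MONITORS if trigger(state) and check(state)]

print(run_monitor(EgoState(speed_mps=15.2, speed_limit_mps=13.9)))  # ['speeding']
```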
3D multi-object tracking (MOT) ensures consistency during continuous dynamic detection, which is conducive to subsequent motion planning and navigation tasks in autonomous driving. However, camera-based methods suffer in the case of occlusions, and it can be challenging for LiDAR-based methods to track irregular objects accurately. Some fusion methods work well but do not consider the untrustworthiness of appearance features under occlusion. At the same time, the false detection problem also significantly affects tracking. As such, we...
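For context, a common baseline for the association step in 3D MOT is Hungarian matching on center distance; the sketch below shows only that baseline and is not the fusion tracker described in the abstract:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers, det_centers, max_dist=2.0):
    """Match existing tracks to new detections by 3D center distance."""
    if len(track_centers) == 0 or len(det_centers) == 0:
        return [], list(range(len(track_centers))), list(range(len(det_centers)))
    cost = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    matches = []
    unmatched_t, unmatched_d = set(range(len(track_centers))), set(range(len(det_centers)))
    for r, c in zip(rows, cols):
        if cost[r, c] <= max_dist:           # gate out implausible assignments
            matches.append((r, c))
            unmatched_t.discard(r)
            unmatched_d.discard(c)
    return matches, sorted(unmatched_t), sorted(unmatched_d)
```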
While most recent autonomous driving systems focus on developing perception methods for ego-vehicle sensors, people tend to overlook an alternative approach: leveraging intelligent roadside cameras to extend the perception ability beyond the visual range. We discover that state-of-the-art vision-centric bird's eye view detection methods have inferior performance on roadside cameras. This is because these methods mainly focus on recovering the depth with regard to the camera center, where the depth difference between the car and the ground quickly shrinks while the distance...
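The underlying geometry can be illustrated by intersecting a pixel ray with the ground plane, which stays well conditioned for roadside cameras with known calibration; this is a generic sketch, not the paper's detection head:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t, ground_z=0.0):
    """Intersect the camera ray through pixel (u, v) with the plane z = ground_z.
    K: 3x3 intrinsics; R, t: camera-to-world rotation and translation
    (assumed known from roadside calibration). Returns a 3D world point."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R @ ray_cam                              # ray direction in world frame
    cam_center = t                                       # camera center in world frame
    s = (ground_z - cam_center[2]) / ray_world[2]        # scale to reach the ground
    return cam_center + s * ray_world
```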
In this work, we introduce FARP-Net, an adaptive local-global feature aggregation and relation-aware proposal network for high-quality 3D object detection from pure point clouds. Our key insight is that learning from irregular yet sparse point clouds and generating superb proposals are both pivotal to detection. Technically, we propose a novel local-global feature aggregation layer (LGFAL) that fully exploits the complementary correlation between local and global features, and fuses their strengths adaptively via an attention-based fusion module...
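As a rough illustration of attention-based fusion of local and global point features, the gating module below is a hypothetical stand-in, not the LGFAL layer itself:

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Toy attention-weighted fusion of local and global point features."""
    def __init__(self, dim=128):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1), nn.Sigmoid()
        )

    def forward(self, local_feat, global_feat):
        # local_feat: (N, dim) per-point features; global_feat: (dim,) scene feature
        g = global_feat.expand_as(local_feat)
        w = self.gate(torch.cat([local_feat, g], dim=-1))   # (N, 1) attention weight
        return w * local_feat + (1 - w) * g                 # adaptive blend

fused = AttentionFusion()(torch.randn(2048, 128), torch.randn(128))  # (2048, 128)
```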
Autonomous driving is considered one of the revolutionary technologies shaping humanity’s future mobility and quality of life. However, safety remains a critical hurdle on the way to commercialization and widespread deployment of autonomous vehicles on public roads. Safety concerns require the system to handle uncertainties from multiple sources that are either preexisting, e.g., the stochastic behavior of traffic participants or scenario occlusion, or introduced as a result of processing, e.g., the application of neural networks. Thus, it...
Amphibious ground-aerial vehicles fuse flying and driving modes to enable more flexible air-land mobility and have received growing attention recently. By analyzing the existing amphibious vehicles, we highlight the autonomous fly-driving functionality needed for effective use of such vehicles in complex three-dimensional urban transportation systems. We review and summarize the key enabling technologies for intelligent flying-driving vehicle designs, identify major technological barriers, and propose potential solutions for future...
Multi-modal fusion overcomes the inherent limitations of single-sensor perception in 3D object detection for autonomous driving. The 4D radar and LiDAR can jointly boost the detection range and robustness. Nevertheless, the different data characteristics and noise distributions of the two sensors hinder performance improvement when directly integrating them. Therefore, we are the first to propose a novel method termed...
Semi-supervised learning (SSL) has promising potential for improving model performance using both labelled and unlabelled data. Since recovering 3D information from 2D images is an ill-posed problem, the current state-of-the-art methods of monocular 3D object detection (Mono3D) have relatively low precision and recall, making semi-supervised Mono3D tasks challenging and understudied. In this work, we propose a unified and effective framework called Mix-Teaching that can be applied to most detectors. Based...
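A generic building block of teacher-student semi-supervised detection is confidence-filtered pseudo-labelling; the sketch below assumes a `teacher` callable returning 'boxes' and 'scores' and omits the mixing and other components of Mix-Teaching:

```python
import torch

@torch.no_grad()
def generate_pseudo_labels(teacher, unlabeled_images, score_thresh=0.7):
    """Keep only high-confidence teacher detections as pseudo-labels
    for training the student on unlabeled images."""
    pseudo = []
    for img in unlabeled_images:
        det = teacher(img)                       # hypothetical detector interface
        keep = det["scores"] > score_thresh      # filter out low-confidence boxes
        pseudo.append({"boxes": det["boxes"][keep], "scores": det["scores"][keep]})
    return pseudo
```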
Multi-modal fusion is a basic task of autonomous driving system perception, which has attracted many scholars' attention in recent years. Current multi-modal methods mainly focus on camera data and LiDAR data, but pay little attention to the kinematic information provided by the vehicle's own sensors, such as acceleration, vehicle speed, and rotation angle. These data are not affected by complex external scenes, so they are more robust and reliable. In this article, we introduce the existing application fields and research progress...
Cooperative perception is challenging for safety-critical autonomous driving applications. Errors in the shared position and pose cause an inaccurate relative transform estimation and disrupt the robust mapping of the ego vehicle. We propose a distributed object-level cooperative perception system called OptiMatch, in which detected 3D bounding boxes and local state information are shared between connected vehicles. To correct the noisy relative transform, the measurements (bounding boxes) of both vehicles are utilized, and an optimal transport...
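To give a flavor of optimal-transport-based box association, here is a bare-bones entropy-regularized Sinkhorn matcher over box centers; the cost, marginals, and regularization are assumptions and the paper's formulation may differ:

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=100):
    """Entropy-regularized optimal transport between two uniform point sets."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.ones(n) / n, np.ones(m) / m          # uniform marginals
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)             # soft assignment (transport plan)

# usage: cost[i, j] = distance between ego box i and cooperating-vehicle box j
ego, other = np.random.rand(5, 2), np.random.rand(6, 2)
cost = np.linalg.norm(ego[:, None, :] - other[None, :, :], axis=-1)
matches = sinkhorn(cost).argmax(axis=1)            # hard assignment from the soft plan
```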
Motion prediction is essential for safe and efficient autonomous driving. However, the inexplicability and uncertainty of complex artificial intelligence models may lead to unpredictable failures of the motion prediction module, which can mislead the system into making unsafe decisions. Therefore, it is necessary to develop methods that guarantee reliable autonomous driving, where failure detection is a potential direction. Uncertainty estimates can be used to quantify the degree of confidence a model has in its predictions and are valuable for failure detection. We propose a framework...
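One common way to obtain such uncertainty estimates is the spread of a deep ensemble's predictions; the sketch below is a generic example of that idea, not the proposed framework, and assumes each model maps an input to a (T, 2) future trajectory:

```python
import torch

@torch.no_grad()
def ensemble_uncertainty(models, x):
    """Use the disagreement of an ensemble of trajectory predictors
    as a per-timestep uncertainty estimate."""
    preds = torch.stack([m(x) for m in models])   # (K, T, 2) predictions
    mean = preds.mean(dim=0)                      # ensemble trajectory
    var = preds.var(dim=0).sum(dim=-1)            # (T,) positional variance
    return mean, var                              # high variance can flag a failure
```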
In the field of autonomous driving, relying solely on the environmental information captured by a single vehicle's LiDAR often falls short of perception requirements in complex scenarios. Adopting a vehicle-infrastructure cooperation approach can effectively address these limitations. Therefore, the development of an algorithm that can register heterogeneous point clouds is crucial. This paper proposes a cooperative point cloud registration network (TEA-IHPCR) based on temporal augmentation and rotation invariance. By capturing...
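For reference, classical point cloud registration alternates nearest-neighbour correspondence with a closed-form SVD alignment (one ICP step is sketched below); TEA-IHPCR replaces such hand-crafted steps with a learned network, so this is background only:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One point-to-point ICP iteration: nearest-neighbour correspondences
    followed by a Kabsch (SVD) rigid alignment."""
    idx = cKDTree(target).query(source)[1]          # correspondences
    src_c, tgt_c = source.mean(0), target[idx].mean(0)
    H = (source - src_c).T @ (target[idx] - tgt_c)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # keep a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t, R, t                   # aligned points, rotation, translation
```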
While most recent autonomous driving systems focus on developing perception methods for ego-vehicle sensors, people tend to overlook an alternative approach: leveraging intelligent roadside cameras to extend the perception ability beyond the visual range. We discover that state-of-the-art vision-centric detection methods perform poorly on roadside cameras. This is because these methods mainly focus on recovering the depth with regard to the camera center, where the depth difference between the car and the ground quickly shrinks while the distance increases. In this paper, we...
4D radar has higher point cloud density and more precise vertical resolution than conventional 3D radar, making it promising for adverse scenarios in the environmental perception of autonomous driving. However, it is noisier than LiDAR and requires different filtering strategies that affect the noise level. Comparative analyses of point densities and noise levels are still lacking, mainly because available datasets use only one type of radar, making it difficult to compare radars in the same scenario. We introduce a novel large-scale multi-modal dataset...
3D object detection is becoming indispensable for environmental perception in autonomous driving. Light detection and ranging (LiDAR) point clouds often fail to distinguish objects with similar structures and are quite sparse for distant or small objects, thereby introducing false and missed detections. To address these issues, LiDAR is fused with cameras due to the rich textural information provided by images. However, current fusion methods suffer from inefficient data representation and inaccurate alignment of heterogeneous...
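The alignment step typically starts from projecting LiDAR points into the image with the camera extrinsics and intrinsics; a standard projection sketch (not the paper's fusion method) is shown below:

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project LiDAR points into the camera image to look up pixel features.
    points: (N, 3) in the LiDAR frame; T_cam_lidar: 4x4 extrinsics; K: 3x3 intrinsics.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]        # points in the camera frame
    valid = cam[:, 2] > 1e-3                      # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective division
    return uv, valid
```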
Cooperative perception is a promising technique for intelligent and connected vehicles through vehicle-to-everything (V2X) cooperation, provided that accurate pose information and relative transforms are available. Nevertheless, obtaining precise positioning often entails high costs associated with navigation systems. Hence, it is required to calibrate the relative pose for multi-agent cooperative perception. This letter proposes a simple but effective object association approach named context-based matching (...
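A pose-free association can be built on descriptors that are invariant to the unknown relative transform, e.g., each box's distances to its nearest neighbours; the sketch below is a hypothetical illustration in that spirit, not the method proposed in the letter:

```python
import numpy as np

def context_descriptor(centers, k=3):
    """Describe each detected box by the sorted distances to its k nearest
    neighbours; this is invariant to the unknown relative pose."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, 1:k + 1]                          # drop the zero self-distance

def match_by_context(centers_a, centers_b, k=3):
    """Greedily match boxes between two agents by descriptor similarity
    (assumes both agents detect more than k objects)."""
    da, db = context_descriptor(centers_a, k), context_descriptor(centers_b, k)
    cost = np.linalg.norm(da[:, None, :] - db[None, :, :], axis=-1)
    return [(i, int(cost[i].argmin())) for i in range(len(centers_a))]
```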