Jun Li

ORCID: 0000-0002-0437-5112
Research Areas
  • Autonomous Vehicle Technology and Safety
  • Advanced Neural Network Applications
  • Network Packet Processing and Optimization
  • Video Surveillance and Tracking Methods
  • Network Security and Intrusion Detection
  • Remote Sensing and LiDAR Applications
  • Robotics and Sensor-Based Localization
  • Advanced Image and Video Retrieval Techniques
  • Internet Traffic Analysis and Secure E-voting
  • Industrial Vision Systems and Defect Detection
  • Robotic Path Planning Algorithms
  • Algorithms and Data Compression
  • Advanced Optical Sensing Technologies
  • Traffic Control and Management
  • Traffic and Road Safety
  • Visual Attention and Saliency Detection
  • 3D Surveying and Cultural Heritage
  • Human-Automation Interaction and Safety
  • Vehicle Dynamics and Control Systems
  • Traffic Prediction and Management Techniques
  • Advanced Vision and Imaging
  • 3D Shape Modeling and Analysis
  • Infrastructure Maintenance and Monitoring
  • Industrial Technology and Control Systems
  • Wildlife-Road Interactions and Conservation

Tsinghua University
2016-2025

State Key Laboratory of Vehicle NVH and Safety Technology
2024

China University of Geosciences
2023

Chengdu University
2022

Sun Yat-sen University
2022

University of Oxford
2013

Xinyang College of Agriculture and Forestry
2011

Jiujiang University
2009

Dalian University of Technology
2009

Zhejiang Wanli University
2007

Autonomous vehicles require constant environmental perception to obtain the distribution of obstacles and achieve safe driving. Specifically, 3D object detection is a vital functional module, as it can simultaneously predict surrounding objects' categories, locations, and sizes. Generally, autonomous vehicles are equipped with multiple sensors, including cameras and LiDARs. The fact that single-modal methods suffer from unsatisfactory performance motivates utilizing multi-modal inputs to compensate for single...

10.1109/tiv.2023.3264658 article EN IEEE Transactions on Intelligent Vehicles 2023-04-05

Defined traffic laws must be respected by all vehicles when driving on the road, including self-driving vehicles without human drivers. Nevertheless, the ambiguity of human-oriented laws, particularly their compliance thresholds, poses a significant challenge to implementing regulations in vehicles, especially when detecting illegal behaviors. To address these challenges, here we present a trigger-based hierarchical online monitor for self-assessment of driving behavior, which aims to improve rationality and real-time...

10.1038/s41467-024-44694-5 article EN cc-by Nature Communications 2024-01-09
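The core idea above — turning an ambiguous human-oriented rule into a machine-checkable one via an explicit compliance threshold — can be illustrated with a minimal sketch. This is not the paper's trigger-based hierarchical monitor; the function name, units, and tolerance are hypothetical.

```python
# Illustrative sketch only: a single speed-limit rule made machine-checkable
# by adding an explicit tolerance, in the spirit of quantifying compliance
# thresholds for online self-assessment. All names/values are hypothetical.

def speed_limit_monitor(speeds_mps, limit_mps, tolerance=0.5):
    """Return the indices of time steps where the speed rule is violated.

    The small tolerance resolves the ambiguity of the human-oriented rule
    ("do not exceed the limit") into a concrete numeric check.
    """
    return [t for t, v in enumerate(speeds_mps) if v > limit_mps + tolerance]

# Only step 2 (14.6 m/s) exceeds the 14.0 m/s limit plus the 0.5 m/s tolerance.
violations = speed_limit_monitor([13.0, 14.1, 14.6, 13.9], limit_mps=14.0)
```

A real monitor would layer many such rule checks hierarchically and trigger them only in relevant contexts (e.g., lane changes, intersections), which is what makes online evaluation tractable.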

3D multi-object tracking (MOT) ensures consistency during continuous dynamic detection, which is conducive to subsequent motion planning and navigation tasks in autonomous driving. However, camera-based methods suffer in cases of occlusion, and it can be challenging for LiDAR-based methods to track irregular objects accurately. Some fusion methods work well but do not consider the untrustworthiness of appearance features under occlusion. At the same time, the false detection problem also significantly affects tracking. As such, we...

10.1109/tits.2023.3285651 article EN IEEE Transactions on Intelligent Transportation Systems 2023-06-27

While most recent autonomous driving systems focus on developing perception methods for ego-vehicle sensors, people tend to overlook an alternative approach: leveraging intelligent roadside cameras to extend the perception ability beyond the visual range. We discover that state-of-the-art vision-centric bird's eye view detection methods have inferior performance on roadside cameras. This is because these methods mainly focus on recovering depth with regard to the camera center, where the depth difference between the car and the ground quickly shrinks as the distance...

10.1109/cvpr52729.2023.02070 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01

In this work, we introduce FARP-Net, an adaptive local-global feature aggregation and relation-aware proposal network for high-quality 3D object detection from pure point clouds. Our key insight is that learning from irregular yet sparse point clouds and generating superb proposals are both pivotal for detection. Technically, we propose a novel local-global feature adaptive aggregation layer (LGFAL) that fully exploits the complementary correlation between local features and global features, and fuses their strengths adaptively via an attention-based fusion module....

10.1109/tmm.2023.3275366 article EN IEEE Transactions on Multimedia 2023-05-11

Autonomous driving is considered one of the revolutionary technologies shaping humanity's future mobility and quality of life. However, safety remains a critical hurdle in the way of commercialization and widespread deployment of autonomous vehicles on public roads. Safety concerns require the system to handle uncertainties from multiple sources that are either preexisting, e.g., the stochastic behavior of traffic participants or occlusion in a scenario, or introduced as a result of processing, e.g., the application of neural networks. Thus, it...

10.1109/tits.2023.3270887 article EN IEEE Transactions on Intelligent Transportation Systems 2023-05-11

Amphibious ground-aerial vehicles, which fuse flying and driving modes to enable more flexible air-land mobility, have received growing attention recently. By analyzing the existing amphibious vehicles, we highlight the autonomous fly-driving functionality needed for effective use of such vehicles in complex three-dimensional urban transportation systems. We review and summarize the key enabling technologies of intelligent flying-driving vehicle designs, identify major technological barriers, and propose potential solutions for future...

10.1109/tiv.2022.3193418 article EN IEEE Transactions on Intelligent Vehicles 2022-07-25

Multi-modal fusion overcomes the inherent limitations of single-sensor perception in 3D object detection for autonomous driving. Fusing 4D radar and LiDAR can boost the detection range and make perception more robust. Nevertheless, the different data characteristics and noise distributions of the two sensors hinder performance improvement when directly integrating them. Therefore, we are the first to propose a novel method termed...

10.1109/tvt.2022.3230265 article EN IEEE Transactions on Vehicular Technology 2022-12-19

Semi-supervised learning (SSL) has promising potential for improving model performance using both labelled and unlabelled data. Since recovering 3D information from 2D images is an ill-posed problem, the current state-of-the-art methods for monocular 3D object detection (Mono3D) have relatively low precision and recall, making semi-supervised Mono3D tasks challenging and understudied. In this work, we propose a unified and effective framework called Mix-Teaching that can be applied to most detectors. Based...

10.1109/tcsvt.2023.3270728 article EN IEEE Transactions on Circuits and Systems for Video Technology 2023-04-26
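The teacher-student mechanism that frameworks like Mix-Teaching build on can be sketched in a few lines: a teacher model labels unlabelled images, and only its high-confidence detections are kept as pseudo-labels for training the student. This is a generic sketch of that filtering step, not the paper's implementation; the mixing strategy and quality filters in Mix-Teaching itself are more involved, and all names here are hypothetical.

```python
# Generic confidence-based pseudo-label selection, the common first step of
# teacher-student semi-supervised detection. Not Mix-Teaching itself.

def select_pseudo_labels(detections, score_thresh=0.9):
    """Keep only high-confidence teacher detections as pseudo-labels.

    `detections` is a list of (box, score) pairs produced by a teacher
    model on an unlabelled image; box format is arbitrary here.
    """
    return [(box, score) for box, score in detections if score >= score_thresh]

# Of two teacher detections, only the 0.95-confidence box survives the filter.
teacher_out = [((0, 0, 2, 2), 0.95), ((1, 1, 3, 3), 0.40)]
pseudo = select_pseudo_labels(teacher_out)
```

The threshold trades pseudo-label precision against recall: too low and noisy labels poison the student, too high and little unlabelled data is used — which is exactly why Mono3D's low baseline precision makes semi-supervised training hard.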

Multi-modal fusion is a basic task of autonomous driving system perception, which has attracted many scholars' attention in recent years. Current multi-modal methods mainly focus on camera data and LiDAR data, but pay little attention to the kinematic information provided by the vehicle's own sensors, such as acceleration, vehicle speed, and angle of rotation. These data are not affected by complex external scenes, so they are more robust and reliable. In this article, we introduce the existing application fields and research progress...

10.1109/tiv.2023.3268051 article EN IEEE Transactions on Intelligent Vehicles 2023-04-18

Cooperative perception is challenging for safety-critical autonomous driving applications. Errors in the shared position and pose cause an inaccurate relative transform estimation and disrupt the robust mapping of the ego vehicle. We propose a distributed object-level cooperative perception system called OptiMatch, in which detected 3D bounding boxes and local state information are shared between connected vehicles. To correct the noisy relative transform, the local measurements of both vehicles (bounding boxes) are utilized, and an optimal transport...

10.1109/iv55152.2023.10186727 article EN 2022 IEEE Intelligent Vehicles Symposium (IV) 2023-06-04
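The object-level association step described above — pairing up bounding boxes that two connected vehicles detected for the same physical object — can be illustrated with a toy greedy matcher on box centres. OptiMatch itself uses an optimal-transport formulation; this greedy nearest-neighbour version with a distance gate is only a simplified stand-in for the association idea, and all names and the 2 m gate are hypothetical.

```python
# Toy object-level association between two vehicles' detections: greedily
# pair 3D box centres by Euclidean distance under a gating threshold.
# Simplified illustration only; the paper solves this via optimal transport.

import math

def associate(centres_a, centres_b, gate=2.0):
    """Return (i, j) index pairs matching detections in A to detections in B.

    A pair is accepted only if the centre distance is below `gate` metres,
    so far-apart detections remain unmatched.
    """
    pairs = []
    used_b = set()
    for i, a in enumerate(centres_a):
        best_j, best_d = None, gate
        for j, b in enumerate(centres_b):
            if j in used_b:
                continue
            d = math.dist(a, b)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            used_b.add(best_j)
    return pairs

# Only the first ego detection finds a partner within the 2 m gate.
matches = associate([(0.0, 0.0), (10.0, 0.0)], [(0.3, 0.1), (25.0, 5.0)])
```

With such matched pairs in hand, the residual offset between the paired centres is exactly the signal a system like OptiMatch can exploit to correct a noisy relative transform.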

Motion prediction is essential for safe and efficient autonomous driving. However, the inexplicability and uncertainty of complex artificial intelligence models may lead to unpredictable failures of the motion prediction module, which can mislead the system into making unsafe decisions. Therefore, it is necessary to develop methods that guarantee reliable autonomous driving, where failure detection is a potential direction. Uncertainty estimates can be used to quantify the degree of confidence a model has in its predictions and are valuable for failure detection. We propose a framework...

10.1109/icra48891.2023.10160596 article EN 2023-05-29
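The link between uncertainty estimation and failure detection can be made concrete with a minimal sketch: if an ensemble of predictors disagrees strongly on the same quantity, the prediction is flagged as unreliable. This is a generic illustration of the principle, not the paper's framework; the threshold and names are hypothetical, and real systems calibrate such thresholds on held-out failure data.

```python
# Minimal sketch: ensemble disagreement as an uncertainty signal for
# failure detection. Purely illustrative; the threshold is hypothetical.

from statistics import pstdev

def is_unreliable(ensemble_predictions, threshold=1.0):
    """Flag a predicted scalar (e.g., a future x-coordinate in metres) whose
    spread across ensemble members exceeds `threshold`."""
    return pstdev(ensemble_predictions) > threshold

# Three ensemble members agreeing closely -> reliable; wildly
# disagreeing members -> flag the prediction as a potential failure.
low_spread = is_unreliable([5.1, 5.0, 5.2])    # False
high_spread = is_unreliable([5.1, 9.0, 1.2])   # True
```

A downstream planner could then fall back to a conservative behaviour (e.g., larger safety margins) whenever the flag is raised, which is the operational value of detecting prediction failures online.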

In the field of autonomous driving, relying solely on the environmental information captured by a single vehicle's LiDAR often falls short of perception requirements in complex scenarios. Adopting a vehicle-infrastructure cooperation approach can effectively address these limitations. Therefore, the development of an algorithm that registers heterogeneous point clouds is crucial. This paper proposes a cooperative point cloud registration network (TEA-IHPCR) based on temporal augmentation and rotation invariance. By capturing...

10.1109/tim.2024.3522410 article EN IEEE Transactions on Instrumentation and Measurement 2025-01-01

10.1109/tcsvt.2025.3541313 article EN IEEE Transactions on Circuits and Systems for Video Technology 2025-01-01

While most recent autonomous driving systems focus on developing perception methods for ego-vehicle sensors, people tend to overlook an alternative approach: leveraging intelligent roadside cameras to extend the perception ability beyond the visual range. We discover that state-of-the-art vision-centric detection methods perform poorly on roadside cameras. This is because these methods mainly focus on recovering depth with regard to the camera center, where the depth difference between the car and the ground quickly shrinks while the distance increases. In this paper, we...

10.1109/tpami.2025.3549711 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2025-01-01

4D radar has higher point cloud density and more precise vertical resolution than conventional 3D radar, making it promising for adverse scenarios in the environmental perception of autonomous driving. However, 4D radar is noisier than LiDAR and requires different filtering strategies that affect the noise level. Comparative analyses of point cloud densities and noise levels are still lacking, mainly because available datasets use only one type of radar, making it difficult to compare radars in the same scenario. We introduce a novel large-scale multi-modal dataset...

10.1038/s41597-025-04698-2 article EN cc-by-nc-nd Scientific Data 2025-03-13

3D object detection is becoming indispensable for environmental perception in autonomous driving. Light detection and ranging (LiDAR) point clouds often fail to distinguish objects with similar structures and are quite sparse for distant or small objects, thereby introducing false and missed detections. To address these issues, LiDAR is fused with cameras owing to the rich textural information provided by images. However, current fusion methods suffer from inefficient data representation and inaccurate alignment of heterogeneous...

10.1109/tim.2022.3224525 article EN IEEE Transactions on Instrumentation and Measurement 2022-11-24

Cooperative perception is a promising technique for intelligent and connected vehicles through vehicle-to-everything (V2X) cooperation, provided that accurate pose information and relative transforms are available. Nevertheless, obtaining precise positioning often entails the high costs associated with navigation systems. Hence, it is necessary to calibrate poses for multi-agent cooperative perception. This letter proposes a simple but effective object association approach named context-based matching (...

10.1109/lra.2024.3374168 article EN IEEE Robotics and Automation Letters 2024-03-06