- Robotics and Sensor-Based Localization
- 3D Surveying and Cultural Heritage
- Advanced Vision and Imaging
- Advanced Optical Sensing Technologies
- Optical Measurement and Interference Techniques
- Advanced Neural Network Applications
- Inertial Sensor and Navigation
- Indoor and Outdoor Localization Technologies
- Video Surveillance and Tracking Methods
- Robotic Path Planning Algorithms
- Muscle Activation and Electromyography Studies
- Soft Robotics and Applications
- Infrared Target Detection Methodologies
- Remote Sensing and LiDAR Applications
- Blind Source Separation Techniques
- Electromagnetic Compatibility and Noise Suppression
- Advanced Sensor and Energy Harvesting Materials
- Satellite Image Processing and Photogrammetry
- Plant and Animal Studies
- Remote-Sensing Image Classification
- Electrostatic Discharge in Electronics
- ECG Monitoring and Analysis
- Face Recognition and Analysis
- Optical Imaging and Spectroscopy Techniques
- Hand Gesture Recognition Systems
Zhejiang University
2011-2025
Qiqihar University
2025
State Key Laboratory of Industrial Control Technology
2023
Tianjin University
2021-2022
Guangdong University of Technology
2013
Multi-sensor fusion of multi-modal measurements from commodity inertial, visual, and LiDAR sensors to provide robust and accurate 6DOF pose estimation holds great potential in robotics and beyond. In this paper, building upon our prior work (i.e., LIC-Fusion), we develop a sliding-window filter-based LiDAR-Inertial-Camera odometry with online spatiotemporal calibration (LIC-Fusion 2.0), which introduces a novel plane-feature tracking method for efficiently processing 3D point clouds. In particular, after motion...
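The plane features tracked this way ultimately serve as geometric constraints in the filter update. A minimal numpy sketch of a point-to-plane residual (a generic formulation with the plane stored as a world-frame normal and offset, not the authors' exact parameterization) looks like:

```python
import numpy as np

def point_to_plane_residual(p_lidar, R_WL, t_WL, n_W, d_W):
    """Signed distance of a LiDAR point to a tracked plane.

    p_lidar : (3,) point in the LiDAR frame
    R_WL, t_WL : LiDAR-to-world rotation (3x3) and translation (3,)
    n_W, d_W : plane unit normal and offset in the world frame (n·x + d = 0)
    """
    p_W = R_WL @ p_lidar + t_WL      # transform the point into the world frame
    return float(n_W @ p_W + d_W)    # residual driven toward zero by the filter update

# toy usage: a point 0.1 m off the z = 0 plane
r = point_to_plane_residual(np.array([1.0, 2.0, 0.1]),
                            np.eye(3), np.zeros(3),
                            np.array([0.0, 0.0, 1.0]), 0.0)
print(r)  # ~0.1
```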
Sensor calibration is the fundamental building block of a multi-sensor fusion system. This paper presents an accurate and repeatable LiDAR-IMU calibration method (termed LI-Calib) to calibrate the 6-DOF extrinsic transformation between a 3D LiDAR and an Inertial Measurement Unit (IMU). Given the high data capture rate of IMU sensors, LI-Calib adopts a continuous-time trajectory formulation based on B-splines, which is more suitable for fusing high-rate or asynchronous measurements than discrete-time approaches. Additionally, it decomposes...
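The continuous-time trajectory can be queried at any LiDAR or IMU timestamp. As a minimal sketch, assuming a uniform cubic B-spline and interpolating translation only (the actual method also interpolates rotation on SO(3)):

```python
import numpy as np

# Uniform cubic B-spline basis in matrix form; 4 control points influence each segment.
M = (1.0 / 6.0) * np.array([[1, 4, 1, 0],
                            [-3, 0, 3, 0],
                            [3, -6, 3, 0],
                            [-1, 3, -3, 1]])

def spline_position(ctrl_pts, u):
    """Query a position on one uniform cubic B-spline segment.

    ctrl_pts : (4, 3) consecutive control points
    u        : normalized time in [0, 1) within the segment
    """
    b = np.array([1.0, u, u**2, u**3]) @ M   # blending weights (sum to 1)
    return b @ ctrl_pts

pts = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 3, 0]], float)
print(spline_position(pts, 0.5))
```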
Accurate and reliable sensor calibration is essential to fuse LiDAR and inertial measurements, which are usually available in robotic applications. In this article, we propose a novel LiDAR-IMU calibration method within a continuous-time batch-optimization framework, where the intrinsics of both sensors and the spatial-temporal extrinsics between them are calibrated without using any infrastructure, such as fiducial tags. Compared with discrete-time approaches, the continuous-time formulation has natural advantages for fusing high-rate measurements from...
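Once the spatial-temporal extrinsics are treated as variables, predicting the LiDAR pose from the continuous-time IMU trajectory is a simple composition. A hedged sketch (toy trajectory and illustrative names, not the paper's code) is:

```python
import numpy as np

def lidar_pose(pose_imu_at, T_IL, t, time_offset):
    """Predict the LiDAR pose from the IMU trajectory via the extrinsics being calibrated.

    pose_imu_at : callable t -> (R_WI, p_WI) sampling the continuous-time IMU trajectory
    T_IL        : (R_IL, p_IL), pose of the LiDAR frame expressed in the IMU frame
    time_offset : temporal extrinsic t_d, so a LiDAR stamp t samples the IMU trajectory at t + t_d
    """
    R_WI, p_WI = pose_imu_at(t + time_offset)
    R_IL, p_IL = T_IL
    return R_WI @ R_IL, R_WI @ p_IL + p_WI        # compose T_WL = T_WI * T_IL

pose_imu_at = lambda t: (np.eye(3), np.array([t, 0.0, 0.0]))   # toy IMU trajectory
R_WL, p_WL = lidar_pose(pose_imu_at, (np.eye(3), np.array([0.1, 0.0, 0.0])), 2.0, 0.01)
print(p_WL)   # [2.11, 0, 0]
```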
Localization and mapping with heterogeneous multi-sensor fusion have been prevalent in recent years. To adequately fuse multimodal sensor measurements received at different time instants and frequencies, we estimate the continuous-time trajectory by fixed-lag smoothing within a factor-graph optimization framework. With the continuous-time formulation, we can query poses at any time instant corresponding to the sensor measurements. To bound the computational complexity of the continuous-time fixed-lag smoother, we maintain temporal and keyframe sliding windows of constant size,...
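A toy sketch of the constant-size temporal window that bounds the smoother's cost (real systems marginalize factors rather than simply dropping states, and additionally keep a keyframe window) could be:

```python
from collections import deque

class FixedLagWindow:
    """Toy fixed-lag window: states older than the lag leave the active window."""
    def __init__(self, lag_seconds):
        self.lag = lag_seconds
        self.states = deque()          # (timestamp, state) pairs, oldest first

    def add_state(self, t, state):
        self.states.append((t, state))
        while self.states and self.states[0][0] < t - self.lag:
            self.states.popleft()      # stand-in for marginalizing the oldest state

    def query_times(self):
        return [t for t, _ in self.states]

w = FixedLagWindow(lag_seconds=2.0)
for t in range(6):
    w.add_state(float(t), {"pose": None})
print(w.query_times())   # only timestamps within the last 2 s remain
```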
Recently, approaches have been put forward that focus on the recognition of mesh semantic meanings. These methods usually need prior knowledge learned from a training dataset, but when the size of the dataset is small, or the meshes are too complex, segmentation performance is greatly affected. This paper introduces an approach to mesh segmentation and labeling which incorporates knowledge imparted by both segmented, labeled meshes and unsegmented, unlabeled meshes. A Conditional Random Field (CRF) based objective function...
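A CRF-based objective of this kind typically combines per-face unary terms with pairwise smoothness terms over adjacent faces; a generic form (not necessarily the paper's exact energy) is:

```latex
E(\mathbf{y}) \;=\; \sum_{i} \psi_u\!\left(y_i \mid x_i\right)
\;+\; \sum_{(i,j)\in\mathcal{E}} \psi_p\!\left(y_i, y_j \mid x_i, x_j\right),
```

where $y_i$ is the label of face $i$, $\psi_u$ scores labels from per-face features $x_i$, and $\psi_p$ penalizes inconsistent labels across adjacent faces in the mesh graph $\mathcal{E}$.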
The applications of myoelectrical interfaces are largely limited by the efficacy of decoding motion intent from the electromyographic (EMG) signal. Currently, EMG classification methods often rely substantially on handcrafted features or ignore key channel and inter-feature information for classification tasks. To address these issues, a multiscale feature extraction network (MSFEnet) based on channel-spatial attention is proposed to decode the EMG signal for the task of gesture recognition and classification. Specifically, we fuse...
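As an illustration of the general idea of channel-spatial attention over multichannel EMG features (a parameter-free numpy sketch, not the authors' MSFEnet architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(x):
    """Generic channel-spatial attention over an EMG feature map.

    x : (C, T) array — C channels, T time steps.
    Channel weights come from per-channel average activation, spatial weights
    from the cross-channel mean at each time step; a trained network would
    replace these pooling statistics with small learned MLPs/convolutions.
    """
    ch_w = sigmoid(x.mean(axis=1, keepdims=True))   # (C, 1) channel attention
    x = x * ch_w
    sp_w = sigmoid(x.mean(axis=0, keepdims=True))   # (1, T) spatial attention
    return x * sp_w

feat = np.random.randn(8, 200)                      # 8 EMG channels, 200 samples
print(channel_spatial_attention(feat).shape)        # (8, 200)
```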
In this paper, we propose a highly accurate continuous-time trajectory estimation framework dedicated to SLAM (Simultaneous Localization and Mapping) applications, which enables effective fusion of high-frequency and asynchronous sensor data. We apply the proposed framework to a 3D LiDAR-inertial system for evaluation. The method adopts non-rigid registration while simultaneously removing motion distortion from LiDAR scans. Additionally, a two-state correction efficiently tackles the computationally intractable global...
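Removing motion distortion with a continuous-time trajectory amounts to re-projecting each point with the pose at its own timestamp. A minimal sketch (toy constant-velocity trajectory, illustrative function names) is:

```python
import numpy as np

def deskew_scan(points, timestamps, pose_at):
    """Remove motion distortion by re-projecting every point with its own pose.

    points     : (N, 3) raw points in the sensor frame
    timestamps : (N,) per-point acquisition times
    pose_at    : callable t -> (R, t) sampling the continuous-time trajectory
    Each point is mapped into the world frame using the pose at its own
    timestamp, which a continuous-time trajectory makes cheap to query.
    """
    out = np.empty_like(points)
    for i, (p, ti) in enumerate(zip(points, timestamps)):
        R, t = pose_at(ti)
        out[i] = R @ p + t
    return out

# toy trajectory: constant velocity of 1 m/s along x, no rotation
pose_at = lambda t: (np.eye(3), np.array([t * 1.0, 0.0, 0.0]))
pts = np.zeros((3, 3))
print(deskew_scan(pts, np.array([0.0, 0.05, 0.1]), pose_at))
```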
In this paper, we propose an efficient continuous-time LiDAR-Inertial-Camera odometry, utilizing non-uniform B-splines to tightly couple measurements from the LiDAR, IMU, and camera. In contrast to uniform B-spline-based methods, our non-uniform B-spline approach offers significant advantages in terms of achieving real-time efficiency and high accuracy. This is accomplished by dynamically and adaptively placing control points, taking into account the varying dynamics of the motion. To enable the fusion of heterogeneous data within a...
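The key idea of non-uniform placement is to spend control points where the motion is aggressive. A toy knot-placement sketch driven by gyroscope magnitude (the thresholds and spacings are illustrative assumptions, not the paper's policy) is:

```python
import numpy as np

def place_knots(imu_times, gyro_norms, base_dt=0.1, fast_dt=0.02, thresh=1.0):
    """Toy non-uniform knot placement driven by motion dynamics.

    Denser control points are placed while the gyroscope magnitude exceeds the
    threshold (aggressive motion), sparser ones otherwise.
    """
    knots = [imu_times[0]]
    for t, w in zip(imu_times, gyro_norms):
        dt = fast_dt if w > thresh else base_dt
        if t - knots[-1] >= dt:
            knots.append(t)
    return np.array(knots)

times = np.arange(0.0, 1.0, 0.005)
gyro = np.where(times < 0.5, 0.2, 3.0)       # slow first half, aggressive second half
print(len(place_knots(times, gyro)))          # many more knots after t = 0.5
```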
In this letter, we propose a probabilistic continuous-time visual-inertial odometry (VIO) for rolling-shutter cameras. The continuous-time trajectory formulation naturally facilitates the fusion of asynchronized high-frequency IMU data and motion-distorted rolling-shutter images. To prevent an intractable computation load, the proposed VIO is sliding-window and keyframe-based. We propose to probabilistically marginalize control points to keep a constant number of keyframes in the sliding window. Furthermore, the line exposure time difference (line delay)...
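Line delay gives every image row its own capture time, at which the continuous-time trajectory can then be sampled. A minimal sketch with illustrative numbers is:

```python
import numpy as np

def row_timestamps(t_start, num_rows, line_delay):
    """Per-row capture times of a rolling-shutter image.

    Each row is exposed line_delay seconds after the previous one, so a
    continuous-time trajectory can supply a distinct camera pose per row.
    line_delay itself is a calibration parameter (estimated online in the paper).
    """
    return t_start + np.arange(num_rows) * line_delay

ts = row_timestamps(t_start=10.0, num_rows=480, line_delay=3e-5)
print(ts[0], ts[-1])   # first vs. last row differ by roughly 14 ms
```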
3D LiDAR-based single object tracking (SOT) has gained increasing attention as it plays a crucial role in applications such as autonomous driving. The central problem is how to learn a target-aware representation from the sparse and incomplete point clouds. In this paper, we propose a novel Correlation Pyramid Network (CorpNet) with a unified encoder and a motion-factorized decoder. Specifically, the encoder introduces multi-level self attentions and cross attentions in its main branch to enrich the template and search region features and realize...
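As a generic illustration of how cross attention injects template cues into the search-region branch (scaled dot-product attention without learned projections, not CorpNet's exact layers):

```python
import numpy as np

def cross_attention(search_feat, template_feat):
    """Single-head cross attention: search-region queries attend to template keys/values.

    search_feat   : (Ns, d) features of the search region
    template_feat : (Nt, d) features of the template
    """
    d = search_feat.shape[1]
    scores = search_feat @ template_feat.T / np.sqrt(d)        # (Ns, Nt)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)              # row-wise softmax
    return search_feat + weights @ template_feat               # residual fusion

out = cross_attention(np.random.randn(128, 32), np.random.randn(64, 32))
print(out.shape)   # (128, 32)
```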
This paper proposes a real-time, versatile Simultaneous Localization and Mapping (SLAM) and object localization system, which fuses measurements from a LiDAR, a camera, an Inertial Measurement Unit (IMU), and a Global Positioning System (GPS). Our system can locate itself in an unknown environment and build a scene map, based on which we also track and obtain the global location of objects of interest. Precisely, our SLAM subsystem consists of the following four parts: LiDAR-inertial odometry, visual-inertial odometry, GPS-inertial fusion, and pose graph...
We present a real-time LiDAR-Inertial-Camera SLAM system with 3D Gaussian Splatting as the mapping backend. Leveraging robust pose estimates from our LiDAR-Inertial-Camera odometry, Coco-LIC, an incremental photo-realistic mapping method is proposed in this paper. We initialize 3D Gaussians from colorized LiDAR points and optimize them using differentiable rendering powered by 3D Gaussian Splatting. Meticulously designed strategies are employed to incrementally expand the map and adaptively control its density, ensuring high-quality mapping capability. Experiments...
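Seeding Gaussians from colorized LiDAR points is conceptually simple. A generic initialization sketch (the nearest-neighbor scale heuristic and fixed starting opacity are common 3DGS defaults, not necessarily the paper's scheme) is:

```python
import numpy as np

def init_gaussians(points_world, colors, k=4):
    """Seed one isotropic 3D Gaussian per colorized LiDAR point.

    points_world : (N, 3) points already colorized by projection into the camera
    colors       : (N, 3) RGB values in [0, 1]
    The initial scale is the distance to the k-th nearest neighbor.
    """
    d = np.linalg.norm(points_world[:, None] - points_world[None, :], axis=-1)
    knn = np.sort(d, axis=1)[:, min(k, len(points_world) - 1)]
    return {
        "means": points_world,
        "colors": colors,
        "scales": np.repeat(knn[:, None], 3, axis=1),          # isotropic initial covariance
        "opacities": np.full((len(points_world), 1), 0.1),
    }

pts = np.random.rand(100, 3)
cols = np.random.rand(100, 3)
g = init_gaussians(pts, cols)
print(g["means"].shape, g["scales"].shape)
```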
In recent years, with breakthroughs in sensor technology, SLAM has been developing towards high-speed and dynamic applications, in which the rotating multi-line LiDAR plays an important role. However, such sensors must reconstruct environmental data while in motion. Our work proposes a correction method based on IMU hardware synchronization, making the LiDAR and IMU a synchronized unit. The system can still output correct point cloud information when moving violently.
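With hardware-synchronized clocks, a per-point correction can be driven by gyro integration alone. A rotation-only sketch (translation ignored, small-angle approximation, illustrative names) is:

```python
import numpy as np

def derotate_points(points, point_times, gyro_times, gyro_rates, t_ref):
    """Gyro-only de-rotation of one LiDAR sweep, assuming hardware-synced clocks.

    Integrates the IMU angular rate from each point's timestamp to the sweep
    reference time t_ref and rotates the point accordingly.
    """
    out = np.empty_like(points)
    for i, (p, tp) in enumerate(zip(points, point_times)):
        mask = (gyro_times >= tp) & (gyro_times < t_ref)
        if not mask.any():
            out[i] = p
            continue
        dt = np.diff(np.append(gyro_times[mask], t_ref))
        theta = (gyro_rates[mask] * dt[:, None]).sum(axis=0)   # integrated rotation vector
        wx, wy, wz = theta
        K = np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
        out[i] = (np.eye(3) + K) @ p                            # first-order rotation
    return out

pts = np.array([[1.0, 0.0, 0.0]])
print(derotate_points(pts, np.array([0.0]),
                      np.linspace(0.0, 0.1, 11), np.tile([0, 0, 1.0], (11, 1)), 0.1))
```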
The electromagnetic interference (EMI) effects in the μA741 operational amplifier circuit are investigated in this paper. Using pulse injection, the output characteristics are simulated with EMI effects taken into account. Through sensitivity and smoke analysis, the simulation results show that the differential input stage and the intermediate amplifying stage are the functional modules of the circuit most sensitive and vulnerable to EMI.
Event cameras have garnered considerable attention due to their advantages over traditional cameras in low power consumption, high dynamic range, and absence of motion blur. This paper proposes a monocular event-inertial odometry incorporating an adaptive decay kernel-based time surface with polarity-aware tracking. We utilize an adaptive decay-based Time Surface to extract texture information from asynchronous events, which adapts to the characteristics of the event stream and enhances the representation of environmental textures....
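A time surface assigns each pixel an exponentially decayed score of its latest event. A sketch with a rate-adaptive decay constant and polarity kept as a separate channel (the rate-to-tau mapping is an illustrative choice, not the paper's kernel) is:

```python
import numpy as np

def time_surface(last_event_time, t_now, event_rate, polarity_map, k=2000.0):
    """Exponential-decay time surface with a rate-adaptive time constant.

    last_event_time : (H, W) timestamp of the most recent event per pixel
    polarity_map    : (H, W) +1 / -1 polarity of that event, returned as a
                      separate channel in the spirit of polarity-aware tracking
    The decay constant tau shrinks when the event rate is high, so fast motion
    yields sharper surfaces.
    """
    tau = k / max(event_rate, 1e-6)                      # adaptive decay constant
    decay = np.exp(-(t_now - last_event_time) / tau)
    return np.stack([decay, polarity_map.astype(float)], axis=0)

H, W = 4, 4
last_t = np.random.uniform(0.0, 1.0, (H, W))
pol = np.random.choice([-1, 1], (H, W))
print(time_surface(last_t, 1.0, event_rate=1e5, polarity_map=pol).shape)  # (2, 4, 4)
```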
This paper proposes a LiDAR-Inertial SLAM system with efficiently extracted planes, which couples explicit planes in the odometry to improve accuracy and in the mapping to improve consistency. The proposed method consists of three parts: an efficient Point→Line→Plane extraction algorithm, a LiDAR-Inertial-Plane tightly coupled odometry, and a global plane-aided mapping module....
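The final plane-fitting step of such a point-to-line-to-plane pipeline is typically a PCA fit with a planarity check. A minimal sketch (the grouping of points into scan-line segments is omitted) is:

```python
import numpy as np

def fit_plane(points, planarity_thresh=0.01):
    """PCA plane fit with a planarity check.

    Returns (normal, d, is_plane): the plane n·x + d = 0 and whether the point
    set is planar enough (smallest-to-largest eigenvalue ratio below a threshold).
    """
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    normal = eigvecs[:, 0]                          # direction of least variance
    d = -normal @ centroid
    is_plane = eigvals[0] / max(eigvals[2], 1e-12) < planarity_thresh
    return normal, d, is_plane

pts = np.random.rand(50, 3)
pts[:, 2] = 0.0                                     # points lying on the z = 0 plane
n, d, ok = fit_plane(pts)
print(n, d, ok)                                     # normal ≈ ±[0, 0, 1], ok == True
```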