Delei Kong

ORCID: 0000-0002-5681-587X
Research Areas
  • Robotics and Sensor-Based Localization
  • Advanced Memory and Neural Computing
  • Advanced Image and Video Retrieval Techniques
  • CCD and CMOS Imaging Sensors
  • Indoor and Outdoor Localization Technologies
  • Digital Image Processing Techniques
  • Neural Networks and Reservoir Computing
  • Advanced Vision and Imaging
  • Distributed Control Multi-Agent Systems
  • Modular Robots and Swarm Intelligence
  • UAV Applications and Optimization
  • Advanced Materials and Mechanics
  • Video Surveillance and Tracking Methods
  • Cell Image Analysis Techniques
  • Robotic Path Planning Algorithms
  • Model Reduction and Neural Networks
  • Advanced Neural Network Applications
  • Simulation Techniques and Applications
  • Infrared Target Detection Methodologies
  • Visual Attention and Saliency Detection
  • Parallel Computing and Optimization Techniques
  • Neural Dynamics and Brain Function

Hunan University
2024-2025

Northeastern University
2022-2024

Event cameras offer promising properties, such as high temporal resolution and high dynamic range. These benefits have been exploited in many machine vision tasks, especially optical flow estimation. Currently, most existing event-based works use deep learning to estimate optical flow. However, their networks do not fully exploit prior hidden states and motion flows. Additionally, their supervision strategy has not fully leveraged the geometric constraints of event data to unlock the potential of the networks. In this paper, we propose...

10.1109/tim.2024.3365160 article EN IEEE Transactions on Instrumentation and Measurement 2024-01-01

10.1109/tase.2025.3547338 article EN IEEE Transactions on Automation Science and Engineering 2025-01-01

Traditional visual place recognition (VPR), usually using standard cameras, can easily fail due to glare or high-speed motion. By contrast, event cameras have the advantages of low latency, high temporal resolution, and high dynamic range, which can address the above issues. Nevertheless, event cameras are prone to failure in motionless scenes, while standard cameras can still provide appearance information in this case. Thus, exploiting their complementarity can effectively improve the performance of VPR algorithms. In this paper, we propose FE-Fusion-VPR, an...

10.1109/lra.2023.3268850 article EN IEEE Robotics and Automation Letters 2023-04-20

Traditional visual place recognition (VPR) methods generally use frame-based cameras, which easily fail under rapid illumination changes or fast motion. To overcome this, we propose an end-to-end VPR network using event cameras that can achieve good performance in challenging environments (e.g., large-scale driving scenes). The key idea of the proposed algorithm is to first characterize the event streams with the EST voxel grid representation, then extract features with a deep residual network, and, finally, aggregate...

10.1109/tim.2022.3168892 article EN IEEE Transactions on Instrumentation and Measurement 2022-01-01
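The abstract above mentions characterizing event streams with a voxel grid representation before feature extraction. As a hedged illustration only, here is a minimal sketch of the general time-binned voxel-grid idea with bilinear temporal weighting, not the paper's exact EST formulation; the function name and parameters are assumptions.

```python
import numpy as np

def events_to_voxel_grid(x, y, t, p, num_bins, height, width):
    """Accumulate events (x, y, t, polarity) into a time-binned voxel grid.

    Polarity is expected in {-1, +1}; timestamps are normalized onto the
    bin axis, and each event's contribution is split bilinearly between
    its two nearest temporal bins.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = t.astype(np.float64)
    # Normalize timestamps to [0, num_bins - 1].
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    t0 = np.floor(t_norm).astype(int)
    frac = t_norm - t0
    # Split each event between bins t0 and t0 + 1.
    for bin_idx, w in ((t0, 1.0 - frac),
                       (np.clip(t0 + 1, 0, num_bins - 1), frac)):
        np.add.at(grid, (bin_idx, y, x), w * p)
    return grid

# Toy stream: four events on a 2x2 sensor over 1 ms.
x = np.array([0, 1, 0, 1]); y = np.array([0, 0, 1, 1])
t = np.array([0.0, 0.3, 0.7, 1.0]); p = np.array([1, -1, 1, 1])
vox = events_to_voxel_grid(x, y, t, p, num_bins=3, height=2, width=2)
print(vox.shape)  # (3, 2, 2)
```

Each event contributes a total weight of one (times its polarity), so the grid preserves the net polarity count of the stream.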

Traditional visual navigation methods for micro aerial vehicles (MAVs) usually calculate a passable path that satisfies the constraints, depending on a prior map. However, these methods have issues such as a high demand for computing resources and poor robustness in the face of unfamiliar environments. Aiming to solve the above problems, we propose a neuromorphic reinforcement learning method (Neuro-Planner) that combines a spiking neural network (SNN) with deep reinforcement learning (DRL) to realize MAV 3D visual navigation with a depth camera. Specifically, we design an actor...

10.1109/tvt.2023.3278097 article EN IEEE Transactions on Vehicular Technology 2023-09-25
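The Neuro-Planner abstract above combines an SNN with DRL. As a generic, textbook-style sketch of the basic building block such networks use (not the paper's actual actor network; all names and constants are assumptions), here is a discrete-time leaky integrate-and-fire (LIF) neuron update:

```python
import numpy as np

def lif_step(v, input_current, v_th=1.0, tau=2.0, v_reset=0.0):
    """One discrete-time leaky integrate-and-fire update.

    The membrane potential v leaks toward v_reset, integrates the input
    current, and emits a binary spike wherever it crosses v_th.
    """
    v = v + (input_current - (v - v_reset)) / tau
    spikes = (v >= v_th).astype(np.float32)
    v = np.where(spikes > 0, v_reset, v)  # hard reset after a spike
    return v, spikes

# Drive two neurons with constant currents: the weak input never reaches
# threshold, while the strong input makes the neuron spike periodically.
v = np.zeros(2)
for step in range(10):
    v, s = lif_step(v, input_current=np.array([0.5, 1.5]))
print(s)
```

The binary spike trains produced this way are what the downstream layers of an SNN policy consume, in place of the continuous activations of a standard ANN.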

Event cameras have been successfully applied to visual place recognition (VPR) tasks by using deep artificial neural networks (ANNs) in recent years. However, previously proposed ANN architectures are often unable to harness the abundant temporal information present in event streams. In contrast, spiking neural networks (SNNs) exhibit more intricate spatiotemporal dynamics and are inherently well-suited to processing sparse, asynchronous event streams. Unfortunately, directly inputting temporal-dense event volumes into the spiking network introduces excessive...

10.48550/arxiv.2402.10476 preprint EN arXiv (Cornell University) 2024-02-16

The event camera is a new type of visual sensor inspired by the biological retina. It can efficiently capture brightness changes in a scene (called events) in real time and output a sparse, asynchronous event stream with microsecond resolution. Moreover, it has advantages such as low latency, low bandwidth, high speed, and high dynamic range (HDR). In this article, we propose a novel event-camera-based target detection and distance estimation method. First, we use a DVS to obtain a stable and clear cumulative image; then, because...

10.1109/iccc54389.2021.9674426 article EN 2021 7th International Conference on Computer and Communications (ICCC) 2021-12-10
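A cumulative image like the one mentioned in the abstract above can be formed by counting events per pixel. This is a minimal sketch under that assumption; the thresholding step is an illustrative simplification, not the paper's detection method, and all names are hypothetical.

```python
import numpy as np

def accumulate_events(x, y, height, width):
    """Build a cumulative event-count image from event pixel coordinates.

    Pixels where many brightness changes occurred (e.g. a moving target
    against a static background) accumulate high counts.
    """
    img = np.zeros((height, width), dtype=np.int32)
    np.add.at(img, (y, x), 1)  # unbuffered add handles repeated pixels
    return img

# Toy example: a "target" firing repeatedly at pixel (row 1, col 2),
# plus one stray background event at (0, 0).
x = np.array([2, 2, 2, 0]); y = np.array([1, 1, 1, 0])
img = accumulate_events(x, y, height=3, width=4)
mask = img >= 2  # naive threshold to localize the target
print(img[1, 2], mask[1, 2])  # 3 True
```

Note that `np.add.at` is used instead of `img[y, x] += 1`, because fancy-indexed in-place addition would count repeated coordinates only once.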

Event cameras have the potential to revolutionize the field of robot vision, particularly in areas like stereo disparity estimation, owing to their high temporal resolution and high dynamic range. Many studies use deep learning for event camera disparity estimation. However, these methods fail to fully exploit the information in the event stream to acquire clear event representations. Additionally, there is room for further reduction of pixel shifts in the feature maps before constructing the cost volume. In this paper, we propose EV-MGDispNet, a novel...

10.48550/arxiv.2408.05452 preprint EN arXiv (Cornell University) 2024-08-10
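For readers unfamiliar with the cost volume mentioned in the abstract above: stereo matching commonly scores each candidate disparity by comparing left features with horizontally shifted right features. Below is a generic L1 cost-volume sketch, not EV-MGDispNet's actual construction; function names and shapes are assumptions.

```python
import numpy as np

def build_cost_volume(feat_left, feat_right, max_disp):
    """L1 matching cost between left features and right features
    shifted by each candidate disparity d in [0, max_disp).

    feat_* have shape (channels, height, width); the result has shape
    (max_disp, height, width), one cost map per disparity hypothesis.
    """
    c, h, w = feat_left.shape
    cost = np.zeros((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        shifted = np.zeros_like(feat_right)
        shifted[:, :, d:] = feat_right[:, :, : w - d] if d > 0 else feat_right
        cost[d] = np.abs(feat_left - shifted).sum(axis=0)
    return cost

# Identical left/right features give zero cost at disparity 0.
feat = np.random.rand(8, 4, 6).astype(np.float32)
cost = build_cost_volume(feat, feat, max_disp=3)
print(cost.shape)  # (3, 4, 6)
```

A disparity estimate can then be read off per pixel as the argmin (or a soft-argmin) over the first axis.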

Sparse and asynchronous sensing and processing in natural organisms lead to ultra-low-latency, energy-efficient perception. Event cameras, known as neuromorphic vision sensors, are designed to mimic these characteristics. However, fully utilizing the sparse event stream remains challenging. Influenced by the mature algorithms of standard cameras, most existing event-based methods still rely on the "group of events" paradigm (e.g., frames, 3D voxels) when handling event streams. This paradigm encounters issues such as feature loss, stacking,...

10.48550/arxiv.2410.10601 preprint EN arXiv (Cornell University) 2024-10-14

Recovering the camera motion and scene geometry from visual data is a fundamental problem in the field of computer vision. Its success in standard vision is attributed to the maturity of feature extraction, data association, and multi-view geometry. The recent emergence of neuromorphic event-based cameras places great demands on approaches that use raw event data as input to solve this problem. Existing state-of-the-art solutions typically infer it implicitly by iteratively reversing the event data generation process. However, the nonlinear nature...

10.48550/arxiv.2407.12239 preprint EN arXiv (Cornell University) 2024-07-16

Traditional visual place recognition (VPR) methods generally use frame-based cameras, which easily fail due to dramatic illumination changes or fast motion. In this paper, we propose an end-to-end VPR network using event cameras that can achieve good performance in challenging environments. The key idea of the proposed algorithm is to first characterize the event streams with an EST voxel grid, then extract features using a deep convolutional network, and finally aggregate the features with an improved VLAD to realize VPR with event streams. To verify the effectiveness...

10.48550/arxiv.2011.03290 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Traditional visual navigation methods for micro aerial vehicles (MAVs) usually calculate a passable path that satisfies the constraints, depending on a prior map. However, these methods have issues such as a high demand for computing resources and poor robustness in the face of unfamiliar environments. Aiming to solve the above problems, we propose a neuromorphic reinforcement learning method (Neuro-Planner) that combines a spiking neural network (SNN) with deep reinforcement learning (DRL) to realize MAV 3D visual navigation with a depth camera. Specifically, we design an actor...

10.48550/arxiv.2210.02305 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Event cameras offer promising properties, such as high temporal resolution and high dynamic range. These benefits have been exploited in many machine vision tasks, especially optical flow estimation. Currently, most existing event-based works use deep learning to estimate optical flow. However, their networks do not fully exploit prior hidden states and motion flows. Additionally, their supervision strategy has not fully leveraged the geometric constraints of event data to unlock the potential of the networks. In this paper, we propose...

10.48550/arxiv.2305.07853 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Traditional visual place recognition (VPR), usually using standard cameras, can easily fail due to glare or high-speed motion. By contrast, event cameras have the advantages of low latency, high temporal resolution, and high dynamic range, which can address the above issues. Nevertheless, event cameras are prone to failure in weakly textured, motionless scenes, while standard cameras can still provide appearance information in this case. Thus, exploiting their complementarity can effectively improve the performance of VPR algorithms. In this paper, we propose...

10.48550/arxiv.2211.12244 preprint EN other-oa arXiv (Cornell University) 2022-01-01