Laurent Kneip

ORCID: 0000-0001-6727-6608
Research Areas
  • Robotics and Sensor-Based Localization
  • Advanced Vision and Imaging
  • Optical measurement and interference techniques
  • 3D Surveying and Cultural Heritage
  • Advanced Image and Video Retrieval Techniques
  • Advanced Memory and Neural Computing
  • Robotic Path Planning Algorithms
  • 3D Shape Modeling and Analysis
  • Age of Information Optimization
  • Image and Object Detection Techniques
  • Indoor and Outdoor Localization Technologies
  • Electrical and Bioimpedance Tomography
  • Machine Learning and Algorithms
  • Modular Robots and Swarm Intelligence
  • Robotic Mechanisms and Dynamics
  • Human Pose and Action Recognition
  • Target Tracking and Data Fusion in Sensor Networks
  • Advanced Data Storage Technologies
  • Inertial Sensor and Navigation
  • Distributed Control Multi-Agent Systems
  • Advanced Numerical Analysis Techniques
  • Radiation Effects in Electronics
  • Neural Networks and Reservoir Computing
  • Advanced Neural Network Applications
  • Molecular Communication and Nanonetworks

ShanghaiTech University
2017-2024

Intelligent Health (United Kingdom)
2021-2023

Shanghai Institute of Microsystem and Information Technology
2020

PATH To Reading
2019

Australian National University
2013-2017

Australian Centre for Robotic Vision
2015-2016

Data61
2016

ETH Zurich
2009-2014

Friedrich-Alexander-Universität Erlangen-Nürnberg
2008

The Perspective-Three-Point (P3P) problem aims at determining the position and orientation of the camera in the world reference frame from three 2D-3D point correspondences. It is known to provide up to four solutions, which can then be disambiguated using a fourth point. All existing approaches first solve for the position of the points in the camera reference frame, and then compute the transformation that aligns the two point sets. In contrast, in this paper we propose a novel closed-form solution to the P3P problem, which computes the aligning transformation directly in a single stage, without the intermediate...

10.1109/cvpr.2011.5995464 article EN 2011-06-01
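The disambiguation step mentioned in the abstract — selecting among up to four P3P candidates using a fourth correspondence — can be sketched as follows. This is a minimal illustrative sketch, not the paper's solver itself: it assumes candidate poses (R, t) mapping world to camera coordinates, and an observation given in normalized image coordinates.

```python
import numpy as np

def reprojection_error(R, t, X, x_obs):
    """Project world point X with pose (R, t) (world -> camera) and
    return the distance to the observed normalized image point x_obs."""
    Xc = R @ X + t
    x_proj = Xc[:2] / Xc[2]
    return np.linalg.norm(x_proj - x_obs)

def disambiguate_p3p(solutions, X4, x4):
    """Pick the P3P candidate (R, t) that best explains a fourth
    2D-3D correspondence; `solutions` is a list of (R, t) tuples."""
    return min(solutions,
               key=lambda Rt: reprojection_error(Rt[0], Rt[1], X4, x4))
```

In practice the fourth point would be one of the remaining correspondences inside a RANSAC loop, so the disambiguation comes essentially for free.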

Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they are further realized at small scale, they can also be used in narrow outdoor and indoor environments, where they represent only a limited risk for people. However, for such operations, navigating based on global positioning system (GPS) information alone is not sufficient. Fully autonomous operation in cities or other dense environments requires flying at low altitudes, where GPS signals...

10.1109/mra.2014.2322295 article EN IEEE Robotics & Automation Magazine 2014-08-20

The recent technological advances in Micro Aerial Vehicles (MAVs) have triggered great interest in the robotics community, as their deployability in missions of surveillance and reconnaissance has now become a realistic prospect. The state of the art, however, still lacks solutions that can work for long durations in large, unknown, GPS-denied environments. Here, we present our visual pipeline and MAV state-estimation framework, which uses feeds from a monocular camera and an Inertial Measurement Unit (IMU) to achieve...

10.1002/rob.21466 article EN Journal of Field Robotics 2013-08-06

This paper presents a framework for collaborative localization and mapping with multiple Micro Aerial Vehicles (MAVs) in unknown environments. Each MAV estimates its motion individually using an onboard, monocular visual odometry algorithm. The system of MAVs acts as a distributed preprocessor that streams only features of selected keyframes and relative-pose estimates to a centralized ground station. The ground station creates an individual map for each MAV and merges them together whenever it detects overlaps. This allows the MAVs to express their...

10.1109/iros.2013.6696923 article EN 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems 2013-11-01

The increasing demand for real-time, high-precision Visual Odometry systems as part of navigation and localization tasks has recently been driving research towards more versatile and scalable solutions. In this paper, we present a novel framework combining the merits of inertial and visual data from a monocular camera to accumulate estimates of local motion incrementally and reliably reconstruct the trajectory traversed. We demonstrate the robustness and efficiency of our methodology in a scenario with challenging dynamics,...

10.5244/c.25.16 article EN 2011-01-01

OpenGV is a new C++ library for calibrated real-time 3D geometric vision. It unifies both central and non-central absolute and relative camera pose computation algorithms within a single library. Each problem type comes with minimal and non-minimal closed-form solvers, as well as non-linear iterative optimization and robust sample consensus methods. OpenGV therefore contains an unprecedented level of completeness with regard to calibrated geometric vision algorithms, and it is the first library with a dedicated focus on unified real-time usage of multi-camera...

10.1109/icra.2014.6906582 article EN 2014-05-01
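The abstract mentions combining minimal closed-form solvers with robust sample consensus methods. The pattern can be illustrated with a generic RANSAC loop around a pluggable minimal solver — a hedged sketch of the general idea only, not OpenGV's actual interfaces (its C++ adapters and solver names differ), demonstrated here on a toy line-fitting problem:

```python
import numpy as np

def ransac(data, minimal_solver, residual_fn, sample_size,
           threshold, iters=200, rng=None):
    """Generic RANSAC: repeatedly fit a model from a minimal sample and
    keep the hypothesis with the largest inlier set."""
    rng = np.random.default_rng(rng)
    best_model = None
    best_inliers = np.zeros(len(data), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(data), size=sample_size, replace=False)
        model = minimal_solver([data[i] for i in sample])
        if model is None:  # degenerate minimal sample
            continue
        inliers = np.array([residual_fn(model, d) < threshold for d in data])
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

In a geometric-vision setting the minimal solver would be, e.g., a three-point absolute pose method, and the residual a reprojection error.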

Rolling Shutter (RS) cameras are used across a wide range of consumer electronic devices, from smart-phones to high-end cameras. It is well known that if an RS camera is used with a moving camera or scene, significant image distortions are introduced. The quality, or even the success, of structure from motion on rolling shutter images requires, besides the usual intrinsic parameters such as focal length and distortion coefficients, an accurate modelling of the shutter timing. The current state-of-the-art technique for calibrating the timings requires specialised...

10.1109/cvpr.2013.179 article EN 2013 IEEE Conference on Computer Vision and Pattern Recognition 2013-06-01

Event cameras have recently gained in popularity as they hold strong potential to complement regular cameras in situations of high dynamics or challenging illumination. An important problem that may benefit from the addition of an event camera is given by Simultaneous Localization And Mapping (SLAM). However, in order to ensure progress on event-inclusive multi-sensor SLAM, novel benchmark sequences are needed. Our contribution is the first complete set of datasets captured with a multi-sensor setup containing an event-based stereo...

10.1109/lra.2022.3186770 article EN IEEE Robotics and Automation Letters 2022-06-28

This paper presents a detailed characterization of the Hokuyo URG-04LX 2D laser range finder. While the sensor specifications only provide a rough estimation of the accuracy, the present work analyzes issues such as time drift effects and dependencies on distance, target properties (color, brightness and material) as well as incidence angle. Since the sensor is intended to be used for measurements in a tubelike environment by an inspection robot, the study is extended by investigating the influence of sensor orientation and the dependency on lighting conditions. The...

10.1109/robot.2009.5152579 article EN 2009-05-01

This paper reviews the classical problem of free-form curve registration and applies it to an efficient RGB-D visual odometry system called Canny-VO, which efficiently tracks all Canny edge features extracted from the images. Two replacements for the distance transformation commonly used in edge registration are proposed: approximate nearest neighbor fields and oriented nearest neighbor fields. 3D-2D edge alignment benefits from these alternative formulations in terms of both efficiency and accuracy. It removes the need for more computationally demanding paradigms...

10.1109/tro.2018.2875382 article EN IEEE Transactions on Robotics 2018-10-26
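The nearest-neighbor-field idea behind efficient 3D-2D edge alignment can be sketched with SciPy's Euclidean distance transform, which returns both the distance to and the coordinates of the closest edge pixel for every image location. This is a simplified sketch of the general concept, not the paper's approximate or oriented formulations:

```python
import numpy as np
from scipy import ndimage

def nearest_edge_field(edge_mask):
    """For every pixel, compute the distance to and the coordinates of
    the nearest edge pixel in a boolean edge mask. Reprojected 3D edge
    points can then be scored against this field in O(1) per point."""
    # distance_transform_edt measures distance to the nearest zero,
    # so feed the inverted mask (edge pixels become zeros).
    dist, indices = ndimage.distance_transform_edt(~edge_mask,
                                                   return_indices=True)
    return dist, indices  # indices[:, r, c] -> (row, col) of nearest edge
```

Precomputing such a field once per frame turns edge registration into cheap lookups, which is what makes this family of methods attractive for real-time odometry.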

This paper introduces two novel solutions to the generalized-camera exterior orientation problem, which has a vast number of potential applications in robotics: (i) a minimal solution requiring only three point correspondences, and (ii) gPnP, an efficient, non-iterative n-point solution with linear complexity in the number of points. Already existing minimal solutions require exhaustive algebraic derivations. In contrast, our solution is derived in a straightforward manner using the Gröbner basis method. Existing n-point solutions are mostly based on iterative optimization...

10.1109/icra.2013.6631107 article EN 2013-05-01

We present a novel solution to compute the relative pose of a generalized camera. Existing solutions are either not general, have too high computational complexity, or require too many correspondences, which impedes efficient and accurate usage within Ransac schemes. We factorize the problem as a low-dimensional, iterative optimization over relative rotation only, directly derived from well-known epipolar constraints. Generalized cameras often consist of camera clusters, and give rise to omni-directional landmark observations....

10.1109/cvpr.2014.64 article EN 2014 IEEE Conference on Computer Vision and Pattern Recognition 2014-06-01

Estimating the 6-DoF pose of a camera from a single image relative to a pre-computed 3D point-set is an important task for many computer vision applications. Perspective-n-Point (PnP) solvers are routinely used for camera pose estimation, provided that a good quality set of 2D-3D feature correspondences is known beforehand. However, finding the optimal correspondences between 2D key-points and a 3D point-set is non-trivial, especially when only geometric (position) information is known. Existing approaches to the simultaneous pose-and-correspondence problem use local...

10.1109/iccv.2017.10 article EN 2017-10-01

Finding the relative pose between two calibrated views ranks among the most fundamental geometric vision problems. It therefore appears as somewhat of a surprise that a globally optimal solver that minimizes a properly defined energy over non-minimal correspondence sets, and in the original space of transformations, has yet to be discovered. This, notably, is the contribution of the present paper. We formulate the problem as a Quadratically Constrained Quadratic Program (QCQP), which can be converted into a Semidefinite Program (SDP) using...

10.1109/cvpr.2018.00023 article EN 2018-06-01
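The QCQP-to-SDP conversion mentioned in the abstract follows the standard Shor relaxation pattern. As a generic sketch (the paper's concrete cost and constraint matrices for relative pose are more involved), a QCQP

\[
\min_{x}\; x^{\top} C x \quad \text{s.t.}\quad x^{\top} A_i x = b_i,\; i = 1,\dots,m,
\]

is lifted by substituting $X = x x^{\top}$, so that all terms become linear in $X$:

\[
\min_{X}\; \operatorname{tr}(C X) \quad \text{s.t.}\quad \operatorname{tr}(A_i X) = b_i,\; X \succeq 0,
\]

where the non-convex constraint $\operatorname{rank}(X) = 1$ has been dropped. When the relaxation is tight, the optimal $X^{\star}$ has rank one and the globally optimal $x^{\star}$ is recovered from its leading eigenvector.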

We present a novel real-time visual odometry framework for a stereo setup of a depth and a high-resolution event camera. Our framework balances accuracy and robustness against computational efficiency towards strong performance in challenging scenarios. We extend conventional edge-based semi-dense visual odometry towards time-surface maps obtained from event streams. Semi-dense depth maps are generated by warping the corresponding depth values of the extrinsically calibrated depth camera. The tracking module updates the camera pose through efficient, geometric 3D-2D edge alignment....

10.1109/icra46639.2022.9811805 article EN 2022 International Conference on Robotics and Automation (ICRA) 2022-05-23

Vision-based localization is a cost-effective and thus attractive solution for many intelligent mobile platforms. However, its accuracy and especially its robustness still suffer from low illumination conditions, illumination changes, and aggressive motion. Event-based cameras are bio-inspired visual sensors that perform well in HDR conditions and have high temporal resolution, and thus provide an interesting alternative in such challenging scenarios. While purely event-based solutions currently do not yet produce satisfying...

10.1109/tro.2024.3355370 article EN IEEE Transactions on Robotics 2024-01-01

This work makes use of a novel, recently proposed epipolar constraint for computing the relative pose between two calibrated images. By enforcing the coplanarity of epipolar plane normal vectors, it constrains the three rotational degrees of freedom between two camera views directly, independently of the translation. The present paper shows how the approach can be extended to n points, and translated into an efficient eigenvalue minimization over the three rotational degrees of freedom. Each iteration in the non-linear optimization has constant execution time,...

10.1109/iccv.2013.292 article EN 2013-12-01
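The objective behind the eigenvalue minimization can be sketched directly: for bearing vectors f1_i and f2_i and a candidate rotation R, the epipolar plane normals n_i = f1_i × (R f2_i) are coplanar at the true rotation, so the smallest eigenvalue of M(R) = Σ n_i n_iᵀ vanishes there. This is a plain evaluation of that idea, without the paper's efficient constant-time iteration scheme:

```python
import numpy as np

def smallest_eigenvalue_objective(R, f1, f2):
    """Rotation-only relative pose objective: f1, f2 are (N, 3) arrays of
    unit bearing vectors in the two views; returns the smallest eigenvalue
    of the sum of outer products of the epipolar plane normals."""
    n = np.cross(f1, (R @ f2.T).T)   # normals n_i = f1_i x (R f2_i)
    M = n.T @ n                      # equals sum_i n_i n_i^T
    return np.linalg.eigvalsh(M)[0]  # eigvalsh sorts ascending
```

Minimizing this scalar over the rotation manifold then recovers R independently of the translation, which can afterwards be read off the common normal direction.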

The vast majority of modern consumer-grade cameras employ a rolling shutter mechanism. In dynamic geometric computer vision applications such as visual SLAM, the so-called rolling shutter effect therefore needs to be properly taken into account. A dedicated relative pose solver appears to be the first problem to solve, as it is of eminent importance for bootstrapping any derivation of multi-view geometry. However, despite its significance, it has received inadequate attention to date. This paper presents a detailed investigation of the geometry...

10.1109/cvpr.2016.448 preprint EN 2016-06-01

Event cameras are bio-inspired sensors that perform well in challenging illumination conditions and have high temporal resolution. However, their concept is fundamentally different from that of traditional frame-based cameras. The pixels of an event camera operate independently and asynchronously. They measure changes of the logarithmic brightness and return them in the highly discretised form of time-stamped events indicating a relative change of a certain quantity since the last event. New models and algorithms are needed to process this...

10.1109/tpami.2021.3053243 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2021-01-01

We present a new solution to tracking and mapping with an event camera. The motion of the camera contains both rotation and translation displacements in the plane, and the events happen in an arbitrarily structured environment. As a result, the image matching may no longer be represented by a low-dimensional homographic warping, thus complicating the application of the commonly used Image of Warped Events (IWE). We introduce a new solution to this problem by performing contrast maximization in 3D. The 3D location of the rays cast for each event is smoothly varied as a function...

10.3390/s22155687 article EN cc-by Sensors 2022-07-29
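The contrast maximization idea that the paper lifts to 3D can be sketched in its basic 2D form: warp each event (x, y, t) by a candidate motion, accumulate the Image of Warped Events, and score the candidate by the image's variance, which peaks when events align along the scene edges that generated them. A minimal sketch under a constant-flow assumption:

```python
import numpy as np

def iwe_variance(events, v, shape):
    """Score a candidate 2D flow v = (vx, vy) for an (N, 3) array of
    events (x, y, t): warp events back to t = 0, accumulate the Image
    of Warped Events on an integer grid, and return its variance."""
    x = events[:, 0] - v[0] * events[:, 2]
    y = events[:, 1] - v[1] * events[:, 2]
    xi = np.clip(np.round(x).astype(int), 0, shape[1] - 1)
    yi = np.clip(np.round(y).astype(int), 0, shape[0] - 1)
    iwe = np.zeros(shape)
    np.add.at(iwe, (yi, xi), 1.0)  # unordered accumulation of events
    return iwe.var()
```

Maximizing this score over the motion parameters recovers the flow; the paper replaces the planar warp with smoothly varying 3D ray locations to handle general structure.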