Daniel Gehrig

ORCID: 0000-0001-9952-3335
Research Areas
  • Advanced Memory and Neural Computing
  • Ferroelectric and Negative Capacitance Devices
  • CCD and CMOS Imaging Sensors
  • Advanced Neural Network Applications
  • Robotics and Sensor-Based Localization
  • Advanced Optical Sensing Technologies
  • EEG and Brain-Computer Interfaces
  • Atomic and Subatomic Physics Research
  • Age of Information Optimization
  • Machine Learning and ELM
  • Electronic and Structural Properties of Oxides
  • Neural Networks and Reservoir Computing
  • Neural dynamics and brain function
  • Advanced Vision and Imaging
  • Advanced MRI Techniques and Applications
  • Electrical and Bioimpedance Tomography
  • Target Tracking and Data Fusion in Sensor Networks
  • Domain Adaptation and Few-Shot Learning
  • Advanced Data Storage Technologies
  • Underwater Vehicles and Communication Systems
  • Radiation Detection and Scintillator Technologies
  • Advanced Image Processing Techniques
  • Genomics and Phylogenetic Studies
  • Microbial Natural Products and Biosynthesis
  • Microbial Community Ecology and Physiology

University of Zurich
2018-2024

ETH Zurich
1976-2022

SIB Swiss Institute of Bioinformatics
2021-2022

Event cameras are vision sensors that record asynchronous streams of per-pixel brightness changes, referred to as "events". They have appealing advantages over frame-based cameras for computer vision, including high temporal resolution, high dynamic range, and no motion blur. Due to the sparse, non-uniform spatio-temporal layout of the event signal, pattern recognition algorithms typically aggregate events into a grid-based representation and subsequently process it with a standard vision pipeline, e.g., a Convolutional Neural...

10.1109/iccv.2019.00573 article EN 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019-10-01
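
To make the grid-based aggregation concrete, here is a minimal sketch of binning an event stream into a voxel grid with B temporal bins, a common representation of the kind this abstract refers to. The function name, the bilinear-in-time weighting, and all parameters are illustrative assumptions, not the paper's learned representation.

```python
import numpy as np

def events_to_voxel_grid(x, y, t, p, H, W, B=5):
    """Aggregate events (x, y, t, polarity) into a (B, H, W) voxel grid.

    Each event's polarity is spread over the two nearest temporal bins
    (bilinear weighting in time), preserving some sub-bin timing.
    """
    grid = np.zeros((B, H, W), dtype=np.float32)
    # Normalize timestamps to the range [0, B - 1].
    t_norm = (t - t[0]) / max(t[-1] - t[0], 1e-9) * (B - 1)
    left = np.floor(t_norm).astype(int)
    w_right = t_norm - left
    pol = np.where(p > 0, 1.0, -1.0)
    for b, w in ((left, 1.0 - w_right), (np.minimum(left + 1, B - 1), w_right)):
        np.add.at(grid, (b, y, x), pol * w)  # scatter-add events into bins
    return grid

# Usage with synthetic events on a 180 x 240 sensor:
rng = np.random.default_rng(0)
n = 1000
x, y = rng.integers(0, 240, n), rng.integers(0, 180, n)
t, p = np.sort(rng.uniform(0, 0.05, n)), rng.integers(0, 2, n)
print(events_to_voxel_grid(x, y, t, p, 180, 240).shape)  # (5, 180, 240)
```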

Natural microbial communities are phylogenetically and metabolically diverse. In addition to underexplored organismal groups1, this diversity encompasses a rich discovery potential for ecologically and biotechnologically relevant enzymes and biochemical compounds2,3. However, studying this diversity to identify genomic pathways for the synthesis of such compounds4 and assigning them to their respective hosts remains challenging. The biosynthetic potential of microorganisms in the open ocean remains largely uncharted owing to limitations in the analysis...

10.1038/s41586-022-04862-3 article EN cc-by Nature 2022-06-22

Once an academic venture, autonomous driving has received unparalleled corporate funding in the last decade. Still, the operating conditions of current autonomous cars are mostly restricted to ideal scenarios. This means that driving in challenging illumination conditions such as night, sunrise, and sunset remains an open problem. In these cases, standard cameras are being pushed to their limits in terms of low light and high dynamic range performance. To address these challenges, we propose DSEC, a new dataset that contains such demanding illumination conditions and provides a rich set...

10.1109/lra.2021.3068942 article EN publisher-specific-oa IEEE Robotics and Automation Letters 2021-03-25

Event cameras are powerful new sensors able to capture high dynamic range with microsecond temporal resolution and no motion blur. Their strength is detecting brightness changes (called events) rather than capturing direct images; however, algorithms can be used to convert events into usable image representations for applications such as classification. Previous works rely on hand-crafted spatial and temporal smoothing techniques to reconstruct images from events. State-of-the-art video reconstruction has...

10.1109/wacv45572.2020.9093366 article EN 2020-03-01
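
A point of reference for "converting events into usable image representations": the simplest possible reconstruction is direct per-pixel integration of event polarity, which the hand-crafted smoothing techniques mentioned above build on. This sketch is illustrative (the threshold value and names are assumptions), not the paper's method.

```python
import numpy as np

def integrate_events(x, y, p, H, W, threshold=0.2):
    """Naive reconstruction: accumulate +/- contrast steps per pixel.

    Each event shifts its pixel by one contrast threshold in
    log-intensity; real methods add smoothing or a neural network on top.
    """
    log_img = np.zeros((H, W), dtype=np.float32)
    steps = np.where(p > 0, threshold, -threshold).astype(np.float32)
    np.add.at(log_img, (y, x), steps)
    img = np.exp(log_img)  # back from log-intensity to intensity
    return (img - img.min()) / max(img.max() - img.min(), 1e-9)
```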

Event cameras are novel sensors that output brightness changes in the form of a stream of asynchronous "events" instead of intensity frames. They offer significant advantages with respect to conventional cameras: high dynamic range (HDR), high temporal resolution, and no motion blur. Recently, learning-based approaches operating on event data have achieved impressive results. Yet, these methods require a large amount of event data for training, which is hardly available due to the novelty of event sensors in computer vision research. In this paper,...

10.1109/cvpr42600.2020.00364 article EN 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020-06-01
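
The core idea of manufacturing training events from conventional video can be sketched as below. This is a bare-bones simplification under the standard event-generation model (one event per contrast-threshold crossing in log-intensity); real simulators also upsample frames in time and model per-pixel threshold noise.

```python
import numpy as np

def frames_to_events(frame0, frame1, t0, t1, C=0.2):
    """Emit synthetic events between two grayscale frames in [0, 1].

    An event fires for every multiple of the contrast threshold C that
    the log-intensity changes by; timestamps are linearly interpolated.
    Returns arrays (x, y, t, polarity) sorted by time.
    """
    eps = 1e-3
    dlog = np.log(frame1 + eps) - np.log(frame0 + eps)
    n_cross = np.floor(np.abs(dlog) / C).astype(int)  # crossings per pixel
    xs, ys, ts, ps = [], [], [], []
    for yy, xx in zip(*np.nonzero(n_cross)):
        k, pol = n_cross[yy, xx], 1 if dlog[yy, xx] > 0 else -1
        for i in range(1, k + 1):
            xs.append(xx); ys.append(yy)
            ts.append(t0 + (t1 - t0) * i / (k + 1)); ps.append(pol)
    order = np.argsort(ts)
    return (np.array(xs)[order], np.array(ys)[order],
            np.array(ts)[order], np.array(ps)[order])
```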

State-of-the-art frame interpolation methods generate intermediate frames by inferring object motions in the image from consecutive key-frames. In the absence of additional information, first-order approximations, i.e. optical flow, must be used, but this choice restricts the types of motions that can be modeled, leading to errors in highly dynamic scenarios. Event cameras are novel sensors that address this limitation by providing auxiliary visual information in the blind-time between frames. They asynchronously measure per-pixel...

10.1109/cvpr46437.2021.01589 article EN 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021-06-01
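
To see why a first-order approximation restricts the motions that can be modeled, consider the warping step it implies: every pixel is assumed to travel in a straight line at constant velocity between the key-frames. The sketch below (the names and nearest-neighbor forward warp are mine, for illustration) makes that assumption explicit; events in the blind-time are what allow a method to recover the nonlinear trajectories this model misses.

```python
import numpy as np

def warp_linear(frame, flow, tau):
    """Predict the frame at normalized time tau in (0, 1).

    The key-frame-to-key-frame flow is simply scaled by tau: a
    constant-velocity (first-order) motion model that breaks down for
    curved or accelerating motion between the key-frames.
    """
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    x_new = np.clip(np.round(xs + tau * flow[..., 0]), 0, W - 1).astype(int)
    y_new = np.clip(np.round(ys + tau * flow[..., 1]), 0, H - 1).astype(int)
    out = np.zeros_like(frame)
    out[y_new, x_new] = frame[ys, xs]  # forward splat; holes left empty
    return out
```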

The best performing learning algorithms devised for event cameras work by first converting events into dense representations that are then processed using standard CNNs. However, these steps discard both the sparsity and high temporal resolution of events, leading to high computational burden and latency. For this reason, recent works have adopted Graph Neural Networks (GNNs), which process events as "static" spatio-temporal graphs, which are inherently "sparse". We take this trend one step further by introducing...

10.1109/cvpr52688.2022.01205 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01
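
A minimal sketch of the "events as a spatio-temporal graph" construction: each event becomes a node at (x, y, scaled t), and edges connect events within a space-time radius. The time-scaling constant and radius below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_event_graph(x, y, t, radius=5.0, beta=1e4):
    """Nodes: events at (x, y, beta * t); edges: pairs within `radius`.

    beta rescales time (seconds) so it is commensurate with pixel
    coordinates, making the radius search genuinely spatio-temporal.
    """
    pos = np.stack([x, y, beta * (t - t[0])], axis=1).astype(np.float64)
    tree = cKDTree(pos)
    edges = np.array(sorted(tree.query_pairs(radius)))  # (E, 2) index pairs
    return pos, edges
```

A GNN then attaches event polarity as node features and propagates messages only along these edges, which is what keeps the processing sparse.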

Recently, video frame interpolation using a combination of frame- and event-based cameras has surpassed traditional image-based methods both in terms of performance and memory efficiency. However, current methods still suffer from (i) brittle image-level fusion of complementary interpolation results, which fails in the presence of artifacts in the fused image, (ii) potentially temporally inconsistent and inefficient motion estimation procedures that run for every inserted frame, and (iii) low contrast regions that do not trigger events, and thus cause events-only...

10.1109/cvpr52688.2022.01723 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01

The computer vision algorithms used currently in advanced driver assistance systems rely on image-based RGB cameras, leading to a critical bandwidth–latency trade-off for delivering safe driving experiences. To address this, event cameras have emerged as alternative vision sensors. Event cameras measure the changes in intensity asynchronously, offering high temporal resolution and sparsity, markedly reducing bandwidth and latency requirements1. Despite these advantages, event-camera-based algorithms are either...

10.1038/s41586-024-07409-w article EN cc-by Nature 2024-05-29

Event cameras are novel vision sensors that report per-pixel brightness changes as a stream of asynchronous "events". They offer significant advantages compared to standard cameras due to their high temporal resolution, high dynamic range and lack of motion blur. However, events only measure the varying component of the visual signal, which limits their ability to encode scene context. By contrast, absolute intensity frames capture a much richer representation of the scene. Both sensors are thus complementary. Due to the asynchronous nature of events, combining them with...

10.1109/lra.2021.3060707 article EN publisher-specific-oa IEEE Robotics and Automation Letters 2021-02-20
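
One classic way to see this complementarity in code is a complementary-filter-style update (a hand-crafted stand-in, not this paper's learned network): a running log-intensity state is refreshed by each absolute frame and updated in between by integrating events. The gain and threshold values are illustrative.

```python
import numpy as np

def update_with_frame(state, frame, gain=0.8):
    """Pull the running log-intensity state toward an absolute frame."""
    return (1.0 - gain) * state + gain * np.log(frame + 1e-3)

def update_with_events(state, x, y, p, C=0.2):
    """Between frames, integrate brightness changes: each event shifts
    its pixel by +/- one contrast threshold C in log-intensity."""
    np.add.at(state, (y, x), np.where(p > 0, C, -C))
    return state
```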

We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras. Modern frame-based methods heavily rely on matching costs computed from feature correlation. In contrast, there exists no method for event cameras that explicitly computes matching costs. Instead, learning-based approaches using events usually resort to the U-Net architecture to estimate flow sparsely. Our key finding is that the introduction of correlation features significantly improves results compared to previous methods that solely...

10.1109/3dv53792.2021.00030 article EN 2021 International Conference on 3D Vision (3DV) 2021-12-01
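
The "matching costs computed from feature correlation" that frame-based methods rely on are all-pairs inner products between two feature maps. A minimal sketch (shapes and the normalization are illustrative):

```python
import numpy as np

def correlation_volume(f1, f2):
    """All-pairs matching costs between two (H, W, D) feature maps,
    e.g. extracted from event representations at consecutive times.

    Entry [i, j, k, l] scores how well pixel (i, j) in the first map
    matches pixel (k, l) in the second.
    """
    H, W, D = f1.shape
    return np.einsum('ijd,kld->ijkl', f1, f2) / np.sqrt(D)

f1 = np.random.rand(32, 32, 64).astype(np.float32)
f2 = np.random.rand(32, 32, 64).astype(np.float32)
print(correlation_volume(f1, f2).shape)  # (32, 32, 32, 32)
```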

Event cameras are novel sensors that output brightness changes in the form of a stream of asynchronous "events" instead of intensity frames. Compared to conventional image sensors, they offer significant advantages: high temporal resolution, high dynamic range, no motion blur, and much lower bandwidth. Recently, learning-based approaches have been applied to event-based data, thus unlocking their potential and making progress in a variety of tasks, such as monocular depth prediction. Most existing approaches use standard...

10.1109/3dv50981.2020.00063 article EN 2020 International Conference on 3D Vision (3DV) 2020-11-01

Reliable perception during fast motion maneuvers or in high dynamic range environments is crucial for robotic systems. Since event cameras are robust to these challenging conditions, they have great potential to increase the reliability of robot vision. However, event-based vision has been held back by the shortage of labeled datasets due to the novelty of event cameras. To overcome this drawback, we propose a task transfer method that allows training models directly with labeled images and unlabeled event data. Compared to previous approaches, (i)...

10.1109/lra.2022.3145053 article EN IEEE Robotics and Automation Letters 2022-01-25

Due to their resilience to motion blur and high robustness in low-light and high dynamic range conditions, event cameras are poised to become enabling sensors for vision-based exploration on future Mars helicopter missions. However, existing event-based visual-inertial odometry (VIO) algorithms either suffer from high tracking errors or are brittle, since they cannot cope with significant depth uncertainties caused by an unforeseen loss of tracking or other effects. In this work, we introduce EKLT-VIO, which addresses both...

10.1109/lra.2022.3187826 article EN IEEE Robotics and Automation Letters 2022-07-01

Modern high dynamic range (HDR) imaging pipelines align and fuse multiple low dynamic range (LDR) images captured at different exposure times. While these methods work well in static scenes, dynamic scenes remain a challenge since the LDR images still suffer from saturation and noise. In such scenarios, event cameras would be a valid complement, thanks to their higher temporal resolution and dynamic range. In this paper, we propose the first multi-bracket HDR pipeline combining a standard camera with an event camera. Our results show better overall...

10.1109/cvprw56347.2022.00070 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2022-06-01
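
For context on the "align and fuse" step, here is a minimal sketch of classic weighted exposure fusion (a textbook hat-shaped weighting, not the proposed event-guided pipeline). The low weight given to pixels near 0 or 1 marks exactly the noisy or saturated regions where, in dynamic scenes, an event camera can step in.

```python
import numpy as np

def fuse_brackets(ldr_stack, exposure_times):
    """Fuse aligned LDR brackets into an HDR radiance estimate.

    ldr_stack: (N, H, W) grayscale images in [0, 1], already aligned.
    Pixels near 0 (noise) or 1 (saturation) receive low weight.
    """
    w = 1.0 - np.abs(2.0 * ldr_stack - 1.0)  # hat weight, peak at 0.5
    radiance = ldr_stack / np.asarray(exposure_times)[:, None, None]
    return (w * radiance).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-6)
```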

10.1109/cvprw63382.2024.00579 article EN 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2024-06-17

We propose a generic event camera calibration framework using image reconstruction. Instead of relying on blinking LED patterns or external screens, we show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras. The advantage of our proposed approach is that we can use standard calibration patterns that do not rely on active illumination. Furthermore, our approach enables the possibility to perform extrinsic calibration between frame-based and event-based sensors without additional complexity. Both simulation and real-world...

10.1109/cvprw53098.2021.00155 article EN 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2021-06-01
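
The practical upshot of calibrating on reconstructed images is that standard checkerboard tooling applies unchanged. A minimal sketch using OpenCV (the event-to-image reconstruction network is assumed to have produced `recon_images`; the board geometry is an illustrative assumption):

```python
import numpy as np
import cv2

def calibrate_from_reconstructions(recon_images, board=(9, 6), square=0.03):
    """Standard checkerboard calibration on images reconstructed from
    events, instead of blinking-LED patterns or external screens.

    recon_images: 8-bit grayscale reconstructions of a static board;
    board: inner-corner grid size; square: square edge length in meters.
    """
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in recon_images:
        found, corners = cv2.findChessboardCorners(img, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, recon_images[0].shape[::-1], None, None)
    return rms, K, dist
```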

Today, state-of-the-art deep neural networks that process events first convert them into dense, grid-like input representations before using an off-the-shelf network. However, selecting the appropriate representation for the task traditionally requires training a network for each representation and selecting the best one based on the validation score, which is very time-consuming. This work eliminates this bottleneck by measuring the Gromov-Wasserstein Discrepancy (GWD) between raw events and their representation. It is about 200 times faster to compute than...

10.1109/iccv51070.2023.01180 article EN 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 2023-10-01
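
A minimal sketch of scoring a representation with a Gromov-Wasserstein discrepancy, using the POT library; the coordinate normalization and the choice of Euclidean intra-set distances are illustrative assumptions, not the paper's exact recipe. GW compares the two internal distance structures, so no correspondence between events and grid cells is needed.

```python
import numpy as np
import ot  # POT: Python Optimal Transport
from scipy.spatial.distance import cdist

def representation_gwd(events_xyt, repr_cells):
    """GW discrepancy between raw events and their representation.

    events_xyt: (N, 3) raw events (x, y, t); repr_cells: (M, 3)
    coordinates (x, y, bin) of the representation's nonzero cells.
    """
    def norm(a):  # rescale each coordinate to [0, 1]
        return (a - a.min(0)) / np.maximum(np.ptp(a, axis=0), 1e-9)
    C1 = cdist(norm(events_xyt), norm(events_xyt))
    C2 = cdist(norm(repr_cells), norm(repr_cells))
    p, q = ot.unif(len(events_xyt)), ot.unif(len(repr_cells))
    return ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun='square_loss')
```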

State-of-the-art machine-learning methods for event cameras treat events as dense representations and process them with conventional deep neural networks. Thus, they fail to maintain the sparsity and asynchronous nature of event data, thereby imposing significant computation and latency constraints on downstream systems. A recent line of work tackles this issue by modeling events as spatiotemporally evolving graphs that can be efficiently and asynchronously processed using graph neural networks. These works showed impressive computation reductions,...

10.48550/arxiv.2211.12324 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Quadrupedal robots are conquering various applications in indoor and outdoor environments due to their capability to navigate challenging uneven terrains. Exteroceptive information greatly enhances this capability since perceiving their surroundings allows them to adapt their controller and thus achieve higher levels of robustness. However, sensors such as LiDARs and RGB cameras do not provide sufficient information to quickly and precisely react in a highly dynamic environment since they suffer from a bandwidth-latency trade-off. They require significant...

10.1109/icra48891.2023.10161392 article EN 2023-05-29