Viktor Rudnev

ORCID: 0000-0002-8608-8394
Research Areas
  • Advanced Memory and Neural Computing
  • Computer Graphics and Visualization Techniques
  • Advanced Neural Network Applications
  • Advanced Vision and Imaging
  • Advanced Data Storage Technologies
  • Ferroelectric and Negative Capacitance Devices
  • 3D Shape Modeling and Analysis
  • Advanced MRI Techniques and Applications
  • Video Surveillance and Tracking Methods
  • User Authentication and Security Systems
  • Functional Brain Connectivity Studies
  • Atomic and Subatomic Physics Research
  • Remote Sensing and LiDAR Applications
  • Electrical and Bioimpedance Tomography
  • Digital and Cyber Forensics
  • Generative Adversarial Networks and Image Synthesis
  • Robot Manipulation and Learning
  • Scientific Computing and Data Management
  • Image Enhancement Techniques
  • Radiation Detection and Scintillator Technologies

Saarland University
2023-2024

Max Planck Institute for Informatics
2021-2023

Max Planck Society
2020-2021

We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under controllable poses. Our method is developed upon recent neural scene representation and rendering works which learn representations of geometry and appearance from only 2D images. While existing works demonstrated compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with implicit methods, in particular under user-controlled novel poses, is still difficult. To address this problem, we...

10.1145/3478513.3480528 article EN ACM Transactions on Graphics 2021-12-01
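
Neural Actor builds on neural radiance field-style scene representations that are learned from 2D images and rendered by compositing colour and density along camera rays. The sketch below illustrates only that generic volume rendering step; it is not the paper's pose-conditioned architecture, and the `radiance_field` callable, sampling bounds, and toy scene are assumptions made for the example.

```python
import numpy as np

def volume_render(radiance_field, ray_origin, ray_dir, near=0.5, far=4.0, n_samples=64):
    """Composite colour along one ray, NeRF-style.

    radiance_field(points, view_dir) -> (rgb [n, 3], sigma [n])
    is a stand-in for a learned scene representation.
    """
    # Sample depths uniformly between the near and far planes.
    t = np.linspace(near, far, n_samples)
    points = ray_origin + t[:, None] * ray_dir           # [n_samples, 3]

    rgb, sigma = radiance_field(points, ray_dir)          # colours and densities

    # Convert densities to per-sample opacities (alpha compositing).
    deltas = np.diff(t, append=far)                       # spacing between samples
    alpha = 1.0 - np.exp(-sigma * deltas)

    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans

    return (weights[:, None] * rgb).sum(axis=0)           # final pixel colour


# Toy usage with a dummy field: a soft sphere of radius 1 at the origin.
def dummy_field(points, view_dir):
    dist = np.linalg.norm(points, axis=-1)
    sigma = np.where(dist < 1.0, 5.0, 0.0)                # dense inside the sphere
    rgb = np.tile([1.0, 0.5, 0.2], (len(points), 1))      # constant orange colour
    return rgb, sigma

pixel = volume_render(dummy_field, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)
```

Methods in this family replace `dummy_field` with a learned network and optimise it so that rendered pixels match the training photographs.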

Asynchronously operating event cameras find many applications due to their high dynamic range, vanishingly low motion blur, low latency and low data bandwidth. The field saw remarkable progress during the last few years, and existing event-based 3D reconstruction approaches recover sparse point clouds of the scene. However, such sparsity is a limiting factor in many cases, especially in computer vision and graphics, that has not been addressed satisfactorily so far. Accordingly, this paper proposes the first approach for...

10.1109/cvpr52729.2023.00483 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01
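
The event cameras used in this line of work do not record frames: each pixel independently emits an event whenever its log-brightness changes by more than a contrast threshold, which is what gives these sensors their high dynamic range and low latency. A minimal, hedged simulation of that generation model is sketched below; the threshold value is arbitrary, and real sensors emit several events (with interpolated timestamps) when a change spans multiple thresholds.

```python
import numpy as np

def simulate_events(frames, timestamps, contrast_threshold=0.2):
    """Convert a sequence of intensity frames into an asynchronous event stream.

    Each event is (x, y, t, polarity): polarity +1 when log-brightness rose by
    more than the threshold since the last event at that pixel, -1 when it fell.
    """
    log_ref = np.log(frames[0].astype(np.float64) + 1e-6)   # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_img = np.log(frame.astype(np.float64) + 1e-6)
        diff = log_img - log_ref
        # Pixels whose brightness changed by at least one threshold step.
        ys, xs = np.nonzero(np.abs(diff) >= contrast_threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((x, y, t, polarity))
            log_ref[y, x] = log_img[y, x]                    # reset reference at that pixel
    return events
```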

We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under controllable poses. Our method is built upon recent neural scene representation and rendering works which learn representations of geometry and appearance from only 2D images. While existing works demonstrated compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with implicit methods, in particular under user-controlled novel poses, is still difficult. To address this problem, we utilize...

10.48550/arxiv.2106.02019 preprint EN other-oa arXiv (Cornell University) 2021-01-01

3D hand pose estimation from monocular videos is a long-standing and challenging problem, which is now seeing a strong upturn. In this work, we address it for the first time using a single event camera, i.e., an asynchronous vision sensor reacting on brightness changes. Our EventHands approach has characteristics previously not demonstrated with a single RGB or depth camera, such as high temporal resolution at low data throughputs and real-time performance at 1000 Hz. Due to the different data modality of event cameras compared...

10.1109/iccv48922.2021.01216 article EN 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021-10-01
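
Because events arrive as a sparse asynchronous stream, learning-based estimators such as the hand trackers listed here first convert a short time window of events into a dense tensor that a neural network can consume. The binning below is a generic illustration of that step, not the specific event representation proposed in EventHands; the resolution and bin count are made-up values.

```python
import numpy as np

def events_to_voxel_grid(events, height=180, width=240, n_bins=5):
    """Accumulate polarities of an event window into a [n_bins, H, W] tensor.

    events: sequence of rows (x, y, t, polarity) within one time window.
    """
    grid = np.zeros((n_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    x, y, t, p = np.asarray(events, dtype=np.float64).T

    # Map each timestamp to a temporal bin index within the window.
    t0, t1 = t.min(), t.max()
    if t1 > t0:
        bins = np.minimum(((t - t0) / (t1 - t0) * n_bins).astype(int), n_bins - 1)
    else:
        bins = np.zeros(len(t), dtype=int)

    # Sum signed polarities per (bin, pixel) cell.
    np.add.at(grid, (bins, y.astype(int), x.astype(int)), p)
    return grid
```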

Novel view synthesis techniques predominantly utilize RGB cameras, inheriting their limitations such as the need for sufficient lighting, susceptibility to motion blur, and restricted dynamic range. In contrast, event cameras are significantly more resilient to these limitations but have been less explored in this domain, particularly in large-scale settings. Current methodologies primarily focus on front-facing or object-oriented (360-degree view) scenarios. For the first time, we introduce 3D Gaussians...

10.48550/arxiv.2502.10827 preprint EN arXiv (Cornell University) 2025-02-15
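
3D Gaussian splatting, which this entry extends to event cameras, renders an image by projecting anisotropic 3D Gaussians onto the image plane and alpha-compositing them front to back. The sketch below shows only that compositing step for a single pixel, assuming the projection to 2D means and covariances has already been performed; the projection, optimisation, and event-based supervision of the paper are omitted.

```python
import numpy as np

def composite_pixel(pixel_xy, splats):
    """Front-to-back alpha compositing of depth-sorted 2D Gaussian splats.

    splats: list of (mean2d [2], cov2d [2, 2], opacity, rgb [3]), nearest first.
    """
    color = np.zeros(3)
    transmittance = 1.0
    for mean, cov, opacity, rgb in splats:
        d = np.asarray(pixel_xy, dtype=np.float64) - mean
        # Gaussian falloff of the splat at this pixel.
        g = np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
        alpha = np.clip(opacity * g, 0.0, 0.999)
        color += transmittance * alpha * np.asarray(rgb)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:       # early termination once the pixel is opaque
            break
    return color
```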

3D hand tracking from a monocular video is a very challenging problem due to hand interactions, occlusions, left-right ambiguity, and fast motion. Most existing methods rely on RGB inputs, which have severe limitations under low-light conditions and suffer from motion blur. In contrast, event cameras capture local brightness changes instead of full image frames and do not suffer from the described effects. Unfortunately, existing image-based techniques cannot be directly applied to events due to significant differences in the data modalities....

10.1109/3dv62453.2024.00008 article EN 2024 International Conference on 3D Vision (3DV) 2024-03-18

Volumetric reconstruction of dynamic scenes is an important problem in computer vision. It is especially challenging in poor lighting and with fast motion, partly due to the limitations of RGB cameras: to capture fast motion without much blur, the framerate must be increased, which in turn requires more lighting. In contrast, event cameras, which record changes in pixel brightness asynchronously, are much less dependent on lighting, making them more suitable for recording fast motion. We hence propose the first method to spatiotemporally reconstruct...

10.48550/arxiv.2412.06770 preprint EN arXiv (Cornell University) 2024-12-09

3D hand pose estimation from monocular videos is a long-standing and challenging problem, which is now seeing a strong upturn. In this work, we address it for the first time using a single event camera, i.e., an asynchronous vision sensor reacting on brightness changes. Our EventHands approach has characteristics previously not demonstrated with a single RGB or depth camera, such as high temporal resolution at low data throughputs and real-time performance at 1000 Hz. Due to the different data modality of event cameras compared...

10.48550/arxiv.2012.06475 preprint EN other-oa arXiv (Cornell University) 2020-01-01

3D hand tracking from a monocular video is a very challenging problem due to hand interactions, occlusions, left-right ambiguity, and fast motion. Most existing methods rely on RGB inputs, which have severe limitations under low-light conditions and suffer from motion blur. In contrast, event cameras capture local brightness changes instead of full image frames and do not suffer from the described effects. Unfortunately, existing image-based techniques cannot be directly applied to events due to significant differences in the data modalities....

10.48550/arxiv.2312.14157 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Asynchronously operating event cameras find many applications due to their high dynamic range, vanishingly low motion blur, low latency and low data bandwidth. The field saw remarkable progress during the last few years, and existing event-based 3D reconstruction approaches recover sparse point clouds of the scene. However, such sparsity is a limiting factor in many cases, especially in computer vision and graphics, that has not been addressed satisfactorily so far. Accordingly, this paper proposes the first approach for...

10.48550/arxiv.2206.11896 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Photorealistic editing of outdoor scenes from photographs requires a profound understanding of the image formation process and an accurate estimation of the scene geometry, reflectance and illumination. A delicate manipulation of the lighting can then be performed while keeping the albedo and geometry unaltered. We present NeRF-OSR, i.e., the first approach for outdoor scene relighting based on neural radiance fields. In contrast to prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint using only a collection...

10.48550/arxiv.2112.05140 preprint EN other-oa arXiv (Cornell University) 2021-01-01
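
Relighting approaches of this kind factor the rendered colour into an albedo term and a shading term driven by an explicit lighting model, so that illumination can be swapped while geometry and albedo stay fixed. The sketch below evaluates Lambertian shading under second-order spherical harmonics lighting as an illustration of that decomposition; it is not NeRF-OSR's exact formulation, and the coefficients and shadow term are made-up placeholders.

```python
import numpy as np

def sh_shading(normal, sh_coeffs):
    """Lambertian shading of a unit normal under 2nd-order (9-coefficient) SH lighting."""
    x, y, z = normal
    # Real spherical harmonics basis evaluated at the normal direction.
    basis = np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])
    return max(basis @ sh_coeffs, 0.0)

def relit_pixel(albedo, normal, sh_coeffs, shadow=1.0):
    """Rendered colour = albedo * shading * shadow; swapping sh_coeffs relights the pixel."""
    return np.asarray(albedo) * sh_shading(normal, sh_coeffs) * shadow

# Example: white albedo lit by mostly ambient lighting with a directional component.
sh = np.array([0.8, 0.0, 0.4, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
print(relit_pixel([1.0, 1.0, 1.0], np.array([0.0, 0.0, 1.0]), sh))
```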