- Advanced Vision and Imaging
- Computer Graphics and Visualization Techniques
- Generative Adversarial Networks and Image Synthesis
- Robotics and Sensor-Based Localization
- Industrial Vision Systems and Defect Detection
- Image Processing and 3D Reconstruction
- Advanced Image Processing Techniques
- 3D Surveying and Cultural Heritage
- Advanced Image and Video Retrieval Techniques
- Advanced Technology in Applications
- Digital Media Forensic Detection
- 3D Shape Modeling and Analysis
- Educational Technology and Assessment
- Remote Sensing and LiDAR Applications
- Advanced Clustering Algorithms Research
- EFL/ESL Teaching and Learning
- Data Analysis with R
- Microbial bioremediation and biosurfactants
- Textile materials and evaluations
Technische Universität Berlin
2024
Institut national de recherche en informatique et en automatique
2022-2023
Nanjing Tech University
2023
Inria Rennes - Bretagne Atlantique Research Centre
2022
This paper tackles the problem of novel view synthesis (NVS) from 360° images with imperfect camera poses or intrinsic parameters. We propose an end-to-end framework, which we refer to as Omni-NeRF, for training Neural Radiance Field (NeRF) models given only RGB images and their rough poses. We extend the pinhole camera model of NeRF to a more general model that better fits omni-directional fisheye lenses. The approach jointly learns the scene geometry and optimizes the camera parameters without knowing the fisheye projection.
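As an illustration of the kind of camera model involved, the sketch below maps a fisheye pixel to a viewing ray under the classic equidistant projection (r = f·θ). This is a minimal, assumed example: Omni-NeRF learns its projection parameters rather than fixing this model, and the function name and parameters here are hypothetical.

```python
import numpy as np

def fisheye_pixel_to_ray(u, v, cx, cy, f):
    """Map a fisheye pixel (u, v) to a unit ray direction under the
    equidistant projection model r = f * theta. (cx, cy) is the
    principal point and f the focal length in pixels.
    Illustrative only; a learned model may differ."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)          # radial distance from the principal point
    theta = r / f                 # equidistant model: angle from optical axis
    phi = np.arctan2(dy, dx)      # azimuth in the image plane
    # Spherical-to-Cartesian, with z along the optical axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```

A pixel at the principal point maps to the optical axis, and rays bend away from it linearly with radial distance, which is what lets this model cover fields of view beyond what a pinhole projection can represent.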
Neural Radiance Fields (NeRF) enable novel view synthesis of 3D scenes when trained with a set of 2D images. One of the key components of NeRF is the input encoding, i.e., the mapping of coordinates to higher dimensions to learn high-frequency details, which has been proven to increase quality. Among various mappings, hash encoding is gaining increasing attention for its efficiency. However, its performance on sparse inputs is limited. To address this limitation, we propose a new scheme that improves hash-based inputs, few and...
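For readers unfamiliar with input encodings, the sketch below shows the standard NeRF frequency (positional) encoding, the simplest instance of mapping coordinates to higher dimensions; hash encoding replaces these fixed sinusoids with learned multi-resolution hash tables. The function is an assumed illustration, not the paper's proposed scheme.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Frequency encoding as in the original NeRF: each coordinate is
    mapped to [sin(2^k * pi * x), cos(2^k * pi * x)] for
    k = 0 .. num_freqs-1, lifting it to 2 * num_freqs dimensions so the
    MLP can fit high-frequency detail."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    features = []
    for k in range(num_freqs):
        features.append(np.sin(2.0**k * np.pi * x))
        features.append(np.cos(2.0**k * np.pi * x))
    return np.concatenate(features)
```

Hash encoding pursues the same goal (making high-frequency content learnable) but stores trainable features in hash tables indexed by spatial position, which is why it is faster yet, as noted above, more sensitive to sparse supervision.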
Synthetic data holds significant potential for improving the efficiency of perception tasks in autonomous driving. This paper proposes a practical synthesis pipeline that employs multi-agent reinforcement learning (MARL) to automatically generate dynamic traffic participant trajectories and leverages an augmented reality (AR) process to produce photo-realistic images. The AR process blends clean static background images...
In the field of autonomous driving, in order to guarantee robust perception performance at night and reduce the cost of data collection and annotation, many day-to-night image translation methods based on Generative Adversarial Networks (GANs) have been proposed to generate realistic synthetic data. The vehicle light effect is of great significance to tasks (such as detection) in night scenes. However, no research has ever focused on this problem in translation...
In recent years, with the popularity of LiDAR, depth cameras, and other devices, and the development of intelligent robots, high-precision maps, smart cities, and related fields, the demand for outdoor scene understanding and environment perception at large scales keeps growing, and 3D point cloud semantic segmentation is precisely one of the research focuses. However, current systems mainly rely on fully labelled scenes for training. It is time-consuming and costly to annotate 10 million or even hundreds of millions...