- Advanced Vision and Imaging
- Computer Graphics and Visualization Techniques
- Advanced Optical Imaging Technologies
- Video Coding and Compression Technologies
- Optical Measurement and Interference Techniques
- Engineering Applied Research
- Image Processing Techniques and Applications
- Image and Video Quality Assessment
- Robotics and Sensor-Based Localization
- Advanced Data Compression Techniques
- Innovations in Concrete and Construction Materials
- Satellite Image Processing and Photogrammetry
- Smart Materials for Construction
- Fluid Dynamics Simulations and Interactions
- Bluetooth and Wireless Communication Technologies
- GABA and Rice Research
- Metal Forming Simulation Techniques
- Advanced Image and Video Retrieval Techniques
- Structural Analysis and Optimization
- Advanced Image Processing Techniques
- 3D Shape Modeling and Analysis
- Infrared Target Detection Methodologies
- Remote Sensing and LiDAR Applications
- Advanced Optical System Design
- Parallel Computing and Optimization Techniques
Université Libre de Bruxelles
2021-2024
University Library in Bratislava
2021
The University of Tokyo
2002
In this paper we propose a solution for view synthesis of scenes presenting highly non-Lambertian objects. While Image-Based Rendering methods can easily render diffuse materials given only their depth, non-Lambertian objects present non-linear displacement features, characterized by curved lines in epipolar plane images. Hence, we propose to replace the depth maps used for rendering new viewpoints with more complex "non-Lambertian maps" describing the light field's behavior. In the 4D field, features are linearly displaced following...
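The linear displacement the abstract refers to is the classic disparity-depth relation: a Lambertian point traces a straight line in the epipolar plane image whose slope is proportional to inverse depth. A minimal sketch of that relation, with illustrative names and values not taken from the paper:

```python
# Minimal depth-based pixel warp for a horizontal camera shift (Lambertian case).
# A Lambertian point's image position shifts linearly with the camera position:
# u' = u - baseline * focal / depth, i.e. the straight line seen in an epipolar
# plane image. Non-Lambertian features deviate from this line, which is why a
# single depth value per pixel is no longer sufficient.

def warp_u(u, depth, baseline, focal):
    """Horizontal coordinate of a pixel after moving the camera by `baseline`."""
    disparity = baseline * focal / depth
    return u - disparity

# A point at depth 2.0 m, focal length 1000 px, camera moved by 0.1 m:
u_new = warp_u(500.0, 2.0, 0.1, 1000.0)  # shifts by 50 px
```

Doubling the baseline doubles the shift, which is exactly the linearity that breaks down for specular or refractive objects.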
To represent immersive media providing a six degree-of-freedom experience, the Moving Picture Experts Group (MPEG) immersive video (MIV) standard was developed to compress multiview videos. Meanwhile, the state-of-the-art versatile video coding (VVC) standard also supports multilayer (ML) functionality, enabling the coding of multiple views. In this study, we designed experimental conditions to assess the performance of these two standards in terms of objective and subjective quality. We observe that their performances are highly dependent on the input source, such...
We present a novel methodology to precisely calibrate the subaperture views of an array of plenoptic 2.0 cameras. Such cameras consist of a main lens and a micro-lens array, and the image captured through them is a lenslet image that can be converted into a dense set of pinhole views, the so-called subaperture images. This camera provides several multiview images at some sparse points in 3D space. To find the relative positions of those views, simply using structure-from-motion creates misalignments due to the small disparities within each set. Additionally, traditional...
Multi-focused plenoptic cameras (Plenoptic 2.0) allow the acquisition of the Light-Field of a scene. However, extracting novel views from the resulting Micro-Lens Array (MLA) image poses several challenges: micro-lens calibration, noise reduction, and patch size (depth) estimation to convert the micro-lens image into multi-view images. We propose a method to easily find the important parameters, avoid unreliable luminance areas, estimate the depth map, and extract sub-aperture images (multiview) for single- and multi-focused plenoptic 2.0 cameras....
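The patch-size step mentioned above drives how micro-lens images are converted to views: each micro-image contributes a small centered patch, and the patches are tiled into one sub-aperture image. A toy sketch of that tiling, with hypothetical names and a trivially small grid, not the paper's actual pipeline:

```python
# Sketch of patch-based sub-aperture extraction for a focused (2.0) plenoptic
# camera. Each micro-image of size `micro` x `micro` contributes a centered
# `patch` x `patch` block; the patch size acts as the depth parameter.

def extract_view(mla, micro, patch):
    """mla: 2D grid (list of pixel rows) tiled into micro-images of size
    `micro`; `patch` <= `micro` is the centered patch size to keep."""
    rows = len(mla) // micro
    cols = len(mla[0]) // micro
    off = (micro - patch) // 2  # centered patch offset inside each micro-image
    view = []
    for r in range(rows):
        for dy in range(patch):
            line = []
            for c in range(cols):
                for dx in range(patch):
                    line.append(mla[r * micro + off + dy][c * micro + off + dx])
            view.append(line)
    return view
```

With `patch = 1` this degenerates to picking one pixel per micro-lens; larger patches correspond to scene regions focused at a different depth, which is why the patch size must be estimated per region.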
The plenoptic 2.0 camera is a light field acquisition system consisting of a main lens and a micro-lens array (MLA) placed at a non-focal distance from the lens. While it allows retrieving the geometry of the scene, the distances between the lens, the MLA and the sensor are usually unknown. Therefore, use cases for such cameras stay limited, while they have more potential applications such as virtual reality, provided that their parameters are precisely known. In this paper, we present a pattern-free calibration method for the camera's intrinsics from...
In the context of the development of the MPEG-I standard for immersive video compression, ISO/IEC 23090-12 (MIV), the need for handling scenes with non-Lambertian materials arose. This class of material is omnipresent in natural scenes, but violates all assumptions on which depth image-based rendering (DIBR) is based. In this paper, we present a view-synthesizer designed to handle such objects with DIBR, replacing the classical depth maps by multi-coefficient maps. We report the results of exploration experiments of the Future MIV test method against...
DIBR (Depth Image Based Rendering) can synthesize Free Navigation virtual views from sparse multiview texture images and corresponding depth maps. There are two ways to obtain the depth maps, through software or sensors, which is a trade-off between precision and speed (computational cost and processing time). This article compares the performance of depth maps estimated by MPEG-I's Depth Estimation Reference Software against those acquired with a Kinect Azure. We use IV-PSNR to evaluate their depth-map-based synthesis for objective...
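The evaluation above uses IV-PSNR, which extends plain PSNR with a shift tolerance and global color-difference handling suited to synthesized views. As a point of reference only, here is the baseline PSNR computation that IV-PSNR builds on; this is not the IV-PSNR algorithm itself:

```python
import math

# Plain PSNR between two equally sized grayscale images given as lists of rows.
# IV-PSNR additionally tolerates small pixel shifts and compensates global
# color differences; this baseline is shown only for illustration.

def psnr(a, b, peak=255.0):
    n = 0
    se = 0.0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            se += (pa - pb) ** 2
            n += 1
    mse = se / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

Identical images give infinite PSNR; a full-range error on every pixel gives 0 dB.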
A tensor display is a type of 3D light field display, composed of multiple transparent screens and a backlight, that can render a scene with correct depth, allowing viewing without wearing glasses. The analysis of state-of-the-art displays assumes the content is Lambertian. In order to extend their capabilities, we analyze the limitations of displaying non-Lambertian scenes and propose a new method to factorize them using disparity analysis. Moreover, we demonstrate a prototype with three full-HD layers running at 60 fps. Compared...
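The factorization such displays rely on models each emitted ray as the product of the transmittances it crosses on each layer. A toy "flatland" version of that forward model, with illustrative names and a two-layer display rather than the paper's three-layer prototype:

```python
# Toy two-layer multiplicative display in flatland: a ray with angle index u
# hitting front pixel x crosses the back layer at pixel x + u, so the displayed
# light field is the product of the two transmittances along each ray. This is
# the Lambertian model the factorization optimizes against.

def render(front, back, angles):
    """Simulate the light field emitted by two 1D transmittance layers.
    Rows index the view angle u in [-angles, angles]; columns the pixel x."""
    field = []
    for u in range(-angles, angles + 1):
        row = []
        for x in range(len(front)):
            xb = x + u  # back-layer pixel crossed by this ray
            t = front[x] * back[xb] if 0 <= xb < len(back) else 0.0
            row.append(t)
        field.append(row)
    return field
```

Factorizing a target light field then amounts to finding layer transmittances whose products best match every ray, typically via multiplicative (NMF-style) updates; non-Lambertian content breaks the assumption that one scene point contributes the same value to all rays.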
This paper describes requirements for a user interface of a tele-machining system with operational environment transmission capability. The necessity of transmitting information concerning the operational environment requires a method for transforming multi-axis force into visual and auditory information. Methods of predictive display of geometrical information to compensate for time delays are also described. For tactile presentation, an eccentric weight was used to generate vibration, whose frequency is controlled according to an index which represents...
This paper describes a tele-machining and tele-handling system which enables multiple operators to machine and handle an object in a remote environment. Engineers will be able to work cooperatively using the system, even if they are located remotely. The system can also be used for tele-education. The key technologies are predictive display of geometrical information, real-time presentation of the machining surface, multi-axis force and auditory information presentation, as well as presentation of the tactile state. Implementation results and experiments are discussed.
3D layered displays are composed of a backlight and multiple LCD panels. To reproduce a scene with such displays, a light field is given as input to optimize the layers' images that are displayed by each panel. Current works optimize the layers using parallel rays (orthographic camera model) or do not take into account the distance of the user during optimization. In this paper, we present the use of a perspective model for the layers' optimization, and guidelines for its parameters based on the functioning of these displays. By more realistic...
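The orthographic-versus-perspective distinction comes down to how a ray crosses the stacked panels. With parallel rays, a ray hits every layer at the same lateral coordinate; with a finite viewer distance, the crossing point scales with the layer's depth. A hypothetical geometry sketch, not the paper's exact parameterization:

```python
# Ray-layer intersection for a stacked display. The viewer sits at lateral
# position viewer_x and distance d from the front panel (depth 0); a layer at
# depth z (toward the viewer, 0 <= z < d) is crossed at a point that moves
# linearly from the pixel toward the viewer as z grows.

def layer_hit(viewer_x, d, pixel_x, z):
    """x-coordinate where the ray from the viewer to front-panel pixel
    (pixel_x, 0) crosses the layer at depth z."""
    return pixel_x + (viewer_x - pixel_x) * (z / d)
```

As `d` grows the crossing point tends to `pixel_x` for every layer, recovering the orthographic (parallel-ray) model; at realistic viewing distances the per-layer offset is what the perspective optimization accounts for.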
Neural Radiance Fields (NeRF) have attracted particular attention due to their exceptional capability in virtual view generation from a sparse set of input images. However, their scope is constrained by the substantial amount of images required for training. This work introduces a data augmentation methodology to train NeRF using external depth information. The approach entails generating new views at different positions through the utilization of MPEG's reference view synthesizer (RVS) to augment the training image pool of NeRF....
Tensor displays are screens able to render a light field with correct depth perception without the viewer wearing glasses. Such devices have already been shown to accurately render a scene composed of Lambertian objects. This paper presents the model and prototyping of a tensor display with three layers, using repurposed computer monitors, and extends the factorization method to non-Lambertian scenes. Furthermore, we examine the relation between its limitations and the depth-of-field range of the scenes. Non-Lambertian scenes contain out-of-range disparities...