- Advanced Vision and Imaging
- Computer Graphics and Visualization Techniques
- Radiation Dose and Imaging
- Advanced X-ray and CT Imaging
- 3D Shape Modeling and Analysis
- Image Enhancement Techniques
- Digital Radiography and Breast Imaging
- Astronomical Observations and Instrumentation
- Advanced Optical Imaging Technologies
- Calibration and Measurement Techniques
- Advanced Radiotherapy Techniques
- Image Processing Techniques and Applications
- Graphite, nuclear technology, radiation studies
- Remote Sensing and LiDAR Applications
- Image and Signal Denoising Methods
- Advanced Image Processing Techniques
- Medical Imaging Techniques and Applications
- Advanced X-ray Imaging Techniques
- Architecture and Computational Design
- Augmented Reality Applications
- Virtual Reality Applications and Impacts
- Image and Video Stabilization
- Sensor Technology and Measurement Systems
- Multimodal Machine Learning Applications
- Computational Geometry and Mesh Generation
Google (United States)
2018-2024
Columbia University
2006-2011
TÜV Nord (Germany)
1986-1989
We present a novel approach to view synthesis using multiplane images (MPIs). Building on recent advances in learned gradient descent, our algorithm generates an MPI from a set of sparse camera viewpoints. The resulting method incorporates occlusion reasoning, improving performance on challenging scene features such as object boundaries, lighting reflections, thin structures, and scenes with high depth complexity. We show that our method achieves high-quality, state-of-the-art results on two datasets: the...
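The MPI rendering step described above — compositing fronto-parallel RGBA planes back to front — can be sketched with the standard "over" operator (a minimal illustration; `composite_mpi` and the constant-color test layers are hypothetical, not the paper's code):

```python
import numpy as np

def composite_mpi(layers):
    """Composite multiplane image (MPI) layers back-to-front with the
    'over' operator. `layers` is a list of (H, W, 4) RGBA arrays ordered
    from the farthest plane to the nearest, with straight (unpremultiplied)
    alpha. Returns an (H, W, 3) RGB image."""
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:  # far to near
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = rgb * a + out * (1.0 - a)
    return out

# Toy scene: an opaque red far plane behind a half-transparent green plane.
far = np.tile(np.array([1.0, 0.0, 0.0, 1.0]), (4, 4, 1))
near = np.tile(np.array([0.0, 1.0, 0.0, 0.5]), (4, 4, 1))
img = composite_mpi([far, near])  # every pixel blends to [0.5, 0.5, 0.0]
```

In a full MPI renderer each plane would first be reprojected (homography-warped) into the target view before this compositing pass.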
We present a system for capturing, reconstructing, compressing, and rendering high-quality immersive light field video. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells that are better suited to representing panoramic content. We further process the data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual...
We present a system for acquiring, processing, and rendering panoramic light field still photography for display in Virtual Reality (VR). We acquire spherical light field datasets with two novel camera rigs designed for portable and efficient acquisition. We introduce a real-time reconstruction algorithm that uses per-view geometry and a disk-based blending field. We also demonstrate how to use a prefiltering operation to project from a high-quality offline model into our representation while suppressing artifacts. A practical approach for compressing light fields...
Effects such as depth of field, area lighting, antialiasing and global illumination require evaluating a complex high-dimensional integral at each pixel of an image. We develop a new adaptive rendering algorithm that greatly reduces the number of samples needed for Monte Carlo integration. Our method renders directly into an image-space wavelet basis. First, we adaptively distribute samples to reduce the variance of the basis' scale coefficients, while using the wavelet coefficients to find edges. Working in wavelets, rather than...
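The idea of distributing a fixed sample budget where estimated variance is highest can be sketched with a toy allocator (a hypothetical simplification; the paper drives allocation from wavelet scale coefficients rather than raw per-region variances):

```python
import numpy as np

def allocate_samples(var_estimates, total_budget, min_samples=1):
    """Distribute `total_budget` Monte Carlo samples across regions in
    proportion to their estimated variance, guaranteeing at least
    `min_samples` per region. A toy sketch of variance-driven adaptive
    sampling, not the paper's algorithm."""
    v = np.asarray(var_estimates, dtype=float)
    if v.sum() > 0:
        weights = v / v.sum()
    else:
        weights = np.full(v.shape, 1.0 / v.size)
    counts = np.maximum(min_samples,
                        np.floor(weights * total_budget)).astype(int)
    return counts

# Four regions; the two high-variance regions receive most of the budget.
counts = allocate_samples([1, 4, 4, 1], total_budget=100)
```

In practice the estimate-then-allocate step would be iterated, refining the variance estimates as samples accumulate.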
The increasing demand for 3D content in augmented and virtual reality has motivated the development of volumetric performance capture systems such as the Light Stage. Recent advances are pushing free-viewpoint, relightable videos of dynamic human performances closer to photorealistic quality. However, despite significant efforts, these sophisticated systems are limited by reconstruction and rendering algorithms which do not fully model complex structures and higher-order light transport effects such as global...
Current systems for editing BRDFs typically allow users to adjust analytic parameters while visualizing the results in a simplified setting (e.g. unshadowed point light). This paper describes a real-time rendering system that enables interactive edits of BRDFs, as rendered in their final placement on objects in a static scene, lit by direct, complex illumination. All-frequency effects (ranging from near-mirror reflections and hard shadows to diffuse shading and soft shadows) are captured using a precomputation-based...
Efficiently calculating accurate soft shadows cast by area light sources remains a difficult problem. Ray tracing based approaches are subject to noise or banding, and most other methods either scale poorly with scene geometry or place restrictions on geometry and/or light source size and shape. Beam tracing is one solution which has historically been considered too slow and complicated for practical rendering applications. Beam tracing's performance is hindered by complex intersection tests and a lack of good acceleration structures...
In this paper, we explore large ray packet algorithms for acceleration structure traversal and frustum culling in the context of Whitted ray tracing, and examine how these methods respond to varying packet size, scene complexity, and recursion complexity. We offer a new traversal algorithm which is robust to degrading ray coherence, and a method for generating bounds around reflection and refraction packets. We compare, adjust, and finally compose the most effective methods into a real-time ray tracer. With the aid of multi-core CPU technology, our system renders complex...
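Culling a packet's frustum against an axis-aligned bounding box in the acceleration structure can be sketched with the standard "positive vertex" plane test (an illustrative sketch under assumed conventions, not the paper's implementation):

```python
import numpy as np

def frustum_culls_aabb(planes, box_min, box_max):
    """Conservative frustum culling: returns True only when the AABB is
    certainly outside the frustum. `planes` holds inward-facing plane
    equations (a, b, c, d), with a*x + b*y + c*z + d >= 0 meaning inside.
    For each plane we test only the box corner farthest along the plane
    normal (the 'positive vertex'); if even that corner is behind the
    plane, the whole box is outside."""
    bmin = np.asarray(box_min, dtype=float)
    bmax = np.asarray(box_max, dtype=float)
    for a, b, c, d in planes:
        n = np.array([a, b, c])
        p_vertex = np.where(n >= 0, bmax, bmin)  # corner most inside
        if np.dot(n, p_vertex) + d < 0:
            return True  # entire box behind this plane: cull the node
    return False  # box may intersect the frustum: descend

# Single half-space x >= 0 as a degenerate 'frustum':
planes = [(1.0, 0.0, 0.0, 0.0)]
```

During traversal, a node whose AABB is culled is skipped for the whole packet; the test is conservative, so a `False` result may still lead to per-ray misses.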
We describe a new technique for coherent out-of-core point-based global illumination and ambient occlusion. Point-based global illumination (PBGI) is used in production to render tremendously complex scenes, so in-core storage of the point and octree data structures quickly becomes a problem. However, a simple extension of the classical top-down octree building algorithm would be extremely inefficient due to the large amount of I/O required. Our method extends previous PBGI algorithms with an approach that uses minimal in-core memory and stores data on disk compactly...
Depth-of-field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, this effect is also extremely costly. It can take hundreds to thousands of samples to achieve noise-free results using Monte Carlo integration. This paper introduces an efficient adaptive depth-of-field rendering algorithm that achieves noise-free results with significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the sampling phase, the sample density is determined by a 'blur-size' map...
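A common way to derive such a per-pixel blur size is the thin-lens circle-of-confusion formula, shown here as an illustrative assumption (the paper's exact definition of the blur-size map may differ):

```python
def circle_of_confusion(depth, focus_depth, focal_length, aperture):
    """Thin-lens circle-of-confusion diameter (in the same units as the
    inputs) for a point at `depth` when the lens of the given focal
    length and aperture diameter is focused at `focus_depth`. Points on
    the focal plane map to zero blur; blur grows with defocus."""
    return abs(aperture * focal_length * (depth - focus_depth) /
               (depth * (focus_depth - focal_length)))

# A point on the focal plane produces no blur:
circle_of_confusion(2.0, 2.0, 0.05, 0.025)  # -> 0.0
```

Dividing the diameter by the pixel footprint would turn this into the pixel-space blur-size value that drives sample density.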
Light Fields let us experience freedom of motion and realistic reflections and translucence like never before in VR. Explore the Gamble House, the Mosaic Tile House, and the Space Shuttle Discovery. These navigable light field stills showcase emerging technology Google is using to power its next generation of VR content.
Light fields can provide transportive immersive experiences with a level of realism unsurpassed by any other imaging technology. Within a limited viewing volume, light fields accurately reproduce stereo parallax, motion parallax, reflections, refractions, and volumetric effects for real-world scenes. While light fields have been explored in computer graphics since the mid-90's [Gortler et al. 1996; Levoy and Hanrahan 1996], practical systems for recording, processing, and delivering high quality light fields have remained out of reach.
This Immersive Pavilion installation introduces our new system for capturing, reconstructing, compressing, and rendering light field video content. By leveraging DeepView, a recently introduced view synthesis algorithm, we can reconstruct challenging scenes with view-dependent reflections, semi-transparent surfaces, and near-field objects as close as 34 cm to the surface of our 46-camera capture rig. Improving upon past systems that required specialized storage and graphics hardware for playback, our compressed...
Capture4VR: From VR Photography to VR Video (course). Christian Richardt (University of Bath), Peter Hedman (UCL), Ryan S. Overbeck (Google), Brian Cabral (Facebook), Robert Konrad (Stanford University), Steve Sullivan (Microsoft). SIGGRAPH '19: ACM SIGGRAPH 2019 Courses, July 2019, Article No. 4. https://doi.org/10.1145/3305366.3328028
We present a portable multi-camera system for recording panoramic light field video content. The proposed system captures wide baseline (0.8 meters), high resolution (>15 pixels per degree), large field of view (>220°) light fields at 30 frames per second. The capture array contains 47 time-synchronized cameras distributed on the surface of a hemispherical, 0.92 meter diameter plastic dome. We use commercially available action sports cameras (Yi 4k) mounted inside the dome using 3D printed brackets. The dome, mounts, triggering hardware and cameras are...
Precomputed radiance transfer (PRT) enables all-frequency relighting with complex illumination, materials and shadows. To achieve real-time performance, PRT exploits angular and spatial coherence in the light transport. Temporal coherence of the lighting from frame to frame is an important, but unexplored, additional form of coherence for PRT. In this paper, we develop incremental methods for approximating the differences in lighting between consecutive frames. We analyze the wavelet decomposition of lighting over typical motion sequences, and observe differing...
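The incremental idea — re-applying the transport operator only to the lighting coefficients that changed since the previous frame — can be sketched as follows (a hypothetical dense-matrix simplification; the paper operates on sparse frame-to-frame deltas of wavelet lighting coefficients):

```python
import numpy as np

def incremental_relight(prev_result, transport, light_prev, light_new,
                        eps=1e-6):
    """Update a relit result given the previous frame's result, the
    precomputed light transport matrix (pixels x lighting coefficients),
    and the old/new lighting coefficient vectors. Only columns whose
    coefficients changed by more than `eps` are touched, so the cost
    scales with the size of the temporal delta, not the full basis."""
    delta = light_new - light_prev
    changed = np.abs(delta) > eps  # sparse set of updated coefficients
    return prev_result + transport[:, changed] @ delta[changed]
```

When consecutive frames share most of their lighting (the common case for smooth motion), the boolean mask selects only a handful of columns, which is where the speedup over a full `transport @ light_new` product comes from.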
We present a novel neural algorithm for performing high-quality, high-resolution, real-time view synthesis. From a sparse set of input RGB images or video streams, our network both reconstructs the 3D scene and renders novel views at 1080p resolution at 30fps on an NVIDIA A100. Our feed-forward network generalizes across a wide variety of datasets and scenes, and produces state-of-the-art quality for a real-time method. It approaches, and in some cases surpasses, the quality of top offline methods. In order to achieve these results we use a combination of several...