Ryan Overbeck

ORCID: 0009-0004-8485-1837
About
Research Areas
  • Advanced Vision and Imaging
  • Computer Graphics and Visualization Techniques
  • Radiation Dose and Imaging
  • Advanced X-ray and CT Imaging
  • 3D Shape Modeling and Analysis
  • Image Enhancement Techniques
  • Digital Radiography and Breast Imaging
  • Astronomical Observations and Instrumentation
  • Advanced Optical Imaging Technologies
  • Calibration and Measurement Techniques
  • Advanced Radiotherapy Techniques
  • Image Processing Techniques and Applications
  • Graphite, nuclear technology, radiation studies
  • Remote Sensing and LiDAR Applications
  • Image and Signal Denoising Methods
  • Advanced Image Processing Techniques
  • Medical Imaging Techniques and Applications
  • Advanced X-ray Imaging Techniques
  • Architecture and Computational Design
  • Augmented Reality Applications
  • Virtual Reality Applications and Impacts
  • Image and Video Stabilization
  • Sensor Technology and Measurement Systems
  • Multimodal Machine Learning Applications
  • Computational Geometry and Mesh Generation

Google (United States)
2018-2024

Columbia University
2006-2011

TÜV Nord (Germany)
1986-1989

We present a novel approach to view synthesis using multiplane images (MPIs). Building on recent advances in learned gradient descent, our algorithm generates an MPI from a set of sparse camera viewpoints. The resulting method incorporates occlusion reasoning, improving performance on challenging scene features such as object boundaries, lighting reflections, thin structures, and scenes with high depth complexity. We show that our method achieves high-quality, state-of-the-art results on two datasets: the...

10.1109/cvpr.2019.00247 article EN 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019-06-01
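The paper's output representation, a multiplane image, is rendered by over-compositing a stack of fronto-parallel RGBA planes from back to front. A minimal NumPy sketch of that compositing step follows; the array shapes are made up for illustration, and the learned-gradient-descent network that produces the planes is not shown.

```python
import numpy as np

def composite_mpi(rgba_planes):
    """Render an MPI for its reference view by over-compositing
    RGBA planes from back (first) to front (last).

    rgba_planes: float array of shape (D, H, W, 4), values in [0, 1],
                 ordered far-to-near.
    Returns an (H, W, 3) image.
    """
    image = np.zeros(rgba_planes.shape[1:3] + (3,), dtype=np.float32)
    for plane in rgba_planes:                          # far to near
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        image = rgb * alpha + image * (1.0 - alpha)    # standard "over" operator
    return image

# Toy usage: 8 planes of a 4x4 image.
mpi = np.random.rand(8, 4, 4, 4).astype(np.float32)
print(composite_mpi(mpi).shape)                        # (4, 4, 3)
```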

We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells that are better suited to representing panoramic content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual...

10.1145/3386569.3392485 article EN ACM Transactions on Graphics 2020-08-12
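Replacing MPI planes with concentric spherical shells changes the geometry of the per-ray samples but not the compositing idea. Below is a small sketch of finding where a viewing ray crosses a set of shells, assuming a viewer near the shells' common center; it is an illustrative stand-in, not the paper's renderer.

```python
import math

def ray_sphere_hits(origin, direction, radii):
    """Distances along a normalized ray at which it crosses concentric
    spheres centered at the world origin; samples on these shells would
    then be over-composited back to front, analogously to MPI planes."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c0 = ox * ox + oy * oy + oz * oz
    hits = []
    for r in radii:
        disc = b * b - 4.0 * (c0 - r * r)
        if disc >= 0.0:
            t = (-b + math.sqrt(disc)) / 2.0   # far intersection (ray starts inside)
            if t > 0.0:
                hits.append((r, t))
    return hits

print(ray_sphere_hits((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), radii=[1.0, 2.0, 4.0]))
```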

We present a system for acquiring, processing, and rendering panoramic light field still photography for display in Virtual Reality (VR). We acquire spherical light field datasets with two novel camera rigs designed for portable and efficient acquisition. We introduce a real-time reconstruction algorithm that uses a per-view geometry and a disk-based blending field. We also demonstrate how to use a prefiltering operation to project from a high-quality offline model into our real-time model while suppressing artifacts. We present a practical approach to compressing light fields...

10.1145/3272127.3275031 article EN ACM Transactions on Graphics 2018-11-28
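The real-time renderer blends contributions from nearby captured views. The sketch below shows a generic view-dependent blending weight, in the spirit of unstructured light field rendering, where cameras whose rays align with the desired ray get more weight; the paper's disk-based blending field is more involved, and `sigma` here is an invented smoothing parameter.

```python
import numpy as np

def blend_weights(desired_dir, camera_dirs, sigma=0.05):
    """Generic view-dependent blending weights for light field rendering:
    cameras whose rays point closer to the desired ray get more weight.
    desired_dir: (3,) unit vector; camera_dirs: (N, 3) unit vectors."""
    cos_angle = camera_dirs @ desired_dir               # angular similarity
    penalty = 1.0 - np.clip(cos_angle, -1.0, 1.0)       # 0 when perfectly aligned
    w = np.exp(-penalty / sigma)                        # smooth falloff
    return w / w.sum()

dirs = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 0.995], [0.3, 0.0, 0.954]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(blend_weights(np.array([0.0, 0.0, 1.0]), dirs))
```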

Effects such as depth of field, area lighting, antialiasing, and global illumination require evaluating a complex high-dimensional integral at each pixel of an image. We develop a new adaptive rendering algorithm that greatly reduces the number of samples needed for Monte Carlo integration. Our method renders directly into an image-space wavelet basis. First, we adaptively distribute samples to reduce the variance of the basis' scale coefficients, while using the wavelet coefficients to find edges. Working in wavelets, rather than...

10.1145/1618452.1618486 article EN ACM Transactions on Graphics 2009-12-01
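A toy illustration of the two ingredients named in the abstract: a single level of the 2D Haar transform, and a sample budget driven by local detail magnitude. This is a crude stand-in for the paper's variance-based estimator, with made-up image data.

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar transform: returns the coarse 'scale'
    band plus horizontal/vertical/diagonal detail bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    scale    = (a + b + c + d) / 4.0
    detail_h = (a - b + c - d) / 4.0
    detail_v = (a + b - c - d) / 4.0
    detail_d = (a - b - c + d) / 4.0
    return scale, detail_h, detail_v, detail_d

def sample_budget(detail_h, detail_v, detail_d, total_samples):
    """Allocate extra samples in proportion to local detail magnitude,
    a crude stand-in for variance-driven adaptive sampling."""
    priority = np.abs(detail_h) + np.abs(detail_v) + np.abs(detail_d) + 1e-6
    return np.round(total_samples * priority / priority.sum()).astype(int)

img = np.random.rand(8, 8)
scale, dh, dv, dd = haar2d_level(img)
print(sample_budget(dh, dv, dd, total_samples=1000).sum())
```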

The increasing demand for 3D content in augmented and virtual reality has motivated the development of volumetric performance capture systems such as the Light Stage. Recent advances are pushing free viewpoint relightable videos of dynamic human performances closer to photorealistic quality. However, despite significant efforts, these sophisticated systems are limited by reconstruction and rendering algorithms which do not fully model complex 3D structures and higher order light transport effects such as global...

10.1145/3414685.3417814 article EN ACM Transactions on Graphics 2020-11-27

Current systems for editing BRDFs typically allow users to adjust analytic parameters while visualizing the results in a simplified setting (e.g. unshadowed point light). This paper describes a real-time rendering system that enables interactive edits of BRDFs, as rendered in their final placement on objects in a static scene, lit by direct, complex illumination. All-frequency effects (ranging from near-mirror reflections and hard shadows to diffuse shading and soft shadows) are rendered using a precomputation-based...

10.1145/1141911.1141979 article EN ACM Transactions on Graphics 2006-07-01
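The property that makes interactive editing possible in such precomputation-based systems is linearity: once per-pixel responses to a set of BRDF basis functions are baked offline for the fixed scene and lighting, an edit reduces to a re-weighting. A toy sketch under that assumption; the basis size and the random "precomputed" responses are invented for illustration.

```python
import numpy as np

# Toy precomputation-based editing: bake each pixel's response to a set of
# BRDF basis functions under a fixed scene and lighting, then an edit is a
# linear combination evaluated at interactive rates.
num_pixels, num_brdf_basis = 16, 6
responses = np.random.rand(num_pixels, num_brdf_basis)   # "precomputed" offline

def render_with_edit(brdf_weights):
    """Interactive step: weighted sum of the precomputed responses."""
    return responses @ brdf_weights

print(render_with_edit(np.ones(num_brdf_basis) / num_brdf_basis).shape)  # (16,)
```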

Efficiently calculating accurate soft shadows cast by area light sources remains a difficult problem. Ray tracing based approaches are subject to noise or banding, and most other methods either scale poorly with scene geometry or place restrictions on geometry and/or light source size and shape. Beam tracing is one solution which has historically been considered too slow and complicated for practical rendering applications. Beam tracing's performance has been hindered by complex intersection tests and a lack of good acceleration structures...

10.5555/2383847.2383861 article EN Eurographics Symposium on Rendering Techniques 2007-06-25
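For contrast with beam tracing, here is the noisy ray-traced baseline mentioned above: estimating area-light visibility by shooting shadow rays to random points on the light. The occluder and light sampling below are toy placeholders, not anything from the paper.

```python
import random

def soft_shadow(point, light_samples, occluded):
    """Fraction of an area light visible from `point`, estimated by
    shooting shadow rays toward random points on the light. This is the
    noisy Monte Carlo baseline that beam tracing aims to avoid."""
    visible = sum(0 if occluded(point, s) else 1 for s in light_samples)
    return visible / len(light_samples)

# Toy occluder: blocks any light sample with x < 0.
occluded = lambda p, s: s[0] < 0.0
samples = [(random.uniform(-1, 1), random.uniform(-1, 1), 2.0) for _ in range(256)]
print(soft_shadow((0.0, 0.0, 0.0), samples, occluded))   # roughly 0.5
```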

In this paper, we explore large ray packet algorithms for acceleration structure traversal and frustum culling in the context of Whitted ray tracing, and examine how these methods respond to varying packet size, scene complexity, and recursion complexity. We offer a new traversal algorithm which is robust to degrading ray coherence and a method for generating bounds around reflection and refraction packets. We compare, adjust, and finally compose the most effective methods into a real-time ray tracer. With the aid of multi-core CPU technology, our system renders complex...

10.1109/rt.2008.4634619 article EN 2008-08-01
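Frustum culling for a ray packet conservatively skips a bounding box only when the box lies entirely outside one of the packet frustum's bounding planes. Below is a generic plane/AABB test along those lines; it is not the specific traversal described in the paper.

```python
def frustum_culls_aabb(planes, box_min, box_max):
    """Conservative frustum culling: the AABB can be skipped only if it lies
    entirely outside at least one bounding plane of the packet's frustum.
    planes: list of ((nx, ny, nz), d) with n·p + d >= 0 meaning 'inside'."""
    for (nx, ny, nz), d in planes:
        # Pick the box corner farthest along the plane normal (the "p-vertex").
        px = box_max[0] if nx >= 0 else box_min[0]
        py = box_max[1] if ny >= 0 else box_min[1]
        pz = box_max[2] if nz >= 0 else box_min[2]
        if nx * px + ny * py + nz * pz + d < 0:
            return True          # whole box outside this plane -> cull
    return False                 # box may intersect the frustum

# Two side planes bounding x to [-1, 1]; a box at x in [2, 3] is culled.
planes = [((1.0, 0.0, 0.0), 1.0), ((-1.0, 0.0, 0.0), 1.0)]
print(frustum_culls_aabb(planes, (2.0, -1.0, 0.0), (3.0, 1.0, 1.0)))  # True
```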

We describe a new technique for coherent out-of-core point-based global illumination and ambient occlusion. Point-based global illumination (PBGI) is used in production to render tremendously complex scenes, so in-core storage of the point and octree data structures quickly becomes a problem. However, a simple extension of the classical top-down octree building algorithm would be extremely inefficient due to the large amount of I/O required. Our method extends previous PBGI algorithms with an approach that uses minimal memory and stores data on disk compactly...

10.1111/j.1467-8659.2011.01995.x article EN Computer Graphics Forum 2011-06-01
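One common way to make point processing coherent and disk friendly is to sort points by Morton code so that spatially nearby points end up nearby on disk. The sketch below shows that generic technique; it is not necessarily the ordering or build strategy used in the paper.

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of quantized x, y, z coordinates into a Morton
    code, giving point sets a spatially coherent, cache/disk friendly order."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

points = [(512, 3, 900), (513, 3, 899), (2, 1000, 5)]
print(sorted(points, key=lambda p: morton3(*p)))   # near-identical points stay adjacent
```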

Depth-of-field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, this effect is also extremely costly. It can take hundreds to thousands of samples to achieve noise-free results using Monte Carlo integration. This paper introduces an efficient adaptive depth-of-field rendering algorithm that achieves noise-free results with significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the sampling phase, the sample density is determined by a 'blur-size' map...

10.1111/j.1467-8659.2011.01854.x article EN Computer Graphics Forum 2011-03-21
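A per-pixel 'blur-size' map typically stores something like the thin-lens circle of confusion. A standard formula for that quantity is sketched below; the paper's exact definition of blur size may differ, and the lens parameters are illustrative.

```python
def circle_of_confusion(depth, focus_depth, focal_length, aperture):
    """Thin-lens blur diameter on the sensor for a point at `depth`.
    All distances in meters; `aperture` is the lens diameter."""
    image_dist = 1.0 / (1.0 / focal_length - 1.0 / focus_depth)   # lens equation
    return aperture * abs(depth - focus_depth) / depth * image_dist / focus_depth

# 50 mm lens at f/2 (25 mm aperture), focused at 2 m; point at 4 m.
print(circle_of_confusion(4.0, 2.0, 0.050, 0.025))   # ~0.32 mm on the sensor
```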

Light Fields let us experience freedom of motion and realistic reflections and translucence like never before in VR. Explore the Gamble House, the Mosaic Tile House, and the Space Shuttle Discovery. These navigable light field stills showcase the emerging technology Google is using to power its next generation of VR content.

10.1145/3226552.3226557 article EN 2018-08-06

Light fields can provide transportive immersive experiences with a level of realism unsurpassed by any other imaging technology. Within a limited viewing volume, light fields accurately reproduce stereo parallax, motion parallax, reflections, refractions, and volumetric effects for real-world scenes. While light fields have been explored in computer graphics since the mid-90's [Gortler et al. 1996; Levoy and Hanrahan 1996], practical systems for recording, processing, and delivering high quality light fields have remained out of reach.

10.1145/3214745.3214811 article EN 2018-08-08

This Immersive Pavilion installation introduces our new system for capturing, reconstructing, compressing, and rendering light field video content. By leveraging DeepView, a recently introduced view synthesis algorithm, we can reconstruct challenging scenes with view-dependent reflections, semi-transparent surfaces, and near-field objects as close as 34 cm to the surface of our 46 camera capture rig. Improving upon past systems that required specialized storage and graphics hardware for playback, our compressed...

10.1145/3388536.3407878 article EN 2020-08-15

Capture4VR: From VR Photography to VR Video. Course by Christian Richardt (University of Bath), Peter Hedman (UCL), Ryan S. Overbeck (Google), Brian Cabral (Facebook), Robert Konrad (Stanford University), and Steve Sullivan (Microsoft). SIGGRAPH '19: ACM SIGGRAPH 2019 Courses, Article No. 4, pp. 1–319.

10.1145/3305366.3328028 article EN 2019-07-28

We present a portable multi-camera system for recording panoramic light field video content. The proposed system captures wide baseline (0.8 meters), high resolution (>15 pixels per degree), large field of view (>220°) light fields at 30 frames per second. The camera array contains 47 time-synchronized cameras distributed on the surface of a hemispherical, 0.92 meter diameter plastic dome. We use commercially available action sports cameras (Yi 4k) mounted inside the dome using 3D printed brackets. The dome, mounts, triggering hardware and cameras are...

10.1145/3355056.3364593 article EN 2019-11-17
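A back-of-envelope check of the raw data volume such a rig produces, assuming each Yi 4k camera records 3840x2160 at 30 fps in 8-bit YUV 4:2:0; these encoding assumptions are illustrative, not taken from the paper.

```python
# Rough, uncompressed data rate for a 47-camera rig (assumed 4K @ 30 fps, 4:2:0).
cameras = 47
width, height, fps = 3840, 2160, 30
bytes_per_pixel = 1.5                                    # 8-bit YUV 4:2:0

per_camera = width * height * bytes_per_pixel * fps      # bytes per second
total = per_camera * cameras
print(f"{per_camera / 1e6:.0f} MB/s per camera, {total / 1e9:.1f} GB/s for the rig")
```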

Precomputed radiance transfer (PRT) enables all-frequency relighting with complex illumination, materials and shadows. To achieve real-time performance, PRT exploits angular coherence in the lighting and spatial coherence in the light transport. Temporal coherence of the lighting from frame to frame is an important, but unexplored, additional form of coherence for PRT. In this paper, we develop incremental methods for approximating the differences in lighting between consecutive frames. We analyze the wavelet decomposition of lighting over typical motion sequences, and observe differing...

10.5555/2383894.2383914 article EN Eurographics Symposium on Rendering Techniques 2006-06-26
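The incremental idea can be sketched with plain linear algebra: given a precomputed transport matrix, the new frame's image equals the previous one plus the transport applied to the change in lighting coefficients, and only the largest changes need to be applied. A toy version follows, with random transport and lighting; the paper works with wavelet lighting and a more careful choice of which coefficients to keep.

```python
import numpy as np

num_pixels, num_basis = 1024, 64
T = np.random.rand(num_pixels, num_basis)   # "precomputed" light transport

def incremental_relight(prev_image, delta_light, keep=8):
    """Update the previous frame using only the `keep` largest
    lighting-coefficient changes instead of a full relight."""
    idx = np.argsort(np.abs(delta_light))[-keep:]        # most significant deltas
    return prev_image + T[:, idx] @ delta_light[idx]

l0 = np.random.rand(num_basis)
l1 = l0 + 0.01 * np.random.randn(num_basis)              # small frame-to-frame change
img0 = T @ l0
img1_approx = incremental_relight(img0, l1 - l0)
print(np.abs(img1_approx - T @ l1).max())                # error from dropped coefficients
```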

We present a novel neural algorithm for performing high-quality, high-resolution, real-time view synthesis. From a sparse set of input RGB images or video streams, our network both reconstructs the 3D scene and renders views at 1080p resolution at 30fps on an NVIDIA A100. Our feed-forward network generalizes across a wide variety of datasets and scenes and produces state-of-the-art quality for a real-time method. Our quality approaches, and in some cases surpasses, that of top offline methods. In order to achieve these results we use a combination of several...

10.1145/3687953 article EN ACM Transactions on Graphics 2024-11-19
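A quick arithmetic check of what the stated real-time target implies per frame; this says nothing about the method's internals.

```python
# The real-time target stated above: 1080p at 30 fps on a single GPU.
width, height, fps = 1920, 1080, 30
frame_budget_ms = 1000.0 / fps
pixels_per_second = width * height * fps
print(f"{frame_budget_ms:.1f} ms per frame, {pixels_per_second / 1e6:.1f} Mpixels/s")
```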

We present a novel neural algorithm for performing high-quality, high-resolution, real-time view synthesis. From a sparse set of input RGB images or video streams, our network both reconstructs the 3D scene and renders views at 1080p resolution at 30fps on an NVIDIA A100. Our feed-forward network generalizes across a wide variety of datasets and scenes and produces state-of-the-art quality for a real-time method. Our quality approaches, and in some cases surpasses, that of top offline methods. In order to achieve these results we use a combination of several...

10.48550/arxiv.2411.16680 preprint EN arXiv (Cornell University) 2024-11-25