Yannick Hold-Geoffroy

ORCID: 0000-0002-1060-6941
Research Areas
  • Advanced Vision and Imaging
  • Computer Graphics and Visualization Techniques
  • Image Enhancement Techniques
  • Generative Adversarial Networks and Image Synthesis
  • 3D Surveying and Cultural Heritage
  • Optical measurement and interference techniques
  • Color Science and Applications
  • Advanced Image and Video Retrieval Techniques
  • Image Processing Techniques and Applications
  • 3D Shape Modeling and Analysis
  • Video Surveillance and Tracking Methods
  • Image Processing and 3D Reconstruction
  • Remote Sensing and LiDAR Applications
  • Robotics and Sensor-Based Localization
  • Video Analysis and Summarization
  • Face recognition and analysis
  • Constraint Satisfaction and Optimization
  • Distributed and Parallel Computing Systems
  • Digital Media Forensic Detection
  • Advanced Optical Sensing Technologies
  • Scientific Computing and Data Management
  • Urban Heat Island Mitigation
  • Optical Polarization and Ellipsometry
  • Image and Video Quality Assessment
  • Visual Attention and Saliency Detection

Adobe Systems (United States)
2019-2024

Research Canada
2024

University of California, San Diego
2020

Université Laval
2013-2018

We present a convolutional neural network-based (CNN-based) technique to estimate high-dynamic range outdoor illumination from a single low dynamic range image. To train the CNN, we leverage a large dataset of outdoor panoramas. We fit a low-dimensional physically-based model to the skies in these panoramas, giving us a compact set of parameters (including sun position, atmospheric conditions, and camera parameters). We extract limited field-of-view images from the panoramas, and train the CNN with these input image–output lighting parameter pairs....

10.1109/cvpr.2017.255 article EN 2017-07-01
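One way to turn a continuous sun position into a compact CNN training target, as the abstract above describes, is to discretize it into angular bins. The bin layout below is an illustrative assumption, not the paper's exact grid:

```python
import numpy as np

# Hypothetical discretization of sun position (azimuth, elevation)
# into a single class index, as one possible regression/classification
# target for a lighting-estimation CNN. N_AZ and N_EL are assumptions.
N_AZ, N_EL = 32, 8

def sun_position_to_bin(azimuth_rad, elevation_rad):
    # wrap azimuth into [0, 2*pi) and map to one of N_AZ columns
    az = int((azimuth_rad % (2 * np.pi)) / (2 * np.pi) * N_AZ)
    # clamp elevation into [0, pi/2) and map to one of N_EL rows
    el = int(np.clip(elevation_rad / (np.pi / 2), 0, 0.999) * N_EL)
    return el * N_AZ + az
```

The inverse mapping (bin index back to bin-center angles) would let a network's predicted class be converted into a usable sun direction at inference time.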

We present a method to estimate lighting from a single image of an indoor scene. Previous work has used an environment map representation that does not account for the localized nature of indoor lighting. Instead, we represent lighting as a set of discrete 3D lights with geometric and photometric parameters. We train a deep neural network to regress these parameters from a single image, on a dataset of environment maps annotated with depth. We propose a differentiable layer to convert these parameters to an environment map to compute our loss; this bypasses the challenge of establishing correspondences between...

10.1109/iccv.2019.00727 article EN 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019-10-01
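A differentiable layer that converts parametric lights into an environment map, as the abstract above mentions, can be sketched by splatting each light as an angular Gaussian on the sphere. This formulation is my assumption for illustration; the paper's exact layer may differ:

```python
import numpy as np

# Sketch: render parametric 3D lights into an equirectangular
# environment map by splatting an angular Gaussian around each
# light's direction. Every operation here is differentiable, so a
# loss on the resulting map could backpropagate to the parameters.
def render_env_map(lights, height=32, width=64):
    # lights: list of (direction (3,), intensity, angular_sigma)
    lat = np.linspace(np.pi / 2, -np.pi / 2, height)   # top row = zenith
    lon = np.linspace(-np.pi, np.pi, width)
    lon_g, lat_g = np.meshgrid(lon, lat)
    # unit ray direction for every pixel of the map
    dirs = np.stack([np.cos(lat_g) * np.sin(lon_g),
                     np.sin(lat_g),
                     np.cos(lat_g) * np.cos(lon_g)], axis=-1)
    env = np.zeros((height, width))
    for d, intensity, sigma in lights:
        d = d / np.linalg.norm(d)
        ang = np.arccos(np.clip(dirs @ d, -1.0, 1.0))  # angle to light
        env += intensity * np.exp(-(ang ** 2) / (2 * sigma ** 2))
    return env
```

A single light pointing straight up produces a bright spot along the top row of the map and near-zero response at the bottom.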

We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene using a fully-connected neural network. We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light. We demonstrate that neural reflectance fields can be estimated from images captured with a simple collocated camera-light setup, and accurately model the appearance of real-world scenes with complex geometry and reflectance. Once estimated,...

10.48550/arxiv.2008.03824 preprint EN cc-by arXiv (Cornell University) 2020-01-01

We propose a data-driven learned sky model, which we use for outdoor lighting estimation from a single image. As no large-scale dataset of images and their corresponding ground truth illumination is readily available, we use complementary datasets to train our approach, combining the vast diversity of illumination conditions of SUN360 with the radiometrically calibrated and physically accurate Laval HDR database. Our key contribution is to provide a holistic view of both lighting modeling and estimation, solving both problems end-to-end. From a test image,...

10.1109/cvpr.2019.00709 article EN 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019-06-01

Most current single image camera calibration methods rely on specific image features or user input, and cannot be applied to natural images captured in uncontrolled settings. We propose directly inferring camera calibration parameters from a single image using a deep convolutional neural network. This network is trained using automatically generated samples from a large-scale panorama dataset, and considerably outperforms other methods, including recent learning-based approaches, in terms of standard L2 error. However, we argue that in many cases it...

10.1109/cvpr.2018.00250 article EN 2018-06-01

Recent work [28], [5] has demonstrated that volumetric scene representations combined with differentiable volume rendering can enable photo-realistic rendering for challenging scenes that mesh reconstruction fails on. However, these methods entangle geometry and appearance in a "black-box" volume that cannot be edited. Instead, we present an approach that explicitly disentangles geometry—represented as a continuous 3D volume—from appearance—represented as a 2D texture map. We achieve this by introducing a 3D-to-2D texture mapping (or...

10.1109/cvpr46437.2021.00704 article EN 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021-06-01

We present a neural network that predicts HDR outdoor illumination from a single LDR image. At the heart of our work is a method to accurately learn HDR lighting from LDR panoramas under any weather condition. We achieve this by training another CNN (on a combination of synthetic and real images) to take as input an LDR panorama, and regress the parameters of the Lalonde-Matthews model. This model is trained such that it a) reconstructs the appearance of the sky, and b) renders the appearance of objects lit by this illumination. We use this network to label a large-scale dataset of panoramas with lighting parameters and use them to train our single image...

10.1109/cvpr.2019.01040 article EN 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019-06-01

This paper presents SCOOP, a new Python framework for automatically distributing dynamic task hierarchies. A task hierarchy refers to tasks that can recursively spawn an arbitrary number of subtasks. The underlying computing infrastructure consists of a simple list of resources. The typical use case is to run the user's main program under the umbrella of the SCOOP module, where it becomes a root task that can spawn any number of subtasks through the standard "futures" API of Python, and where these subtasks may themselves spawn other subsubtasks, etc. The full task hierarchy is dynamic in the sense that it is unknown until...

10.1145/2616498.2616565 article EN 2014-07-11
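SCOOP exposes the same futures interface as Python's standard library (`from scoop import futures`); the pattern it builds on can be sketched with the stdlib alone, so the example runs without SCOOP installed:

```python
# Sketch of the "futures" pattern SCOOP distributes across machines,
# shown here with the standard library's local executor. Swapping in
# SCOOP's futures module would spread the same map over a cluster.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def parallel_sum_of_squares(values):
    # map the task over workers, then reduce the results locally
    with ThreadPoolExecutor(max_workers=4) as pool:
        return sum(pool.map(square, values))

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(10)))  # 285
```

Because each call through the futures API may itself submit further work, the same interface naturally expresses the recursive subtask spawning the paper describes.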

Authoring high-quality digital materials is key to realism in 3D rendering. Previous generative models for materials have been trained exclusively on synthetic data; such data is limited in availability and has a visual gap from real materials. We circumvent this limitation by proposing PhotoMat: the first material generator trained on photos of material samples captured using a cell phone camera with flash. Supervision on individual material maps is not available in this setting. Instead, we train a neural material representation that is rendered with a learned relighting...

10.1145/3588432.3591535 preprint EN 2023-07-19

Geometric camera calibration is often required for applications that understand the perspective of an image. We propose Perspective Fields as a representation that models the local perspective properties of an image. Perspective Fields contain per-pixel information about the camera view, parameterized as an Up-vector and a Latitude value. This representation has a number of advantages; it makes minimal assumptions about the camera model and is invariant or equivariant to common image editing operations like cropping, warping, and rotation. It is also more interpretable and aligned with human perception. We train a neural...

10.1109/cvpr52729.2023.01660 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01
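The per-pixel Latitude in the abstract above can be illustrated with a simple pinhole model: each pixel's viewing ray is rotated into world coordinates, and its latitude is the angle between the ray and the horizontal plane. This is my reading of the parameterization, not the authors' code:

```python
import numpy as np

# Sketch (assumed pinhole setup): latitude of the viewing ray at pixel
# (px, py), for a camera with principal point (cx, cy), focal length f
# (in pixels), and a pitch rotation about the camera's x axis.
def pixel_latitude(px, py, cx, cy, f, pitch_rad):
    # ray in camera coordinates; image y grows downward, so flip it
    r = np.array([px - cx, cy - py, f], dtype=float)
    r /= np.linalg.norm(r)
    # world up-component of the ray after pitching the camera
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    ry_world = c * r[1] + s * r[2]
    return np.arcsin(ry_world)  # latitude: 0 at horizon, pi/2 at zenith
```

With zero pitch the principal point lies on the horizon (latitude 0); pitching the camera up by some angle moves the principal point's ray to exactly that latitude, matching the intuition that the field encodes where the horizon and zenith fall in the image.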

We introduce SynthLight, a diffusion model for portrait relighting. Our approach frames image relighting as a re-rendering problem, where pixels are transformed in response to changes in environmental lighting conditions. Using a physically-based rendering engine, we synthesize a dataset to simulate this lighting-conditioned transformation with 3D head assets under varying lighting. We propose two training and inference strategies to bridge the gap between the synthetic and real image domains: (1) multi-task training that takes...

10.48550/arxiv.2501.09756 preprint EN arXiv (Cornell University) 2025-01-16

Images as an artistic medium often rely on specific camera angles and lens distortions to convey ideas or emotions; however, such precise control is missing in current text-to-image models. We propose an efficient and general solution that allows precise control over the camera when generating photographic images. Unlike prior methods that rely on predefined shots, we rely solely on four simple extrinsic and intrinsic camera parameters, removing the need for pre-existing geometry, reference 3D objects, and multi-view data. We also present a novel dataset...

10.48550/arxiv.2501.12910 preprint EN arXiv (Cornell University) 2025-01-22

We present MatSwap, a method to transfer materials to designated surfaces in an image photorealistically. Such a task is non-trivial due to the large entanglement of material appearance, geometry, and lighting in a photograph. In the literature, material editing methods typically rely on either cumbersome text engineering or extensive manual annotations requiring artist knowledge and 3D scene properties that are impractical to obtain. In contrast, we propose to directly learn the relationship between the input material -- as observed on a flat surface...

10.48550/arxiv.2502.07784 preprint EN arXiv (Cornell University) 2025-02-11

Most indoor 3D scene reconstruction methods focus on recovering geometry and layout. In this work, we go beyond this to propose PhotoScene (code: https://github.com/ViLab-UCSD/PhotoScene), a framework that takes input image(s) of a scene along with approximately aligned CAD geometry (either reconstructed automatically or manually specified) and builds a photorealistic digital twin with high-quality materials similar...

10.1109/cvpr52688.2022.01801 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01

Caricature, a type of exaggerated artistic portrait, amplifies the distinctive, yet nuanced traits of human faces. This task is typically left to artists, as it has proven difficult to capture subjects' unique characteristics well using automated methods. Recent developments in deep end-to-end methods have achieved promising results in capturing style and higher-level exaggerations. However, a key part of caricatures, face warping, has remained challenging for these systems. In this work, we propose AutoToon, the first...

10.1109/wacv45572.2020.9093543 article EN 2020-03-01

We propose a method to extrapolate a 360° field of view from a single image that allows for user-controlled synthesis of the out-painted content. To do so, we propose improvements to an existing GAN-based in-painting architecture for out-painting a panoramic image representation. Our method obtains state-of-the-art results and outperforms previous methods on standard image quality metrics. To allow controlled out-painting, we introduce a novel guided co-modulation framework, which drives the generation process with a common pretrained...

10.1109/3dv57658.2022.00059 article EN 2022 International Conference on 3D Vision (3DV) 2022-09-01

Lighting effects such as shadows or reflections are key in making synthetic images realistic and visually appealing. To generate such effects, traditional computer graphics uses a physically-based renderer along with 3D geometry. To compensate for the lack of geometry in 2D image compositing, recent deep learning-based approaches introduced a pixel height representation to generate soft shadows and reflections. However, the lack of geometry limits the quality of the generated effects and constrains reflections to pure specular ones. We introduce PixHt-Lab, a system leveraging an...

10.1109/cvpr52729.2023.01597 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01

10.1109/cvpr52733.2024.00894 article EN 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024-06-16

Photometric Stereo (PS) under outdoor illumination remains a challenging, ill-posed problem due to insufficient variability in illumination. Months-long capture sessions are typically used in this setup, with little success reported on shorter, single-day time intervals. In this paper, we investigate the solution of outdoor PS over a single day, in different weather conditions. First, we investigate the relationship between weather and surface reconstructability in order to understand when natural lighting allows existing PS algorithms to work. Our analysis...

10.1109/tpami.2019.2962693 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2019-12-27
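For reference, the classical Lambertian photometric stereo baseline that outdoor PS builds on solves a per-pixel least-squares problem: intensities under known light directions satisfy I = L (rho n), so the albedo-scaled normal falls out of a linear solve. This is the textbook method, not the paper's outdoor algorithm:

```python
import numpy as np

# Classical Lambertian photometric stereo for one pixel:
#   I: (n_lights,) observed intensities
#   L: (n_lights, 3) known unit light directions
# Solves I = L @ g for g = rho * n, then splits g into
# albedo rho (its norm) and unit surface normal n.
def lambertian_ps(I, L):
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    rho = np.linalg.norm(g)
    return g / rho, rho
```

With three or more non-coplanar lights the system is well-posed; the paper's setting is hard precisely because outdoor sun directions over a single day are nearly coplanar, degrading this conditioning.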