- Advanced Vision and Imaging
- Computer Graphics and Visualization Techniques
- Image Enhancement Techniques
- Generative Adversarial Networks and Image Synthesis
- 3D Surveying and Cultural Heritage
- Optical Measurement and Interference Techniques
- Color Science and Applications
- Advanced Image and Video Retrieval Techniques
- Image Processing Techniques and Applications
- 3D Shape Modeling and Analysis
- Video Surveillance and Tracking Methods
- Image Processing and 3D Reconstruction
- Remote Sensing and LiDAR Applications
- Robotics and Sensor-Based Localization
- Video Analysis and Summarization
- Face Recognition and Analysis
- Constraint Satisfaction and Optimization
- Distributed and Parallel Computing Systems
- Digital Media Forensic Detection
- Advanced Optical Sensing Technologies
- Scientific Computing and Data Management
- Urban Heat Island Mitigation
- Optical Polarization and Ellipsometry
- Image and Video Quality Assessment
- Visual Attention and Saliency Detection
Adobe Systems (United States)
2019-2024
Research Canada
2024
University of California, San Diego
2020
Université Laval
2013-2018
We present a convolutional neural network (CNN)-based technique to estimate high dynamic range outdoor illumination from a single low dynamic range image. To train the CNN, we leverage a large dataset of outdoor panoramas. We fit a low-dimensional physically-based model to the skies in these panoramas, giving us a compact set of parameters (including sun position, atmospheric conditions, and camera parameters). We then extract limited field-of-view images from the panoramas, and train the CNN on these input image–output lighting parameter pairs....
We present a method to estimate lighting from a single image of an indoor scene. Previous work has used an environment map representation that does not account for the localized nature of indoor lighting. Instead, we represent lighting as a set of discrete 3D lights with geometric and photometric parameters. We train a deep neural network to regress these parameters from a single image, on a dataset of environment maps annotated with depth. We propose a differentiable layer to convert these parameters to an environment map in order to compute our loss; this bypasses the challenge of establishing correspondences between...
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal, and reflectance properties at any 3D point in a scene using a fully-connected neural network. We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light. We demonstrate that neural reflectance fields can be estimated from images captured with a simple collocated camera-light setup, and accurately model the appearance of real-world scenes with complex geometry and reflectance. Once estimated,...
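At its core, the differentiable ray marching mentioned above reduces to the standard emission-absorption quadrature along each camera ray. A minimal sketch of that accumulation, with toy scalar densities and colors in place of the paper's learned network:

```python
import math

def march_ray(sigmas, colors, step):
    """Composite radiance along one ray: alpha_i = 1 - exp(-sigma_i * step),
    each sample weighted by the transmittance T accumulated so far."""
    radiance, transmittance = 0.0, 1.0
    for sigma, color in zip(sigmas, colors):
        alpha = 1.0 - math.exp(-sigma * step)
        radiance += transmittance * alpha * color
        transmittance *= 1.0 - alpha
    return radiance

# A toy ray: two empty samples, then a dense region with color 0.8.
sigmas = [0.0, 0.0, 5.0, 5.0]
colors = [0.0, 0.0, 0.8, 0.8]
print(round(march_ray(sigmas, colors, step=0.5), 3))
```

Because every operation is a smooth function of the per-sample densities and colors, gradients can flow back to whatever network predicts them, which is what makes this rendering scheme usable for training.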
We propose a data-driven learned sky model, which we use for outdoor lighting estimation from a single image. As no large-scale dataset of images and their corresponding ground truth illumination is readily available, we combine complementary datasets to train our approach: the vast diversity of outdoor conditions in SUN360, and the radiometrically calibrated, physically accurate Laval HDR sky database. Our key contribution is to provide a holistic view of both lighting modeling and estimation, solving both problems end-to-end. From a test image,...
Most current single image camera calibration methods rely on specific image features or user input, and cannot be applied to natural images captured in uncontrolled settings. We propose directly inferring camera calibration parameters from a single image using a deep convolutional neural network. This network is trained on automatically generated samples from a large-scale panorama dataset, and considerably outperforms other methods, including recent learning-based approaches, in terms of standard L2 error. However, we argue that in many cases it...
Recent work [28], [5] has demonstrated that volumetric scene representations combined with differentiable volume rendering can enable photo-realistic rendering for challenging scenes that mesh reconstruction fails on. However, these methods entangle geometry and appearance in a "black-box" volume that cannot be edited. Instead, we present an approach that explicitly disentangles geometry, represented as a continuous 3D volume, from appearance, represented as a 2D texture map. We achieve this by introducing a 3D-to-2D mapping (or...
We present a neural network that predicts HDR outdoor illumination from a single LDR image. At the heart of our work is a method to accurately learn HDR lighting from LDR panoramas under any weather condition. We achieve this by training another CNN (on a combination of synthetic and real images) to take as input an LDR panorama and regress the parameters of the Lalonde-Matthews model. This model is trained such that it a) reconstructs the appearance of the sky, and b) renders the appearance of objects lit by this illumination. We use it to label a large-scale dataset of panoramas with lighting parameters, and with them train our single image...
This paper presents SCOOP, a new Python framework for automatically distributing dynamic task hierarchies. A task hierarchy refers to tasks that can recursively spawn an arbitrary number of subtasks. The underlying computing infrastructure consists of a simple list of resources. The typical use case is to run the user's main program under the umbrella of the SCOOP module, where it becomes the root task and spawns any number of subtasks through the standard "futures" API of Python; these subtasks may themselves spawn other subsubtasks, etc. The hierarchy is fully dynamic in the sense that it is unknown until...
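Since SCOOP adopts Python's standard futures interface, its usage mirrors the standard library's `concurrent.futures`; the sketch below uses the stdlib module as a stand-in (with SCOOP, one would write `from scoop import futures` and launch the script via `python -m scoop` to distribute tasks over the resource list):

```python
# Futures-style task spawning, as in SCOOP's programming model.
# concurrent.futures is used here only as a local stand-in.
from concurrent.futures import ThreadPoolExecutor

def subtask(n):
    # In SCOOP, a subtask like this could itself submit further
    # subtasks, forming the dynamic hierarchy; kept flat for brevity.
    return n * n

def main():
    with ThreadPoolExecutor() as executor:
        # map() dispatches one subtask per input and gathers results in order.
        return list(executor.map(subtask, range(5)))

print(main())  # -> [0, 1, 4, 9, 16]
```

The same `map`/`submit` calls are what a SCOOP root task would issue; only the executor backing them changes from local threads to distributed workers.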
Authoring high-quality digital materials is key to realism in 3D rendering. Previous generative models for materials have been trained exclusively on synthetic data; such data is limited in availability and exhibits a visual gap to real materials. We circumvent this limitation by proposing PhotoMat: the first material generator trained on photos of material samples captured using a cell phone camera with flash. Supervision on individual material maps is not available in this setting. Instead, we train a neural material representation that is rendered with a learned relighting...
Geometric camera calibration is often required for applications that understand the perspective of an image. We propose Perspective Fields as a representation that models the local perspective properties of an image. Perspective Fields contain per-pixel information about the camera view, parameterized as an Up-vector and a Latitude value. This representation has a number of advantages: it makes minimal assumptions about the camera model, and is invariant or equivariant to common image editing operations like cropping, warping, and rotation. It is also more interpretable and aligned with human perception. We train a neural...
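To illustrate the Latitude component of such a per-pixel representation, here is a hedged sketch computing the latitude of the viewing ray through one pixel for an idealized upright pinhole camera (zero roll and pitch assumed; the function name and parameters are illustrative, not the paper's code):

```python
import math

def latitude(u, v, fx, fy, cx, cy):
    """Latitude of the viewing ray through pixel (u, v): the angle
    between the ray and the horizontal plane, in degrees.
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Assumes an upright camera, so the Up-vector is simply 'up'."""
    x = (u - cx) / fx
    y = (cy - v) / fy  # image v grows downward, so flip the sign
    return math.degrees(math.atan2(y, math.hypot(x, 1.0)))

# Pixels above the principal point look upward (positive latitude).
print(round(latitude(320, 120, 500, 500, 320, 240), 2))
```

With camera rotation, both the per-pixel Up-vector and the latitude would vary across the image, which is exactly the local information a Perspective Field stores.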
We introduce SynthLight, a diffusion model for portrait relighting. Our approach frames image relighting as a re-rendering problem, where pixels are transformed in response to changes in environmental lighting conditions. Using a physically-based rendering engine, we synthesize a dataset that simulates this lighting-conditioned transformation with 3D head assets under varying lighting. We propose two training and inference strategies to bridge the gap between the synthetic and real image domains: (1) multi-task training that takes...
Images as an artistic medium often rely on specific camera angles and lens distortions to convey ideas or emotions; however, such precise control is missing in current text-to-image models. We propose an efficient and general solution that allows precise control over the camera when generating both photographic and artistic images. Unlike prior methods that rely on predefined shots, we rely solely on four simple extrinsic and intrinsic camera parameters, removing the need for pre-existing geometry, reference 3D objects, and multi-view data. We also present a novel dataset...
We present MatSwap, a method to transfer materials to designated surfaces in an image photorealistically. Such a task is non-trivial due to the large entanglement of material appearance, geometry, and lighting in a photograph. In the literature, material editing methods typically rely on either cumbersome text engineering or extensive manual annotations requiring artist knowledge of 3D scene properties that are impractical to obtain. In contrast, we propose to directly learn the relationship between the input material -- as observed on a flat surface...
Most indoor 3D scene reconstruction methods focus on recovering geometry and layout. In this work, we go beyond this to propose PhotoScene (code: https://github.com/ViLab-UCSD/PhotoScene), a framework that takes input image(s) of a scene along with approximately aligned CAD geometry (either reconstructed automatically or manually specified) and builds a photorealistic digital twin with high-quality materials similar...
Caricature, a type of exaggerated artistic portrait, amplifies the distinctive, yet nuanced traits of human faces. This task is typically left to artists, as it has proven difficult to capture subjects' unique characteristics well using automated methods. Recent development of deep end-to-end methods has achieved promising results in capturing style and higher-level exaggerations. However, a key part of caricatures, face warping, has remained challenging for these systems. In this work, we propose AutoToon, the first...
We propose a method to extrapolate a 360° field of view from a single image that allows for user-controlled synthesis of the out-painted content. To do so, we propose improvements to an existing GAN-based in-painting architecture to adapt it to out-painting in a panoramic representation. Our method obtains state-of-the-art results and outperforms previous methods on standard image quality metrics. To allow controlled out-painting, we introduce a novel guided co-modulation framework, which drives the generation process with a common pretrained...
Lighting effects such as shadows or reflections are key in making synthetic images realistic and visually appealing. To generate such effects, traditional computer graphics uses a physically-based renderer along with 3D geometry. To compensate for the lack of geometry in 2D image compositing, recent deep learning-based approaches introduced a pixel height representation to generate soft shadows and reflections. However, the lack of geometry limits the quality of the generated reflections and constrains them to pure specular ones. We introduce PixHt-Lab, a system leveraging an...
Photometric Stereo (PS) under outdoor illumination remains a challenging, ill-posed problem due to insufficient variability in illumination. Months-long capture sessions are typically used in this setup, with little success on shorter, single-day time intervals. In this paper, we investigate the solution of PS over a single day, under different weather conditions. First, we analyze the relationship between weather and surface reconstructability in order to understand when natural lighting allows existing PS algorithms to work. Our analysis...
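For context, the classic Lambertian photometric stereo baseline that such outdoor setups build on solves, per pixel, a linear system relating known light directions to observed intensities. A minimal self-contained sketch with toy lights (Cramer's rule in pure Python; real pipelines use more lights and least squares):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_pixel(lights, intensities):
    """Solve L @ (albedo * normal) = I for one pixel, with three
    known directional lights (rows of `lights`) and the three
    observed intensities. Returns (albedo, unit normal)."""
    d = det3(lights)
    g = []
    for k in range(3):  # Cramer's rule: replace column k with I
        mk = [row[:] for row in lights]
        for r in range(3):
            mk[r][k] = intensities[r]
        g.append(det3(mk) / d)
    albedo = sum(x * x for x in g) ** 0.5
    normal = [x / albedo for x in g]
    return albedo, normal

# Toy example: axis-aligned lights, surface facing the z light.
lights = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
albedo, normal = solve_pixel(lights, [0.0, 0.0, 0.6])
print(albedo, normal)  # albedo 0.6, normal pointing along +z
```

The ill-posedness the abstract refers to arises when the outdoor light directions are too similar over the capture interval, making the matrix of light directions nearly singular and this solve unstable.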