Jonathan Eisenmann

ORCID: 0000-0003-2018-0793
Research Areas
  • Advanced Vision and Imaging
  • Music Technology and Sound Studies
  • Computer Graphics and Visualization Techniques
  • Optical measurement and interference techniques
  • Human Motion and Animation
  • Music and Audio Processing
  • Image Enhancement Techniques
  • Image Processing Techniques and Applications
  • Video Analysis and Summarization
  • Evolutionary Algorithms and Applications
  • Advanced Optical Imaging Technologies
  • Advanced Optical Sensing Technologies
  • Innovations in Concrete and Construction Materials
  • Robotics and Sensor-Based Localization
  • Advanced Image Processing Techniques
  • Additive Manufacturing and 3D Printing Technologies
  • Design Education and Practice
  • Computational Geometry and Mesh Generation
  • Color perception and design
  • Generative Adversarial Networks and Image Synthesis
  • Algorithms and Data Compression
  • Visual Attention and Saliency Detection
  • Evacuation and Crowd Dynamics
  • Architecture, Art, Education
  • Metal Forming Simulation Techniques

Adobe Systems (United States)
2017-2023

The Ohio State University
2009-2014

General Dynamics (United States)
1989

Most current single image camera calibration methods rely on specific image features or user input, and cannot be applied to natural images captured in uncontrolled settings. We propose directly inferring camera calibration parameters from a single image using a deep convolutional neural network. This network is trained on automatically generated samples from a large-scale panorama dataset, and considerably outperforms other methods, including recent learning-based approaches, in terms of standard L2 error. However, we argue that in many cases it...

10.1109/cvpr.2018.00250 article EN 2018-06-01
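To make the recovered parameters concrete: once a network predicts a vertical field of view and a horizon position, standard pinhole geometry converts them into a focal length and a camera pitch. The sketch below illustrates that conversion only; the function names and the top-down row convention are my own, not from the paper.

```python
import math

def fov_to_focal(fov_deg: float, image_height: int) -> float:
    """Convert a predicted vertical field of view to a focal length in pixels."""
    return (image_height / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

def horizon_to_pitch(horizon_row: float, image_height: int, focal_px: float) -> float:
    """Recover camera pitch (radians) from the predicted horizon row.
    Rows are measured from the top of the image; a horizon above the
    image centre corresponds to a positive (upward) pitch here."""
    offset = (image_height / 2.0) - horizon_row
    return math.atan2(offset, focal_px)

# Example: a 60-degree vertical FoV prediction for a 512-pixel-tall image.
f = fov_to_focal(60.0, 512)              # about 443.4 px
pitch = horizon_to_pitch(256.0, 512, f)  # horizon at centre -> zero pitch
```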

High-quality denoising of Monte Carlo low-sample renderings remains a critical challenge for practical interactive ray tracing. We present a new learning-based denoiser that achieves state-of-the-art quality and runs at interactive rates. Our model processes individual path-traced samples with a lightweight neural network to extract per-pixel feature vectors. The rest of our pipeline operates in pixel space. We define a novel pairwise affinity over the features in a neighborhood, from which we assemble dilated spatial...

10.1145/3450626.3459793 article EN ACM Transactions on Graphics 2021-07-19
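The core idea of a feature-affinity denoiser can be sketched in a few lines: filter weights for each pixel come from the similarity of learned per-pixel features to their neighbors. This is a minimal NumPy illustration with a Gaussian affinity and wrap-around borders, not the paper's actual (dilated, learned) kernel construction.

```python
import numpy as np

def affinity_kernel(features, radius=1, bandwidth=1.0):
    """Build per-pixel normalized filter weights from feature affinities.
    features: (H, W, C) per-pixel feature vectors.
    Affinity between pixels p and q: exp(-||f_p - f_q||^2 / bandwidth)."""
    H, W, _ = features.shape
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)]
    weights = np.zeros((H, W, len(offsets)))
    for k, (dy, dx) in enumerate(offsets):
        shifted = np.roll(features, shift=(dy, dx), axis=(0, 1))
        d2 = np.sum((features - shifted) ** 2, axis=-1)
        weights[..., k] = np.exp(-d2 / bandwidth)
    weights /= weights.sum(axis=-1, keepdims=True)  # normalize per pixel
    return offsets, weights

def apply_kernel(image, offsets, weights):
    """Filter an (H, W, 3) image with the per-pixel kernels."""
    out = np.zeros_like(image, dtype=float)
    for k, (dy, dx) in enumerate(offsets):
        out += weights[..., k:k+1] * np.roll(image, (dy, dx), axis=(0, 1))
    return out
```

With constant features the weights reduce to a plain box filter; dissimilar features down-weight neighbors across edges, which is what preserves detail while averaging noise.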

We propose a method to extrapolate a 360° field of view from a single image that allows for user-controlled synthesis of the out-painted content. To do so, we propose improvements to an existing GAN-based in-painting architecture for out-painting of a panoramic image representation. Our method obtains state-of-the-art results and outperforms previous methods on standard image quality metrics. To allow controlled out-painting, we introduce a novel guided co-modulation framework, which drives the generation process with a common pretrained...

10.1109/3dv57658.2022.00059 article EN 2022 International Conference on 3D Vision (3DV) 2022-09-01

We present a neural network that predicts HDR outdoor illumination from a single LDR image. At the heart of our work is a method to accurately learn HDR lighting from LDR panoramas under any weather condition. We achieve this by training another CNN (on a combination of synthetic and real images) to take as input an LDR panorama and regress the parameters of the Lalonde-Matthews outdoor illumination model. This model is trained such that it a) reconstructs the appearance of the sky, and b) renders the appearance of objects lit by this illumination. We use it to label a large-scale dataset with lighting parameters and use them to train our single image...

10.48550/arxiv.1906.04909 preprint EN other-oa arXiv (Cornell University) 2019-01-01
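A parametric outdoor lighting model of this kind boils down to a small set of numbers (sun direction and intensity, sky terms) from which radiance can be rendered in any view direction. The toy below is a deliberately simplified two-term stand-in — a sun lobe plus a constant sky term — to show the shape of such a model; it is not the actual Lalonde-Matthews formulation.

```python
import numpy as np

def sky_radiance(view_dirs, sun_dir, w_sun, kappa, w_sky):
    """Toy parametric sky: an exponential sun lobe plus a constant sky term.
    view_dirs: (N, 3) unit vectors; sun_dir: (3,) unit vector.
    kappa controls how concentrated the sun lobe is."""
    cos_gamma = view_dirs @ sun_dir                  # cosine of angle to sun
    sun = w_sun * np.exp(kappa * (cos_gamma - 1.0))  # peaks at the sun
    return sun + w_sky

# Radiance looking straight at the sun vs. directly away from it:
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
sun = np.array([0.0, 0.0, 1.0])
r = sky_radiance(dirs, sun, w_sun=10.0, kappa=50.0, w_sky=0.5)
```

Because the model is low-dimensional, a CNN can regress its parameters directly, and the same parameters can both reconstruct the sky image and relight virtual objects.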

We introduce UprightNet, a learning-based approach for estimating the 2DoF camera orientation from a single RGB image of an indoor scene. Unlike recent methods that leverage deep learning to perform black-box regression of orientation parameters, we propose an end-to-end framework that incorporates explicit geometric reasoning. In particular, we design a network that predicts two representations of scene geometry, in both the local camera and global reference coordinate systems, and solves for the camera orientation as the rotation that best aligns these predictions via...

10.48550/arxiv.1908.07070 preprint EN other-oa arXiv (Cornell University) 2019-01-01
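The "rotation that best aligns two sets of predictions" step is the classic orthogonal Procrustes problem, solvable in closed form with an SVD. A minimal version (assuming row-vector point sets; this is the standard Kabsch construction, not UprightNet's specific differentiable module):

```python
import numpy as np

def best_rotation(local, world):
    """Rotation R minimizing sum ||world_i - R @ local_i||^2 over unit
    vectors stored as rows of (N, 3) arrays (orthogonal Procrustes)."""
    H = world.T @ local                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Recover a known 90-degree rotation about z from rotated basis vectors:
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
local = np.eye(3)
world = local @ R_true.T
R_est = best_rotation(local, world)
```

Because the SVD is differentiable, this alignment step can sit inside an end-to-end trained network, which is the structural point the abstract makes.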

We present a method for augmenting photo-realistic 3D scene assets by automatically recognizing, matching, and swapping their materials. Our approach proposes a material matching pipeline for the efficient replacement of unknown materials with perceptually similar PBR materials from a database, enabling the quick creation of many variations of a given synthetic scene. At the heart of this approach is a novel material similarity feature that is learnt, in conjunction with optimal lighting conditions, by fine-tuning a deep neural network on a material classification task using our...

10.1109/cvprw56347.2022.00221 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2022-06-01
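Once a similarity feature exists, the matching step itself is a nearest-neighbor lookup in embedding space. A sketch, assuming features are already extracted (the embedding, metric, and function names here are illustrative, not the paper's API):

```python
import numpy as np

def match_material(query_feat, database_feats):
    """Index of the database material whose feature embedding has the
    highest cosine similarity to the query material's embedding.
    query_feat: (C,); database_feats: (N, C)."""
    q = query_feat / np.linalg.norm(query_feat)
    db = database_feats / np.linalg.norm(database_feats, axis=1, keepdims=True)
    return int(np.argmax(db @ q))

# Two database materials; the query is closest to the first one:
db = np.array([[1.0, 0.0], [0.0, 1.0]])
idx = match_material(np.array([0.9, 0.1]), db)
```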

Because of the diversity in lighting environments, existing illumination estimation techniques have been designed explicitly for indoor or outdoor environments. Methods have focused specifically on capturing accurate energy (e.g., through parametric models), which emphasizes shading and strong cast shadows; or on producing plausible texture (e.g., with GANs), which prioritizes plausible reflections. Approaches that provide editable lighting capabilities have been proposed, but these tend to rely on simplified models, offering limited realism. In this work,...

10.1109/iccv51070.2023.00682 article EN 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 2023-10-01

Interactive evolutionary design tools enable human intuition and creative decision-making in high-dimensional design domains while leaving technical busywork to the computer. Current algorithms for interactive evolution accept only feedback about entire candidates, not their parts, which can lead to user fatigue. This article describes several case studies in which designers used an enhanced tool with region-of-interest feedback for character animation tasks. This tool is called the Interactive Design with Evolutionary Algorithms and Sensitivity analysis (IDEAS) tool....

10.1162/leon_a_01102 article EN Leonardo 2015-06-09
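The distinguishing feature described here — feedback on parts of a candidate rather than the whole — can be sketched as a mutation step restricted to user-selected genes. This is a toy illustration of the region-of-interest idea, not the IDEAS tool's actual algorithm; the function names and rates are invented.

```python
import random

def evolve(population, roi, mutate_rate=0.2, rng=random.Random(0)):
    """One interactive-evolution step: mutate only the genes inside the
    user's region of interest (roi: set of gene indices), leaving all
    other genes untouched so satisfactory parts are preserved."""
    next_gen = []
    for genome in population:
        child = list(genome)
        for i in roi:
            if rng.random() < mutate_rate:
                child[i] += rng.gauss(0.0, 0.1)  # small Gaussian perturbation
        next_gen.append(child)
    return next_gen

# Only gene 2 may change; the rest of each genome is guaranteed stable:
pop = evolve([[1.0] * 5], roi={2})
```

Restricting variation to the region the user flagged is exactly what reduces the number of evaluations the user must sit through, addressing the fatigue bottleneck the article discusses.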

10.48550/arxiv.1712.01259 preprint EN other-oa arXiv (Cornell University) 2017-01-01

This paper presents a selection method for use with interactive evolutionary algorithms and sensitivity analysis in spatiotemporal domains. Recent work in the field has made it possible to give feedback to an interactive evolutionary system at a finer granularity than the typical wholesale selection method. This recent development allows the user to drive the search in a more precise way by allowing him to select part of a phenotype to indicate its fitness. This has the potential to alleviate the human fatigue bottleneck, and so seems ideally suited to domains that vary in both space and time, such as...

10.1145/2463372.2463414 article EN 2013-07-06

Most 3D reconstruction methods can only recover scene properties up to a global scale ambiguity. We present a novel approach to single view metrology that can recover the absolute scale of a scene, represented by the heights of objects or the camera height above the ground, as well as camera parameters of orientation and field of view, using just a monocular image acquired in unconstrained conditions. Our method relies on data-driven priors learned by a deep network specifically designed to imbibe weakly supervised constraints from the interplay of the unknown camera with...

10.48550/arxiv.2007.09529 preprint EN other-oa arXiv (Cornell University) 2020-01-01
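The link between camera height and object heights rests on a classical single view metrology relation: for an object standing on the ground plane, seen by a level pinhole camera, its height follows from the image rows of its top, its bottom, and the horizon. A worked version of that textbook relation (conventions mine, not the paper's code):

```python
def object_height(cam_height, v_horizon, v_top, v_bottom):
    """Height of an object standing on the ground plane, from image rows
    (measured downward from the top of the image) of the horizon and of
    the object's top and bottom, given camera height above the ground.
    Derivation: depth z = f * H / (v_bottom - v_horizon), and the top row
    satisfies (v_top - v_horizon) = f * (H - Y) / z, so the focal length
    cancels out."""
    return cam_height * (v_bottom - v_top) / (v_bottom - v_horizon)

# A person whose feet project 200 rows below the horizon, with head
# 180 rows above the feet, seen from a 1.6 m camera height:
h = object_height(1.6, v_horizon=300, v_top=320, v_bottom=500)  # 1.44 m
```

Conversely, knowing one reference height in the scene pins down the camera height, which is why the two quantities are interchangeable anchors for absolute scale.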

10.2514/6.1989-1295 article EN 28th Structures, Structural Dynamics and Materials Conference 1989-04-03

10.48550/arxiv.2304.13207 preprint EN other-oa arXiv (Cornell University) 2023-01-01

While there have been numerous studies concerning human perception in stereoscopic environments, rules of thumb for cinematography in stereoscopy are not yet well-established. To that aim, we present experiments and results from subject testing in a stereoscopic environment similar to a theater (i.e., a large flat screen without head-tracking). In particular, we wish to empirically identify the thresholds at which different types of backgrounds, referred to in the computer animation industry as matte paintings, can be used while still...

10.1117/12.838980 article EN Proceedings of SPIE, the International Society for Optical Engineering/Proceedings of SPIE 2010-02-04
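The geometry behind such threshold experiments is compact: on a flat theater screen, the on-screen parallax of a point depends only on the interocular separation, the screen distance, and the point's depth. A standard-formula sketch (variable names mine) showing why very distant backgrounds converge to a fixed maximum parallax, which is what makes flat matte paintings viable at all:

```python
def screen_parallax(eye_sep, screen_dist, obj_dist):
    """On-screen horizontal parallax (same units as eye_sep) for a point
    at obj_dist, viewed on a flat screen at screen_dist with no
    head-tracking. Positive parallax: behind the screen plane;
    negative: in front of it."""
    return eye_sep * (1.0 - screen_dist / obj_dist)

# 65 mm interocular, screen 10 m away:
p_screen = screen_parallax(0.065, 10.0, 10.0)         # point at screen depth
p_far = screen_parallax(0.065, 10.0, float('inf'))    # infinitely distant background
```

Parallax saturates at the interocular separation as depth grows, so beyond some distance a flat painted background is indistinguishable in disparity from true distant geometry.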

In this paper we present experiments and results pertaining to the perception of depth in stereoscopic viewing of synthetic imagery. In computer animation, typical imagery is highly textured and uses stylized illumination of abstracted material models by stylized light source models. While there have been numerous studies concerning human perceptual capabilities, conventions for staging and cinematography in stereoscopic movies are not yet well-established. Our long-term goal is to measure the effectiveness of various techniques on the human visual system in a theatrical...

10.1117/12.805994 article EN Proceedings of SPIE, the International Society for Optical Engineering/Proceedings of SPIE 2009-01-27

This paper presents an intuitive method for novice users to interactively design custom populations of stylized, heterogeneous motion from one input motion clip, thus allowing the user to amplify an existing database of motions. We allow users to set up lattice deformers, which are used by a genetic algorithm to manipulate animation channels and create new variations. Our interactive evolutionary design environment allows users to traverse the available space of possible motions, which gradually converges to satisfactory solutions. Each...

10.5220/0002320301270134 article EN cc-by-nc-nd 2009-01-01
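The lattice-deformer idea applied to an animation channel can be reduced to a one-dimensional toy: a handful of control-point offsets, blended across the sampled curve, warp the motion while keeping it smooth. This is only an illustration of the mechanism (names and the linear blend are mine), not the paper's deformer; a genetic algorithm would then evolve the offset vectors.

```python
def deform_channel(values, lattice_offsets):
    """Deform a sampled animation channel with a 1-D lattice: each lattice
    control point carries an offset, blended linearly over the samples,
    so a few genes reshape the whole curve smoothly."""
    n, m = len(values), len(lattice_offsets)
    out = []
    for i, v in enumerate(values):
        t = i / (n - 1) * (m - 1)      # sample position within the lattice
        j = min(int(t), m - 2)         # left control point index
        a = t - j                      # blend factor toward the right point
        out.append(v + (1 - a) * lattice_offsets[j] + a * lattice_offsets[j + 1])
    return out

# Two control points shift the start and end of the curve by different amounts:
warped = deform_channel([0.0, 1.0, 2.0], [0.5, -0.5])
```

Because the genome is the small offset vector rather than the dense curve, each mutation yields a coherent motion variation instead of noise, which is what makes the evolutionary search over motions tractable.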