LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis
KEYWORDS
Image-based lighting
Representation
Shader
DOI:
10.48550/arXiv.2304.03280
Publication Date:
2023-01-01
AUTHORS (8)
ABSTRACT
Neural fields have recently enjoyed great success in representing and rendering 3D scenes. However, most state-of-the-art implicit representations model static or dynamic scenes as a whole, with minor variations. Existing work on learning disentangled world and object neural fields does not consider the problem of composing objects into different world neural fields in a lighting-aware manner. We present Lighting-Aware Neural Fields (LANe) for the compositional synthesis of driving scenes in a physically consistent manner. Specifically, we learn a scene representation that disentangles the static background and transient elements into a world-NeRF and class-specific object-NeRFs, allowing multiple objects to be composed into a scene. Furthermore, both models are explicitly designed to handle lighting variation, which allows us to compose objects into scenes with spatially varying lighting. This is achieved by constructing a light field of the scene and using it in conjunction with a learned shader to modulate the appearance of the object-NeRFs. We demonstrate the performance of our model on a synthetic dataset of diverse lighting conditions rendered with the CARLA simulator, as well as on a novel real-world dataset of cars collected at different times of the day. Our approach outperforms state-of-the-art compositional scene synthesis in this challenging setup, composing object-NeRFs learned in one scene into an entirely different scene whilst still respecting the lighting variations of the novel scene. For more results, please visit our project website https://lane-composition.github.io/.
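The core mechanism the abstract describes (a light field of the scene, queried at a point's world position and fed, together with a learned shader, into the object-NeRF's appearance branch) can be illustrated with a short PyTorch sketch. This is a minimal sketch under our own assumptions: the module names (LightingAwareObjectNeRF, MLP), network widths, and feature dimensions are hypothetical and are not taken from the authors' released implementation.

# Sketch (PyTorch): an object-NeRF whose view-dependent color is modulated
# by a learned shader conditioned on a light-field code sampled at the
# object's world position. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in, d_out, width=64, depth=3):
        super().__init__()
        layers, d = [], d_in
        for _ in range(depth - 1):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers += [nn.Linear(d, d_out)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class LightingAwareObjectNeRF(nn.Module):
    """Object-NeRF whose radiance is re-shaded per scene lighting."""
    def __init__(self, light_dim=16, feat_dim=32):
        super().__init__()
        self.geometry = MLP(3, 1 + feat_dim)       # object-frame point -> (density, feature)
        self.light_field = MLP(3, light_dim)       # world position -> lighting code
        self.shader = MLP(feat_dim + light_dim + 3, 3)  # (feature, light, view dir) -> RGB

    def forward(self, x_obj, x_world, view_dir):
        h = self.geometry(x_obj)
        sigma, feat = torch.relu(h[..., :1]), h[..., 1:]
        light = self.light_field(x_world)          # spatially varying lighting
        rgb = torch.sigmoid(self.shader(torch.cat([feat, light, view_dir], -1)))
        return sigma, rgb

# Query 1024 ray samples: density is lighting-independent, color is re-shaded.
model = LightingAwareObjectNeRF()
x = torch.rand(1024, 3)                            # points in the object frame
sigma, rgb = model(x, x + 0.5, torch.randn(1024, 3))
print(sigma.shape, rgb.shape)                      # (1024, 1) and (1024, 3)

The design choice this sketch mirrors is that geometry (density) depends only on the object-frame coordinate, while appearance is deferred to a shader that sees the scene's lighting code, which is what lets the same object-NeRF be composed into scenes with different, spatially varying illumination.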