Canonical Factors for Hybrid Neural Fields

DOI: 10.48550/arxiv.2308.15461 Publication Date: 2023-01-01
ABSTRACT
Factored feature volumes offer a simple way to build more compact, efficient, and interpretable neural fields, but also introduce biases that are not necessarily beneficial for real-world data. In this work, we (1) characterize the undesirable biases that these architectures have for axis-aligned signals -- they can lead to radiance field reconstruction differences of as high as 2 PSNR -- and (2) explore how learning a set of canonicalizing transformations can improve representations by removing these biases. We show in a two-dimensional model problem that simultaneously learning these transformations together with scene appearance succeeds with drastically improved efficiency. We validate the resulting architectures, which we call TILTED, using image, signed distance, and radiance field reconstruction tasks, where we observe improvements across quality, robustness, compactness, and runtime. Results demonstrate that TILTED can enable capabilities comparable to baselines that are 2x larger, while highlighting weaknesses of neural field evaluation procedures.
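
To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of a CP-style factored 2D feature field whose query coordinates are first mapped through a learned canonicalizing rotation, so the factorization's axis-aligned bias no longer needs to match the orientation of the signal. It is written in JAX only for illustration; all names (theta, line_x, line_y, decode_w, RES, CHANNELS) are hypothetical and the decoder, resolution, and optimization details are assumptions rather than details taken from the paper.

import jax
import jax.numpy as jnp

RES, CHANNELS = 64, 8  # per-axis resolution and feature channels (assumed values)

def init_params(key):
    kx, ky, kw = jax.random.split(key, 3)
    return {
        "theta": jnp.zeros(()),                                   # learned canonicalizing rotation angle
        "line_x": 0.1 * jax.random.normal(kx, (RES, CHANNELS)),   # 1D feature line along x
        "line_y": 0.1 * jax.random.normal(ky, (RES, CHANNELS)),   # 1D feature line along y
        "decode_w": 0.1 * jax.random.normal(kw, (CHANNELS,)),     # linear decoder
    }

def interp_1d(line, coord):
    # Linearly interpolate a (RES, CHANNELS) feature line at coord in [-1, 1].
    t = (coord + 1.0) * 0.5 * (RES - 1)
    lo = jnp.clip(jnp.floor(t).astype(jnp.int32), 0, RES - 2)
    frac = t - lo
    return (1.0 - frac) * line[lo] + frac * line[lo + 1]

def field(params, xy):
    # Evaluate the factored field at a single 2D point in [-1, 1]^2.
    c, s = jnp.cos(params["theta"]), jnp.sin(params["theta"])
    rot = jnp.array([[c, -s], [s, c]])
    xy_canon = rot @ xy                                 # canonicalize coordinates before the factored lookup
    fx = interp_1d(params["line_x"], xy_canon[0])
    fy = interp_1d(params["line_y"], xy_canon[1])
    return jnp.dot(fx * fy, params["decode_w"])         # CP-style per-channel product, then decode

def loss(params, points, targets):
    # The transform and the factored features are optimized jointly from reconstruction loss.
    preds = jax.vmap(lambda p: field(params, p))(points)
    return jnp.mean((preds - targets) ** 2)

grad_fn = jax.jit(jax.grad(loss))
# Hypothetical usage: params = init_params(jax.random.PRNGKey(0)); grads = grad_fn(params, points, targets)

Because the rotation is differentiable, gradients of the reconstruction loss flow into theta as well as the feature lines, which is the sense in which the canonicalizing transformation is learned together with scene appearance.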