Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression
DOI: 10.48550/arxiv.2307.06143
Publication Date: 2023-01-01
AUTHORS (3)
ABSTRACT
Light field is a type of image data that captures 3D scene information by recording light rays emitted from the scene at various positions and orientations. It offers a more immersive perception than classic 2D images, but at the cost of a huge data volume. In this paper, we draw inspiration from the visual characteristics of Sub-Aperture Images (SAIs) and design a compact neural network representation for the light field compression task. The network backbone takes randomly initialized noise as input and is supervised on the SAIs of the target light field. It is composed of two types of complementary kernels: descriptive kernels (descriptors) that store the scene description learned during training, and modulatory kernels (modulators) that control the rendering of different queried perspectives. To further enhance compactness while retaining high quality in the decoded light field, we accordingly introduce modulator allocation and kernel tensor decomposition mechanisms, followed by non-uniform quantization and lossless entropy coding techniques, to finally form an efficient compression pipeline. Extensive experiments demonstrate that our method outperforms other state-of-the-art (SOTA) methods by a significant margin. Moreover, after aligning descriptors, the modulators can be transferred to new light fields for rendering dense views, indicating a potential solution for view synthesis.
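To make the descriptor/modulator split concrete, here is a minimal NumPy sketch of the core idea: a shared descriptor kernel holds scene content, and a small per-view modulator elementwise-modulates it to render each sub-aperture view from the same fixed noise input. This is an illustrative assumption about the mechanism, not the paper's actual architecture; all names (`render_view`, `modulated_kernel`, the kernel shapes) are hypothetical.

```python
import numpy as np


def modulated_kernel(descriptor, modulator):
    # Hypothetical modulation: elementwise product of the shared
    # descriptor with a view-specific modulator. The paper's exact
    # modulation scheme may differ.
    return descriptor * modulator


def render_view(noise, descriptor, modulator):
    # Render one sub-aperture image by a valid 2D cross-correlation
    # of the fixed noise input with the modulated kernel.
    k = modulated_kernel(descriptor, modulator)
    kh, kw = k.shape
    h, w = noise.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(noise[i:i + kh, j:j + kw] * k)
    return out


rng = np.random.default_rng(0)
noise = rng.standard_normal((8, 8))       # shared random input
descriptor = rng.standard_normal((3, 3))  # scene content, shared across views
mod_a = rng.standard_normal((3, 3))       # modulator for view A
mod_b = rng.standard_normal((3, 3))       # modulator for view B

view_a = render_view(noise, descriptor, mod_a)
view_b = render_view(noise, descriptor, mod_b)
print(view_a.shape)  # (6, 6)
```

Because only the small modulators differ per view, the compressed payload is dominated by the shared descriptors plus one lightweight modulator per perspective, which is what makes the representation compact.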