EchoScene: Indoor Scene Generation via Information Echo over Scene Graph Diffusion

DOI: 10.48550/arxiv.2405.00915 Publication Date: 2024-05-01
ABSTRACT
We present EchoScene, an interactive and controllable generative model that generates 3D indoor scenes conditioned on scene graphs. EchoScene leverages a dual-branch diffusion model that dynamically adapts to scene graphs. Existing methods struggle to handle scene graphs due to varying numbers of nodes, multiple edge combinations, and manipulator-induced node-edge operations. EchoScene overcomes this by associating each node with a denoising process and enabling collaborative information exchange, which enhances controllable and consistent generation that is aware of global constraints. This is achieved through an information echo scheme in both the shape and layout branches. At every denoising step, all processes share their denoising data with an information exchange unit that combines these updates using graph convolution. The scheme ensures that the denoising processes are influenced by a holistic understanding of the scene graph, facilitating the generation of globally coherent scenes. The resulting scenes can be manipulated during inference by editing the input scene graph and sampling the noise in the diffusion model. Extensive experiments validate our approach, which maintains scene controllability and surpasses previous methods in generation fidelity. Moreover, the generated scenes are of high quality and thus directly compatible with off-the-shelf texture generation. Code and trained models are open-sourced.
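To make the information echo scheme concrete, the following is a minimal NumPy sketch, not the authors' implementation: each scene-graph node keeps its own denoising latent, and at every step an exchange unit aggregates neighbor states with a simple graph convolution before the per-node update. The helper names (`graph_conv`, `echo_denoise_step`), the toy update rule, and all tensor shapes are illustrative assumptions.

```python
import numpy as np

def graph_conv(latents, adjacency, weight):
    # Toy graph convolution standing in for the exchange unit:
    # mix each node's latent with the mean of its neighbors' latents.
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = adjacency @ latents / deg
    return np.tanh((latents + neighbor_mean) @ weight)

def echo_denoise_step(latents, adjacency, weight, noise_scale=0.1, seed=0):
    # "Echo": every per-node denoising process broadcasts its intermediate
    # state; the exchange unit combines them via graph convolution, and the
    # combined, graph-aware signal conditions each node's next update.
    rng = np.random.default_rng(seed)
    shared = graph_conv(latents, adjacency, weight)  # information exchange
    # Toy denoising update: pull each latent toward the shared signal.
    return latents + 0.5 * (shared - latents) \
        + noise_scale * rng.standard_normal(latents.shape)

# Usage on a 3-node scene graph (e.g. table with two chairs, fully illustrative):
latents = np.random.default_rng(42).standard_normal((3, 4))  # 3 nodes, dim 4
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [1, 0, 0]], dtype=float)
weight = np.eye(4)
for t in range(5):
    latents = echo_denoise_step(latents, adjacency, weight, seed=t)
```

The point of the sketch is the data flow: because every node's update passes through the shared graph convolution, no node is denoised in isolation, which is what lets the real model enforce global constraints such as non-overlapping layouts.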