LRM: Large Reconstruction Model for Single Image to 3D
Keywords:
Image stitching, 3D model, Generative model
DOI:
10.48550/arxiv.2311.04400
Publication Date:
2023-11-08
AUTHORS (10)
ABSTRACT
We propose the first Large Reconstruction Model (LRM) that predicts the 3D model of an object from a single input image within just 5 seconds. In contrast to many previous methods that are trained on small-scale datasets such as ShapeNet in a category-specific fashion, LRM adopts a highly scalable transformer-based architecture with 500 million learnable parameters to directly predict a neural radiance field (NeRF) from the input image. We train our model in an end-to-end manner on massive multi-view data containing around 1 million objects, including both synthetic renderings from Objaverse and real captures from MVImgNet. This combination of a high-capacity model and large-scale training data empowers the model to be highly generalizable and produce high-quality reconstructions from various testing inputs, including real-world in-the-wild captures and images created by generative models. Video demos and interactable meshes can be found on the project webpage: https://yiconghong.me/LRM.
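The abstract outlines the core design: a large transformer maps features from a single input image to a NeRF representation of the object. Below is a minimal PyTorch sketch of that idea under stated assumptions, not the authors' implementation: patch tokens from a ViT-style image encoder are cross-attended by learnable triplane query tokens, and a small MLP decodes sampled triplane features into density and color. All module names, shapes, and hyperparameters (plane resolution, channel counts, layer counts) are illustrative assumptions.

import torch
import torch.nn as nn

class ImageToTriplane(nn.Module):
    """Hypothetical LRM-style image-to-triplane transformer (illustrative only)."""
    def __init__(self, img_dim=768, dim=512, n_layers=4, plane_res=32, plane_ch=32):
        super().__init__()
        self.plane_res, self.plane_ch = plane_res, plane_ch
        # One learnable query token per triplane cell (3 planes x res x res).
        self.queries = nn.Parameter(torch.randn(3 * plane_res * plane_res, dim) * 0.02)
        self.img_proj = nn.Linear(img_dim, dim)  # project encoder patch tokens
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.to_plane = nn.Linear(dim, plane_ch)  # token -> triplane feature vector

    def forward(self, patch_tokens):                 # (B, N, img_dim)
        ctx = self.img_proj(patch_tokens)            # (B, N, dim)
        q = self.queries.unsqueeze(0).expand(patch_tokens.size(0), -1, -1)
        tokens = self.decoder(q, ctx)                # queries cross-attend to image tokens
        planes = self.to_plane(tokens)               # (B, 3*R*R, C)
        B = planes.size(0)
        return planes.view(B, 3, self.plane_res, self.plane_res, self.plane_ch)

class TriplaneNeRF(nn.Module):
    """Tiny MLP decoding sampled triplane features to (density, RGB)."""
    def __init__(self, plane_ch=32, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * plane_ch, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (sigma, r, g, b) per query point
        )

    def forward(self, feats_xy, feats_xz, feats_yz):  # each (B, P, C)
        return self.mlp(torch.cat([feats_xy, feats_xz, feats_yz], dim=-1))

# Usage with random stand-ins for real ViT patch tokens:
model = ImageToTriplane()
planes = model(torch.randn(2, 196, 768))  # e.g. 14x14 ViT patch grid
print(planes.shape)                       # torch.Size([2, 3, 32, 32, 32])

nerf = TriplaneNeRF()
sigma_rgb = nerf(*[torch.randn(2, 100, 32) for _ in range(3)])
print(sigma_rgb.shape)                    # torch.Size([2, 100, 4])

In an actual pipeline, the triplane features at each 3D sample point would be bilinearly interpolated from the three planes and the resulting (sigma, rgb) values composited by volume rendering; those steps are omitted here for brevity.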