TailorMe: Self-Supervised Learning of an Anatomically Constrained Volumetric Human Shape Model

FOS: Computer and information sciences
Subjects: Graphics (cs.GR); Computer Vision and Pattern Recognition (cs.CV)
ACM classes: I.3.0; I.5.1
DOI: 10.48550/arxiv.2312.02173
Publication Date: 2024-04-30
ABSTRACT
Human shape spaces have been extensively studied, as they are a core element of human shape and pose inference tasks. Classic methods for creating a human shape model register a surface template mesh to a database of 3D scans and use dimensionality reduction techniques, such as Principal Component Analysis, to learn a compact representation. While these shape models enable global shape modifications by correlating anthropometric measurements with the learned subspace, they only provide limited localized shape control. We instead register a volumetric anatomical template, consisting of skeleton bones and soft tissue, to the surface scans of the CAESAR database. We further enlarge our training data to the full Cartesian product of all skeletons and all soft tissues using physically plausible volumetric deformation transfer. This data is then used to learn an anatomically constrained volumetric human shape model in a self-supervised fashion. The resulting TailorMe model enables shape sampling, localized shape manipulation, and fast inference from given surface scans.
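The classic surface-based pipeline referenced above (register a template mesh to scans, then apply PCA to learn a compact shape space) can be illustrated with a minimal sketch. The array shapes, function names, and toy data below are illustrative assumptions, not the authors' implementation or the TailorMe model itself.

    # Minimal sketch of a PCA surface shape space, assuming registered
    # template meshes with vertex-wise correspondence across subjects.
    import numpy as np

    def fit_pca_shape_space(vertices, n_components=10):
        """vertices: (n_subjects, n_verts, 3) registered template meshes."""
        n_subjects = vertices.shape[0]
        X = vertices.reshape(n_subjects, -1)      # flatten to (n_subjects, 3*n_verts)
        mean = X.mean(axis=0)
        Xc = X - mean
        # SVD of the centered data yields the principal shape directions.
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:n_components]            # (n_components, 3*n_verts)
        stddevs = S[:n_components] / np.sqrt(max(n_subjects - 1, 1))
        return mean, components, stddevs

    def sample_shape(mean, components, stddevs, coeffs):
        """coeffs: per-component offsets in units of standard deviations."""
        offsets = (np.asarray(coeffs) * stddevs) @ components
        return (mean + offsets).reshape(-1, 3)

    # Toy usage with random data standing in for registered scans.
    rng = np.random.default_rng(0)
    fake_scans = rng.normal(size=(50, 1000, 3))   # 50 subjects, 1000 vertices
    mean, comps, stds = fit_pca_shape_space(fake_scans, n_components=5)
    new_shape = sample_shape(mean, comps, stds, coeffs=[1.0, -0.5, 0.0, 0.0, 0.0])
    print(new_shape.shape)                        # (1000, 3)

Such a global PCA subspace is what the abstract contrasts with: modifying one coefficient affects the whole body, which is why TailorMe instead learns an anatomically constrained volumetric model with localized shape control.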