Yuelang Xu

ORCID: 0009-0001-6834-8199
Research Areas
  • 3D Shape Modeling and Analysis
  • Face Recognition and Analysis
  • Facial Nerve Paralysis Treatment and Research
  • Generative Adversarial Networks and Image Synthesis
  • Human Pose and Action Recognition
  • Computer Graphics and Visualization Techniques
  • Morphological Variations and Asymmetry
  • Advanced Numerical Analysis Techniques
  • Advanced Vision and Imaging
  • 3D Surveying and Cultural Heritage
  • Medical Image Segmentation Techniques

Tsinghua University
2020-2024

Simon Fraser University
2020

10.1109/cvpr52733.2024.00189 article EN 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024-06-16

With NeRF widely used for facial reenactment, recent methods can recover a photo-realistic 3D head avatar from just a monocular video. Unfortunately, the training process of these NeRF-based methods is quite time-consuming, as the MLP used in NeRF is inefficient and requires too many iterations to converge. To overcome this problem, we propose AvatarMAV, a fast 3D head avatar reconstruction method using Motion-Aware Neural Voxels. AvatarMAV is the first to model both the canonical appearance and the decoupled expression motion by neural voxels for head avatar reconstruction. In...
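
A minimal sketch of the motion-aware neural voxel idea described in this abstract: a basis of motion voxel grids is blended by expression coefficients to predict a per-point offset into a canonical appearance grid. This is an illustration, not the authors' code; the grid resolution, channel counts, and the name `AvatarMAVSketch` are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def sample_voxels(voxels, points):
    """Trilinearly sample a voxel grid (1, C, D, H, W) at points in [-1, 1]^3."""
    n = points.shape[0]
    grid = points.view(1, n, 1, 1, 3)                         # grid_sample expects (N, D, H, W, 3)
    feats = F.grid_sample(voxels, grid, align_corners=True)   # (1, C, n, 1, 1)
    return feats.view(voxels.shape[1], n).t()                 # (n, C)


class AvatarMAVSketch(nn.Module):
    def __init__(self, num_exp=10, res=32, feat_dim=8):
        super().__init__()
        # Canonical appearance voxels plus a basis of expression-motion voxels.
        self.appearance = nn.Parameter(torch.zeros(1, feat_dim, res, res, res))
        self.motion_basis = nn.Parameter(torch.zeros(num_exp, 3, res, res, res))
        # Tiny decoder from appearance features to RGB + density.
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, points, exp_coeffs):
        # Blend the motion voxel grids by the expression coefficients (num_exp,).
        motion_voxels = (exp_coeffs.view(-1, 1, 1, 1, 1) * self.motion_basis).sum(0, keepdim=True)
        offsets = sample_voxels(motion_voxels, points)         # per-point 3D offset
        canonical_pts = (points + offsets).clamp(-1.0, 1.0)    # warp into canonical space
        feats = sample_voxels(self.appearance, canonical_pts)
        return self.decoder(feats)                             # (n, 4): RGB + density


# points = torch.rand(1024, 3) * 2 - 1; exp = torch.rand(10)
# rgb_sigma = AvatarMAVSketch()(points, exp)
```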

10.1145/3588432.3591567 preprint EN cc-by-nc-sa 2023-07-19

Existing approaches to animatable NeRF-based head avatars are either built upon face templates or use the expression coefficients of templates as the driving signal. Despite the promising progress, their performances are heavily bound by the expression power and the tracking accuracy of the templates. In this work, we present LatentAvatar, an expressive neural head avatar driven by latent expression codes. Such codes are learned in an end-to-end and self-supervised manner without templates, enabling our method to get rid of these issues. To achieve this, we leverage a NeRF to learn...
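
The sketch below illustrates the template-free driving idea mentioned in the abstract: an image encoder predicts a per-frame latent expression code, and a coordinate MLP is conditioned on that code instead of template expression coefficients. The architecture, layer sizes, and class names here are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn


class ExpressionEncoder(nn.Module):
    """Maps a face image (B, 3, 64, 64) to a latent expression code (B, code_dim)."""
    def __init__(self, code_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, code_dim),
        )

    def forward(self, image):
        return self.net(image)


class LatentConditionedField(nn.Module):
    """Radiance-field-style MLP driven by the latent code instead of a face template."""
    def __init__(self, code_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),  # RGB + density per sample point
        )

    def forward(self, points, code):
        code = code.expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, code], dim=-1))


# Self-supervised idea: render with the code predicted from frame t and minimise a
# photometric loss against frame t, so no face template or tracker is required.
# encoder, field = ExpressionEncoder(), LatentConditionedField()
# code = encoder(torch.rand(1, 3, 64, 64))
# out = field(torch.rand(2048, 3), code)
```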

10.1145/3588432.3591545 preprint EN cc-by-nc-sa 2023-07-19

We introduce an end-to-end learnable technique to robustly identify feature edges in 3D point cloud data. We represent these edges as a collection of parametric curves (i.e., lines, circles, and B-splines). Accordingly, our deep neural network, coined PIE-NET, is trained for parametric inference of edges. The network relies on a "region proposal" architecture, where a first module proposes an over-complete collection of edge and corner points, and a second module ranks each proposal to decide whether it should be considered. We train and evaluate our method on the ABC...
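
A minimal sketch of the two-stage "region proposal" idea described above: per-point proposal of edge/corner candidates, followed by ranking of a pooled proposal. It is not PIE-NET itself; the point features here come from a plain per-point MLP rather than a point-cloud backbone, and all sizes are assumptions.

```python
import torch
import torch.nn as nn


class EdgeProposalSketch(nn.Module):
    def __init__(self, feat_dim=64, proposal_size=32):
        super().__init__()
        self.proposal_size = proposal_size
        # Stage 1: per-point features and edge/corner logits.
        self.point_mlp = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                       nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.point_head = nn.Linear(feat_dim, 2)          # (edge, corner) logits per point
        # Stage 2: score a pooled proposal built from candidate points.
        self.rank_head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, points):
        feats = self.point_mlp(points)                    # (N, feat_dim)
        point_logits = self.point_head(feats)             # (N, 2)
        edge_prob = point_logits[:, 0].sigmoid()
        # Form an over-complete proposal from the top-scoring candidate points.
        k = min(self.proposal_size, points.shape[0])
        top_idx = edge_prob.topk(k).indices
        proposal_feat = feats[top_idx].mean(dim=0, keepdim=True)   # crude pooling
        proposal_score = self.rank_head(proposal_feat)             # keep / reject logit
        return point_logits, proposal_score


# logits, score = EdgeProposalSketch()(torch.rand(4096, 3))
```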

10.48550/arxiv.2007.04883 preprint EN other-oa arXiv (Cornell University) 2020-01-01

One crucial aspect of 3D head avatar reconstruction lies in the details of facial expressions. Although recent NeRF-based photo-realistic methods achieve high-quality rendering, they still encounter challenges in retaining intricate expression details because they overlook potential expression-specific variations at different spatial positions when conditioning the radiance field. Motivated by this observation, we introduce a novel Spatially-Varying Expression (SVE) conditioning. The SVE can be obtained by a simple MLP-based...
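
A minimal sketch of spatially-varying expression conditioning as described above: instead of feeding one global expression vector to every sample point, a small MLP turns (point position, expression code) into a position-dependent expression feature before conditioning the radiance field. The dimensions and the two-MLP split are assumptions, not the paper's exact SVE generation network.

```python
import torch
import torch.nn as nn


class SVEConditionedField(nn.Module):
    def __init__(self, exp_dim=32, sve_dim=16):
        super().__init__()
        # Produces a spatially-varying expression feature per sample point.
        self.sve_net = nn.Sequential(
            nn.Linear(3 + exp_dim, 64), nn.ReLU(), nn.Linear(64, sve_dim))
        # Radiance field conditioned on position + SVE feature.
        self.field = nn.Sequential(
            nn.Linear(3 + sve_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 4))

    def forward(self, points, exp_code):
        exp = exp_code.expand(points.shape[0], -1)
        sve = self.sve_net(torch.cat([points, exp], dim=-1))   # varies with position
        return self.field(torch.cat([points, sve], dim=-1))    # RGB + density


# out = SVEConditionedField()(torch.rand(2048, 3), torch.rand(1, 32))
```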

10.1609/aaai.v38i5.28256 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24

Creating high-fidelity 3D human head avatars is crucial for applications in VR/AR, digital humans, and film production. Recent advances have leveraged morphable face models to generate animated head avatars from easily accessible data, representing varying identities and expressions within a low-dimensional parametric space. However, existing methods often struggle with modeling complex appearance details, e.g., hairstyles, and suffer from low rendering quality and efficiency. In this paper we introduce a novel approach,...
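
The abstract above refers to morphable face models that span identities and expressions with a low-dimensional parametric space. Below is a generic sketch of that linear morphable-model formulation, not this paper's method; the basis sizes are placeholders and the bases are random only to keep the sketch runnable.

```python
import torch


class LinearMorphableHead:
    def __init__(self, num_verts=5023, id_dim=100, exp_dim=50):
        # Mean shape plus identity / expression bases (normally learned from scans).
        self.mean = torch.zeros(num_verts, 3)
        self.id_basis = torch.randn(id_dim, num_verts, 3) * 1e-3
        self.exp_basis = torch.randn(exp_dim, num_verts, 3) * 1e-3

    def vertices(self, id_coeffs, exp_coeffs):
        # V = mean + sum_i alpha_i * B_id_i + sum_j beta_j * B_exp_j
        v = self.mean.clone()
        v += torch.einsum('i,ivk->vk', id_coeffs, self.id_basis)
        v += torch.einsum('j,jvk->vk', exp_coeffs, self.exp_basis)
        return v                                   # (num_verts, 3) head mesh


# head = LinearMorphableHead()
# verts = head.vertices(torch.randn(100), torch.randn(50))
```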

10.48550/arxiv.2407.15070 preprint EN arXiv (Cornell University) 2024-07-21

Creating high-fidelity 3D head avatars has always been a research hotspot, but there remains a great challenge under lightweight sparse-view setups. In this paper, we propose Gaussian Head Avatar, represented by controllable 3D Gaussians, for high-fidelity head avatar modeling. We optimize the neutral Gaussians and a fully learned MLP-based deformation field to capture complex expressions. The two parts benefit each other, and thus our method can model fine-grained dynamic details while ensuring expression accuracy. Furthermore,...
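
A minimal sketch of the idea described above: a set of neutral 3D Gaussians (positions plus per-Gaussian attributes) is deformed by a fully learned, MLP-based field conditioned on an expression code. The attribute layout and sizes are assumptions, and rasterisation of the Gaussians is omitted.

```python
import torch
import torch.nn as nn


class GaussianHeadSketch(nn.Module):
    def __init__(self, num_gaussians=10000, exp_dim=32):
        super().__init__()
        # Neutral (expression-free) Gaussian parameters, optimised jointly.
        self.xyz = nn.Parameter(torch.randn(num_gaussians, 3) * 0.1)
        self.color = nn.Parameter(torch.zeros(num_gaussians, 3))
        self.opacity = nn.Parameter(torch.zeros(num_gaussians, 1))
        # Fully learned deformation field: (neutral position, expression) -> offset.
        self.deform = nn.Sequential(
            nn.Linear(3 + exp_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 3))

    def forward(self, exp_code):
        exp = exp_code.expand(self.xyz.shape[0], -1)
        offset = self.deform(torch.cat([self.xyz, exp], dim=-1))
        deformed_xyz = self.xyz + offset          # expression-dependent positions
        return deformed_xyz, self.color, self.opacity.sigmoid()


# xyz, rgb, alpha = GaussianHeadSketch()(torch.zeros(1, 32))
```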

10.48550/arxiv.2312.03029 preprint EN other-oa arXiv (Cornell University) 2023-01-01

One crucial aspect of 3D head avatar reconstruction lies in the details of facial expressions. Although recent NeRF-based photo-realistic methods achieve high-quality rendering, they still encounter challenges in retaining intricate expression details because they overlook potential expression-specific variations at different spatial positions when conditioning the radiance field. Motivated by this observation, we introduce a novel Spatially-Varying Expression (SVE) conditioning. The SVE can be obtained by a simple MLP-based...

10.48550/arxiv.2310.06275 preprint EN other-oa arXiv (Cornell University) 2023-01-01

With NeRF widely used for facial reenactment, recent methods can recover a photo-realistic 3D head avatar from just a monocular video. Unfortunately, the training process of these NeRF-based methods is quite time-consuming, as the MLP used in NeRF is inefficient and requires too many iterations to converge. To overcome this problem, we propose AvatarMAV, a fast 3D head avatar reconstruction method using Motion-Aware Neural Voxels. AvatarMAV is the first to model both the canonical appearance and the decoupled expression motion by neural voxels for head avatar reconstruction. In...

10.48550/arxiv.2211.13206 preprint EN cc-by-nc-nd arXiv (Cornell University) 2022-01-01