InvertAvatar: Incremental GAN Inversion for Generalized Head Avatars

 

Xiaochen Zhao*, Jingxiang Sun*, Lizhen Wang, Jinli Suo, and Yebin Liu

Tsinghua University

SIGGRAPH 2024 Conference

[Paper] [Code] [Video]

 

Abstract

While high fidelity and efficiency are central to the creation of digital head avatars, recent methods relying on 2D or 3D generative models often suffer from limitations such as shape distortion, expression inaccuracy, and identity flickering. Additionally, existing one-shot inversion techniques fail to fully leverage multiple input images for detailed feature extraction. We propose a novel framework, Incremental 3D GAN Inversion, that enhances avatar reconstruction by accumulating fidelity across multiple frames, so that reconstruction quality improves with the number of input frames. Our method introduces an animatable 3D GAN prior with two crucial modifications for enhanced expression controllability, alongside a neural texture encoder that organizes the texture feature space according to UV parameterization. Unlike traditional techniques, our architecture emphasizes pixel-aligned image-to-image translation, mitigating the need to learn correspondences between observation and canonical spaces. Furthermore, we incorporate ConvGRU-based recurrent networks to aggregate temporal information from multiple frames, improving the reconstruction of geometry and texture detail. The proposed paradigm achieves state-of-the-art performance on one-shot and few-shot avatar animation tasks.
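
For intuition, the ConvGRU-based multi-frame aggregation described above can be pictured with a small sketch. The PyTorch snippet below is a minimal illustration rather than the released implementation; the class name ConvGRUCell, the feature shapes, and the per-frame update loop are assumptions made only to show how per-frame feature maps might be fused incrementally into a running canonical state.

# Minimal ConvGRU aggregation sketch (illustrative only, not the authors' code).
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell operating on 2D feature maps."""

    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        # Update (z) and reset (r) gates, computed jointly from input and hidden state.
        self.gates = nn.Conv2d(in_channels + hidden_channels, 2 * hidden_channels,
                               kernel_size, padding=padding)
        # Candidate hidden state.
        self.candidate = nn.Conv2d(in_channels + hidden_channels, hidden_channels,
                                   kernel_size, padding=padding)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

if __name__ == "__main__":
    # Hypothetical per-frame feature maps of shape (B, C, H, W).
    B, C, H, W = 1, 64, 128, 128
    cell = ConvGRUCell(in_channels=C, hidden_channels=C)
    state = torch.zeros(B, C, H, W)                     # running aggregated state
    for frame_features in torch.randn(4, B, C, H, W):   # e.g. four source frames
        state = cell(frame_features, state)             # refine state with each new frame
    print(state.shape)  # torch.Size([1, 64, 128, 128])

With this kind of recurrent fusion the aggregated state keeps a fixed size no matter how many frames are observed, which is what lets reconstruction quality grow with frame count without changing the downstream decoder.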

 

 

Fig 1. Given one or more source images, our method reconstructs photorealistic 3D facial avatars in under one second, enabling precise control over full-head rotation and subtle expressions such as lopsided grins and eye gaze.

 


Results

Technical Paper


Demo Video


Citation

 

@article{zhao2023invertavatar,
  title={InvertAvatar: Incremental GAN Inversion for Generalized Head Avatars},
  author={Zhao, Xiaochen and Sun, Jingxiang and Wang, Lizhen and Suo, Jinli and Liu, Yebin},
  journal={arXiv preprint arXiv:2312.02222},
  year={2023}
}


Acknowledgements

 

This work was supported by the National Key R&D Program of China (2022YFF0902200) and NSFC projects No. 62125107 and No. 61827805.