PVA: Pixel-aligned Volumetric Avatars
Amit Raj1
Michael Zollhöfer2
Tomas Simon2
Jason Saragih2
Shunsuke Saito2
James Hays1
Stephen Lombardi2

1 Georgia Institute of Technology
2 Facebook Reality Labs



Acquisition and rendering of photorealistic human heads is a highly challenging research problem of particular importance for virtual telepresence. Currently, the highest quality is achieved by volumetric approaches trained in a person-specific manner on multi-view data. These models better represent fine structure, such as hair, compared to simpler mesh-based models. Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters. While such architectures achieve impressive rendering quality, they cannot easily be extended to the multi-identity setting. In this paper, we devise a novel approach for predicting volumetric avatars of the human head given just a small number of inputs. We enable generalization across identities with a novel parameterization that combines neural radiance fields with local, pixel-aligned features extracted directly from the inputs, thus sidestepping the need for very deep or complex networks. Our approach is trained end-to-end solely on a photometric re-rendering loss, without requiring explicit 3D supervision. We demonstrate that our approach outperforms the existing state of the art in terms of quality and is able to generate faithful facial expressions in a multi-identity setting.
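The core idea of conditioning a radiance field on pixel-aligned features can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; all sizes, the toy camera intrinsics, and the tiny random-weight MLP are hypothetical stand-ins. A query point in 3D is projected into an input view, a feature is bilinearly sampled from that view's feature map (here a random array standing in for a CNN encoder's output), and the concatenation of positional encoding and sampled feature is decoded to a density and color:

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, num_freqs=4):
    # NeRF-style encoding: sin/cos of the coordinates at increasing frequencies.
    freqs = 2.0 ** np.arange(num_freqs)                # (F,)
    angles = x[..., None] * freqs                      # (..., 3, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)              # (..., 3 * 2 * F)

def sample_pixel_aligned_feature(feat_map, uv):
    # Bilinearly sample a (H, W, C) feature map at continuous pixel coords (u, v).
    H, W, _ = feat_map.shape
    u = np.clip(uv[0], 0, W - 1 - 1e-6)
    v = np.clip(uv[1], 0, H - 1 - 1e-6)
    u0, v0 = int(u), int(v)
    du, dv = u - u0, v - v0
    return (feat_map[v0, u0]         * (1 - du) * (1 - dv)
          + feat_map[v0, u0 + 1]     * du       * (1 - dv)
          + feat_map[v0 + 1, u0]     * (1 - du) * dv
          + feat_map[v0 + 1, u0 + 1] * du       * dv)

# Hypothetical sizes: a 16x16 feature map with 8 channels per input view.
H, W, C = 16, 16, 8
feat_map = rng.normal(size=(H, W, C))       # stands in for a CNN feature map

# Project a 3D query point into the input image with toy intrinsics.
point = np.array([0.3, -0.2, 1.0])
K = np.array([[8.0, 0.0, 8.0],
              [0.0, 8.0, 8.0],
              [0.0, 0.0, 1.0]])
proj = K @ point
uv = proj[:2] / proj[2]

feature = sample_pixel_aligned_feature(feat_map, uv)
enc = positional_encoding(point)

# Tiny 2-layer MLP mapping [encoding, local feature] -> (density, rgb).
x = np.concatenate([enc, feature])
W1 = rng.normal(size=(x.size, 32)) * 0.1
W2 = rng.normal(size=(32, 4)) * 0.1
h = np.maximum(x @ W1, 0)                   # ReLU
out = h @ W2
density = np.log1p(np.exp(out[0]))          # softplus -> non-negative density
rgb = 1.0 / (1.0 + np.exp(-out[1:]))        # sigmoid -> colors in [0, 1]
```

Because the decoder sees features sampled at each query point's projection rather than a single global identity code, the same network weights can generalize across identities; rendering then proceeds by standard volumetric integration of the predicted densities and colors along camera rays.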


Avatars with different expressions

Paper and Supplementary Material

Amit Raj, Michael Zollhöfer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi.
PVA: Pixel-aligned Volumetric Avatars
(hosted on arXiv)


This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project