BlendFields: Few-Shot Example-Driven Facial Modeling

Figure: Comparison of our approach to VolTeMorph.

Abstract

Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data that is not publicly accessible to the research community, or fail to capture fine details because they rely on geometric face models whose mesh discretization and linear deformation are designed to model only coarse geometry and therefore cannot represent fine-grained texture detail. We introduce a method that bridges this gap by drawing inspiration from traditional computer-graphics techniques. Unseen expressions are modeled by blending appearance from a sparse set of extreme poses. This blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time. We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces.
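To make the blending idea concrete, below is a minimal NumPy sketch of similarity-based appearance blending at a single 3D point. It is not the paper's implementation: the helper names (`volume_change`, `blend_weights`), the softmax weighting, and the `temperature` parameter are illustrative assumptions; only the core idea, measuring local volumetric change via the determinant of a deformation gradient and weighting extreme-pose appearance by similarity, follows the abstract.

```python
import numpy as np

def volume_change(deformation_gradient):
    # Local volumetric change of a deformation: det > 1 means local
    # expansion, det < 1 means local compression. (Hypothetical helper.)
    return np.linalg.det(deformation_gradient)

def blend_weights(test_change, extreme_changes, temperature=0.1):
    # Softmax over negative distances: extreme poses whose local volume
    # change resembles the test expression receive higher weight.
    # (Illustrative choice, not the paper's exact weighting scheme.)
    dists = np.abs(extreme_changes - test_change)
    logits = -dists / temperature
    logits -= logits.max()  # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def blended_appearance(test_change, extreme_changes, extreme_colors):
    # Blend per-expression appearance samples (e.g., RGB radiance at one
    # 3D point) using the similarity-based weights.
    w = blend_weights(test_change, np.asarray(extreme_changes))
    return (w[:, None] * np.asarray(extreme_colors)).sum(axis=0)

# Example: a test expression whose local compression resembles the
# second extreme pose draws most of its appearance from that pose.
F_test = np.diag([0.9, 1.0, 1.0])   # slight local compression
changes = [1.2, 0.92, 1.0]          # det(F) for three extreme poses
colors = [[0.80, 0.60, 0.50],       # per-pose RGB at this point
          [0.70, 0.50, 0.40],
          [0.75, 0.55, 0.45]]
print(blended_appearance(volume_change(F_test), changes, colors))
```

In this toy example the second pose's volume change (0.92) is closest to the test value (0.9), so the blended color lands near that pose's appearance; in the full method such weights would be computed per point over the volume, on top of a smooth volumetric deformation of the face.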

Publication
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Kacper Kania
PhD Student

4th-year PhD Student in Neural Rendering, supported by the Microsoft Research PhD funding programme

Tomasz Trzciński
Principal Investigator
