This paper presents a novel approach for synthesizing facial affect, based on our annotation of 600,000 frames of the 4DFAB database in terms of valence and arousal. The input to this approach is a pair of these emotional state descriptors and a neutral 2D image of the person on whom the corresponding affect will be synthesized. Given this target pair, a set of 3D facial meshes is selected and used to build a blendshape model and generate the new facial affect. To synthesize the affect on the 2D neutral image, 3DMM fitting is performed and the reconstructed face is deformed to generate the target facial expressions. Finally, the new face is rendered into the original image. Both qualitative and quantitative experimental studies illustrate the generation of realistic images when the neutral image is sampled from a variety of well-known databases, such as Aff-Wild, AFEW, Multi-PIE, AFEW-VA, BU-3DFE, and Bosphorus.
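The core blendshape idea mentioned in the abstract can be sketched as follows: a new expressive mesh is the neutral mesh plus a weighted combination of expression deltas. This is a minimal, hypothetical illustration of that general technique, not the authors' actual model; the array shapes, weights, and function names are assumptions made for the example.

```python
import numpy as np

# Toy data standing in for a real face model (illustrative only).
rng = np.random.default_rng(0)
n_vertices = 5          # real face meshes have thousands of vertices
neutral = rng.normal(size=(n_vertices, 3))          # neutral 3D mesh
blendshapes = rng.normal(size=(4, n_vertices, 3))   # expression deltas

def synthesize(weights):
    """Blend expression deltas onto the neutral mesh.

    weights: one coefficient per blendshape; zeros reproduce the
    neutral mesh, larger values exaggerate the expression.
    """
    weights = np.asarray(weights, dtype=float)
    return neutral + np.tensordot(weights, blendshapes, axes=1)

expressive = synthesize([0.5, 0.0, 0.2, 0.0])
print(expressive.shape)  # (5, 3): same topology, deformed vertices
```

In the paper's pipeline such a deformed mesh would then be combined with a 3DMM fit of the input image and rendered back into the original photograph.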
Kollias Dimitrios, Cheng Shiyang, Pantic Maja, Zafeiriou Stefanos
A4 Article in conference proceedings
Place of publication:
Computer Vision – ECCV 2018 Workshops. ECCV 2018, Munich, Germany, September 8-14, 2018, Proceedings, Part II
Kollias D., Cheng S., Pantic M., Zafeiriou S. (2019) Photorealistic Facial Synthesis in the Dimensional Affect Space. In: Leal-Taixé L., Roth S. (eds) Computer Vision – ECCV 2018 Workshops. ECCV 2018. Lecture Notes in Computer Science, vol 11130. Springer, Cham, https://doi.org/10.1007/978-3-030-11012-3_36