Learning to detect genuine versus posed pain from facial expressions using residual generative adversarial networks

We present a novel approach based on a Residual Generative Adversarial Network (R-GAN) to discriminate genuine from posed pain expressions by magnifying the subtle changes in the face. In addition to the adversarial task, the discriminator network in R-GAN estimates the intensity level of the pain. Moreover, we propose a novel Weighted Spatiotemporal Pooling (WSP) to capture and encode the appearance and dynamics of a given video sequence into an image map. In this way, we are able to transform any video into an image map that embeds subtle variations in facial appearance and dynamics, which allows any model pre-trained on still images to be used for video analysis. Our extensive experiments show that the proposed framework achieves promising results compared to state-of-the-art approaches on three benchmark databases, i.e., UNBC-McMaster Shoulder Pain, BioVid Heat Pain, and STOIC.
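The core idea of collapsing a video into a single image map via weighted temporal pooling can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not the authors' WSP: here the weights are simply derived from inter-frame motion energy, whereas the paper defines its own learned spatiotemporal weighting.

```python
import numpy as np

def weighted_temporal_pool(frames):
    """Collapse a video of shape (T, H, W) into one (H, W) image map.

    Illustrative sketch only: each frame is weighted by the magnitude of
    its change from the previous frame, so moments with subtle motion
    contribute more. This is NOT the WSP from the paper, whose exact
    weighting scheme is defined there.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Per-frame motion energy: mean absolute difference from the previous frame.
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    # First frame has no predecessor; give it the average motion energy.
    weights = np.concatenate(([diffs.mean() if diffs.size else 1.0], diffs))
    weights += 1e-8            # avoid an all-zero weight vector for static clips
    weights /= weights.sum()   # normalize to a convex combination over time
    # Weighted average over the time axis yields a single image map.
    return np.tensordot(weights, frames, axes=(0, 0))

# Example: a tiny synthetic "video" of 5 uniform 4x4 frames.
video = np.stack([np.full((4, 4), t, dtype=np.float64) for t in range(5)])
image_map = weighted_temporal_pool(video)
print(image_map.shape)  # (4, 4)
```

The resulting map is a still image, so any network pre-trained on still images can be applied to it directly, which is the property the abstract highlights.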

Authors:
Mohammad Tavakolian, Carlos Guillermo Bermudez Cruces, Abdenour Hadid

Publication type:
A4 Article in conference proceedings

Place of publication:
14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019, 14-18 May 2019, Lille, France

Keywords:
Emotion recognition, face recognition, image sequences, learning (artificial intelligence), neural nets, video signal processing

Full citation:
M. Tavakolian, C. G. Bermudez Cruces and A. Hadid, “Learning to Detect Genuine versus Posed Pain from Facial Expressions using Residual Generative Adversarial Networks,” 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), Lille, France, 2019, pp. 1-8. doi: 10.1109/FG.2019.8756540

DOI:
https://doi.org/10.1109/FG.2019.8756540

Read the publication here:
http://urn.fi/urn:nbn:fi-fe2019121848691