Integration of kinematic components in the perception of emotional facial expressions

C Curio1, E Chiovetto2, M A Giese3

1Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
2Department of Cognitive Neurology, Computational Sensomotorics, HIH, CIN, University Clinic Tuebingen, Germany
3Computational Sensomotorics, HIH, CIN, BCCN, University Clinic Tuebingen, Germany

Contact: cristobal.curio@gmail.com

The idea that complex facial or body movements are composed of simpler components (usually referred to as ‘movement primitives’ or ‘action units’) is common in motor control (Chiovetto et al., 2010) as well as in the study of facial expressions (Ekman & Friesen, 1978). However, such components have rarely been extracted from real facial movement data.

METHODS: We estimated spatio-temporal components that capture the major part of the variance of dynamic facial expressions, using a motion retargeting model for 3D facial animation (Curio et al., 2010) and applying dimension reduction methods (non-negative matrix factorization, NMF, and anechoic demixing). The estimated components were used to generate artificial stimuli, with which we assessed the minimal number of components required in a perceptual Turing test, as well as their contributions to expression classification and to expressiveness ratings.

RESULTS: For an anechoic mixing model, two components were sufficient for perfect reconstruction of the original expression. One component was often sufficient for classification, whereas expressiveness ratings tended to depend gradually on two or more components.

Acknowledgements: Supported by European Commission grants FP7-ICT TANGO 249858, AMARSi 248311, and FP7-PEOPLE-2011-ITN (Marie Curie) ABC PITN-GA-011-290011; Deutsche Forschungsgemeinschaft (DFG) grants GZ: CU 149/1-2, GI 305/4-1, and KA 1258/15-1; and the German Federal Ministry of Education and Research (BMBF; FKZ: 01GQ1002A).
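The abstract does not give implementation details, but the NMF-based decomposition it describes can be illustrated with a minimal sketch: facial movement data are arranged as a non-negative matrix (channels × frames) and factored into a small number of spatial and temporal components. The data below are synthetic and purely illustrative (two hypothetical temporal sources mixed into 30 animation channels); the anechoic model mentioned in the abstract additionally allows per-channel time delays of the sources, which standard NMF, used here for simplicity, does not.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical data: 30 facial-animation channels over 100 frames, built
# from 2 non-negative temporal sources (matching the abstract's finding
# that two components sufficed for reconstruction).
t = np.linspace(0.0, 1.0, 100)
sources = np.vstack([
    np.clip(np.sin(np.pi * t), 0.0, None),       # slow opening/closing source
    np.clip(np.sin(2 * np.pi * t), 0.0, None),   # faster transient source
])
mixing = rng.random((30, 2))                     # non-negative spatial weights
X = mixing @ sources                             # (30 channels, 100 frames)

# Factor X ≈ W @ H: W holds spatial components, H temporal components.
model = NMF(n_components=2, init="nndsvda", max_iter=1000)
W = model.fit_transform(X)                       # shape (30, 2)
H = model.components_                            # shape (2, 100)

err = float(np.linalg.norm(X - W @ H))
print(W.shape, H.shape, round(err, 4))
```

Reconstructing stimuli from a truncated set of components (e.g. only the first column of `W` and row of `H`) would correspond to the reduced-component stimuli used in the perceptual experiments.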
