Cross-modal adaptation of facial expression perception

X Wang, W Lau, A Hayes, H Xu

Division of Psychology, Nanyang Technological University, Singapore
Contact: xuhong@ntu.edu.sg

While visual adaptation is well explored, relatively few studies have examined cross-modal adaptation (Fox & Barton, 2007). Here, we investigated whether adaptation to an auditory signal can bias the perception of facial expression. We adapted participants to spoken sentences with happy content delivered in a happy voice, and measured subsequent judgments of facial emotion (auditory -> visual). We found no significant aftereffect. In a second experiment, we adapted participants to the same happy spoken sentences paired with a happy or sad face, and tested facial expression judgments (auditory + visual -> visual); we also measured simple visual adaptation (visual -> visual). Again, the additional auditory signal neither increased nor decreased the aftereffect. However, the auditory signal did reduce reaction times, and this reduction depended on the co-presented visual signal: after happy-face adaptation, the reduction was significant when the test faces were happy; after sad-face adaptation, it occurred when the test faces were sad: a priming effect. These findings suggest that, rather than producing a cross-modal aftereffect, the adapting auditory signal acts as a prime, and this priming effect depends on the emotional state (happy/sad) of the other modality (vision) during adaptation.
