Multisensory Integration of Scene Perception: Semantically Congruent Soundtrack Enhances Unconsciously Processed Visual Scene

J S Tan1, C-C Cheng2, P-C Lien2, S-L Yeh1

1Department of Psychology, National Taiwan University, Taiwan
2Taipei Municipal Jianguo High School, Taiwan

Contact: makgongtan@gmail.com

We examine whether the gist of natural scenes can be extracted unconsciously and whether this extraction is affected by a semantically congruent soundtrack. The continuous flash suppression paradigm was used to render a visual scene (restaurant or street) invisible while participants listened to a soundtrack (background sounds recorded in a restaurant or on a street). This paradigm has the additional advantage of keeping the audio-visual semantic relationship opaque to participants, thereby avoiding response bias. The contrast of the visual scene presented to one eye was increased gradually while the scene was masked by dynamic Mondrians presented to the other eye. Participants were required to respond whenever they saw anything different from the Mondrians and to indicate the location of the scene (top or bottom). The released-from-suppression time for correct localization was shorter when the scene was accompanied by a semantically congruent soundtrack than by an incongruent one (Experiment 1). The semantic congruency effect was eliminated by removing critical objects (e.g., dishes or cars) and leaving only the background (Experiment 2), or by presenting only these critical objects without the background (Experiment 3). This is the first study demonstrating unconscious processing of the gist of visual scenes—which occurs only when both objects and background are included—and audio-visual integration for complex scenes.
