Combined visual and semantic property reconstruction of viewed objects using fMRI

X Zhang, B Devereux, A Clarke, L Tyler

Centre for Speech, Language and the Brain, University of Cambridge, United Kingdom
Contact: xueyuan@csl.psychol.cam.ac.uk

How do people process the meaning of objects? Previous reconstruction-based ("mind reading") studies have focused on reconstructing visual information alone. But viewing real objects evokes not only visual processing but also semantic representations. In this study we focused on decoding semantic representations of objects, asking whether they can be reconstructed from fMRI data collected while subjects viewed a series of object images. A semantic feature model was constructed from feature vectors derived from McRae's property norms, while a baseline visual model was obtained by projecting the images onto quadrature Gabor wavelet pairs. We conducted leave-one-out cross-validation, in which a linear model was fitted at every voxel to predict the activity evoked by the left-out stimulus; in this way, activity patterns were predicted from the encoding model. Reconstruction used Bayesian methods over an a priori set of 148,454 images: each reconstruction was computed as the average of the 100 a priori images with the highest posterior probability. To quantify reconstruction quality, we calculated the correlation between the visual and semantic features of the original images and those of their reconstructions. Average visual and semantic correlations of 0.37 and 0.41 respectively suggest that both visual and semantic properties of objects can be reconstructed from brain activity.
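The pipeline described above (voxel-wise linear encoding, posterior scoring of an image prior, top-k averaging, and correlation-based evaluation) can be sketched roughly as follows. This is a minimal illustration with synthetic random data standing in for the Gabor and property-norm features and the fMRI responses; all array shapes, the Gaussian-likelihood scoring, and the least-squares fit are assumptions for the sake of the example, not the study's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (hypothetical sizes; the study used Gabor and
# McRae-property features and real voxel responses).
n_train, n_prior, n_feat, n_vox = 60, 5000, 20, 100
F_train = rng.standard_normal((n_train, n_feat))      # stimulus features
W_true = rng.standard_normal((n_feat, n_vox))         # "true" encoding weights
Y_train = F_train @ W_true + 0.1 * rng.standard_normal((n_train, n_vox))

# 1. Fit a linear encoding model at every voxel (ordinary least squares
#    across all voxels at once).
W_hat, *_ = np.linalg.lstsq(F_train, Y_train, rcond=None)

# 2. Score each prior image by how well its predicted activity pattern
#    matches the observed pattern for a left-out stimulus. Under an
#    isotropic Gaussian noise model, the log posterior (with a flat
#    prior over the image set) is, up to constants, the negative
#    squared prediction error.
F_prior = rng.standard_normal((n_prior, n_feat))
target = F_prior[7]                                   # pretend this was viewed
y_obs = target @ W_true + 0.1 * rng.standard_normal(n_vox)
scores = -((F_prior @ W_hat - y_obs) ** 2).sum(axis=1)

# 3. Reconstruction = average of the top-k highest-scoring prior images
#    (the study averaged the top 100 of an a priori set of 148,454).
k = 100
top = np.argsort(scores)[-k:]
recon = F_prior[top].mean(axis=0)

# 4. Quality metric: correlation between the original stimulus features
#    and the reconstruction's features.
r = float(np.corrcoef(target, recon)[0, 1])
print(round(r, 2))
```

In the study this scoring and averaging would be done separately in the visual (Gabor) and semantic (property-norm) feature spaces, yielding the two correlation values reported above.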
