Spatial filtering vs edge integration: comparing two computational models of lightness perception

T Betz1, M Maertens1, F A Wichmann2

1Modelling of Cognitive Processes, Berlin Institute of Technology / BCCN Berlin, Germany
2Neural Information Processing Group, University of Tübingen, Germany

Contact: torsten.betz@bccn-berlin.de

The goal of computational models of lightness perception is to predict the perceived lightness of any surface in a scene from the luminance values at the corresponding retinal image locations. Here, we compare two approaches that have been taken towards that goal: the oriented difference-of-Gaussian (ODOG) model [Blakeslee and McCourt, 1999, Vision Research, 39(26): 4361–4377], and a model based on the integration of edge responses [Rudd, 2010, Journal of Vision, 10(14): 1–37]. We reimplemented the former model and extended it by replacing the ODOG filters with steerable pyramid filters [Simoncelli and Freeman, 1995, IEEE ICIP Proceedings, vol. 3], making the output less dependent on the specific spatial frequencies present in the input. We also implemented Rudd's edge integration idea and supplemented it with an image-segmentation stage to make it applicable to more complex stimuli than the ones he considered. We apply both models to a range of stimuli that have been used experimentally to probe lightness perception (e.g. disk-annulus configurations, White's illusion, Adelson's checkerboard) and compare the model outputs with human lightness responses. The discrepancies between the models and the human data can be used to infer which model components are critical for capturing human lightness perception.
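To illustrate the spatial-filtering approach, the sketch below shows a minimal oriented difference-of-Gaussians filter bank in the spirit of the ODOG model. It is a simplified illustration only: the number of orientations and scales, the filter sigmas, the 2:1 surround elongation, the frequency weighting exponent, and the function name odog_response are all assumptions made here for clarity and are not the published parameters or the implementation used in this work.

    # Minimal sketch of an ODOG-style filter bank (illustrative parameters only,
    # not the values of Blakeslee & McCourt, 1999).
    import numpy as np
    from scipy.ndimage import gaussian_filter, rotate

    def odog_response(image, n_orient=6, n_scales=7, base_sigma=1.0):
        """Return an ODOG-style lightness map for a 2-D luminance image."""
        orientation_maps = []
        for k in range(n_orient):
            angle = 180.0 * k / n_orient
            # Rotate the image instead of the filters; up to interpolation error
            # this is equivalent to filtering with rotated oriented filters.
            rot = rotate(image.astype(float), angle, reshape=False, mode='nearest')
            acc = np.zeros_like(rot)
            for s in range(n_scales):
                sigma = base_sigma * 2 ** s
                center = gaussian_filter(rot, sigma)
                # Surround elongated 2:1 along one axis gives orientation tuning.
                surround = gaussian_filter(rot, (sigma, 2 * sigma))
                # Octave-spaced scales, higher spatial frequencies weighted more
                # (shallow power law; exponent chosen here for illustration).
                weight = (1.0 / 2 ** s) ** 0.1
                acc += weight * (center - surround)
            # Rotate the pooled response back into the image frame.
            acc = rotate(acc, -angle, reshape=False, mode='nearest')
            # Normalize each orientation channel by its RMS energy before pooling.
            acc /= np.sqrt(np.mean(acc ** 2)) + 1e-12
            orientation_maps.append(acc)
        return np.sum(orientation_maps, axis=0)

For the edge-integration approach, the core idea for a simple disk-annulus display can be written as a weighted sum of log-luminance ratios at the borders, roughly lightness ≈ w1·log(L_disk/L_annulus) + w2·log(L_annulus/L_surround), with weights decreasing with distance from the target; the weights and the segmentation stage used to generalize this to complex images are model components, not illustrated here.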
