A Model for Non-Retinotopic Processing

A Clarke1, H Ogmen2, M Herzog1

1Laboratory of Psychophysics, École Polytechnique Fédérale de Lausanne, Switzerland
2Department of Electrical and Computer Engineering, University of Houston, TX, United States

Contact: aaron.clarke@epfl.ch

The visual system transforms the retinal image of moving objects into an object-centered reference frame. For example, a person in a moving train appears to walk slowly rather than at the walking speed plus the train's speed, i.e., the train's speed is discounted. Object-centered motion cannot easily be explained by classical motion models, because they pick up only retinotopic motion. We propose an alternative, two-step model in which motion is computed in a nested, hierarchical fashion. First, we compute the main object's motion (e.g., the train's), forming edge-based objects using the Gestalt grouping principles of proximity and good continuation. Within this reference frame, we then compute the motion of other elements/objects (e.g., the person in the train). To this end, the model tracks objects and their parts across time, discounting the objects' motions when computing their parts' motions. Using this simple procedure, our current model outperforms all prior non-retinotopic processing models. As an example, we show how retinotopic motion, non-retinotopic motion, and the transition between the two can be explained with the Ternus-Pikler Display (TPD).
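
A minimal sketch of the discounting step in Python, assuming objects and their parts have already been tracked across two frames (all names here are illustrative, not taken from the actual model):

    # Sketch of the discounting step (hypothetical names; assumes object and
    # part positions have already been tracked across two frames).

    def velocity(pos_prev, pos_curr, dt):
        """Retinotopic velocity as displacement per unit time."""
        return tuple((c - p) / dt for p, c in zip(pos_prev, pos_curr))

    def object_centered_velocity(part_prev, part_curr, obj_prev, obj_curr, dt):
        """Part motion with the enclosing object's motion discounted."""
        v_part = velocity(part_prev, part_curr, dt)  # part motion on the retina
        v_obj = velocity(obj_prev, obj_curr, dt)     # main object (e.g., train) motion
        return tuple(vp - vo for vp, vo in zip(v_part, v_obj))

    # Example: a "person" drifting 0.5 deg/s inside a "train" moving at 10 deg/s.
    print(object_centered_velocity((0.0, 0.0), (10.5, 0.0), (0.0, 0.0), (10.0, 0.0), 1.0))
    # -> (0.5, 0.0): the train's speed is discounted, leaving the slow walk.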
