As we move, we receive feedback from environmental information and from internal self-motion cues (proprioception). This co-variation calibrates our action system to the environment and is integral to knowing where we are within a body-scaled space. While the calibration established in the real world is robust enough to support walking without vision to a previously seen target, we propose that the action system needs to be recalibrated when scenes come from a virtual environment (VE). We will present results from experiments in which subjects walked without vision to targets in briefly displayed scenes from virtual and real environments. The only available feedback from external sources was a single beep emitted at the end of a trial, implicitly signaling the target distance. Unlike performance in the real world, in the initial trials within the VE, subjects’ egocentric reference frame shifted in concert with the changing scene context. Over time, subjects became less dependent on the unreliable scene context, and performance in the VE approached that in the real world. This change in behavior over time is consistent with subjects adopting a more consistent external cue (the beep) to calibrate their action systems.
Supported by NIH EY07839 to KAT.
This paper presents a neural network model that emulates the ability of the human visual system to detect changes in heading direction, i.e., curvilinear motion. The network consists of three layers. The input to the network is a two-dimensional velocity field, and the output is a signal representing the magnitude and direction of the rotational component in the flow. The first layer of the network computes local difference vectors of the velocity field to define the orientation of the translational field lines. The second layer extracts the instantaneous heading direction from the translational component of the velocity field. The third layer determines the rotational component of the velocity field. The magnitude of perceived curvilinear motion is directly proportional to the magnitude of the rotational component. The simulation results match psychophysical data from four human subjects at both slow (2.0 m/s) and fast (26.4 m/s) locomotion speeds. The biological feasibility of this neural network is supported by findings in biological vision systems.
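The core idea above, separating a velocity field into a heading-aligned translational part and a rotational part, can be illustrated with a toy computation. The sketch below is not the three-layer network described in the abstract; it is a minimal linear decomposition under simplifying assumptions: the translational flow is modeled as radiating from the heading point and scaling with a known per-point inverse depth, the rotational (yaw) component is modeled as a uniform vector, and all variable names (`true_heading`, `inv_depth`, etc.) are illustrative inventions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample image points and per-point inverse depths (assumed known here;
# a full model would have to estimate scene structure from the flow itself).
pts = rng.uniform(-1.0, 1.0, size=(200, 2))
inv_depth = rng.uniform(0.5, 2.0, size=200)

true_heading = np.array([0.2, -0.1])  # focus of expansion (heading direction)
true_rot = np.array([0.05, 0.0])      # uniform rotational flow (yaw component)

# Toy flow model: the translational part scales with inverse depth and
# radiates from the heading point; the rotational part is depth-independent.
flow = inv_depth[:, None] * (pts - true_heading) + true_rot

# Least-squares recovery of heading f and rotation r from
#   v = d * (p - f) + r   =>   d * f - r = d * p - v   (per component).
n = len(pts)
A = np.zeros((2 * n, 4))
b = np.zeros(2 * n)
A[0::2, 0] = inv_depth   # coefficient on f_x in the x-equations
A[0::2, 2] = -1.0        # coefficient on r_x
A[1::2, 1] = inv_depth   # coefficient on f_y in the y-equations
A[1::2, 3] = -1.0        # coefficient on r_y
b[0::2] = inv_depth * pts[:, 0] - flow[:, 0]
b[1::2] = inv_depth * pts[:, 1] - flow[:, 1]
est, *_ = np.linalg.lstsq(A, b, rcond=None)

heading_est, rot_est = est[:2], est[2:]
# The magnitude of rot_est plays the role of the abstract's rotational
# signal, i.e., the quantity proportional to perceived curvilinear motion.
print("heading:", heading_est, "rotation:", rot_est)
```

Because the rotational term is the same at every point while the translational term varies with position and depth, the two components are separable once depths vary, which is the same intuition behind using local difference vectors to cancel the shared rotational contribution.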