We present a framework for the study of active vision, i.e., the functioning of the visual system during actively
self-generated body movements. In laboratory settings, human vision is usually studied with a static observer
looking at static or, at best, dynamic stimuli. In the real world, however, humans constantly move within dynamic
environments. The resulting visual inputs are thus an intertwined mixture of self- and externally-generated
movements. To fill this gap, we developed a virtual environment integrated with a head-tracking system in which
the influence of self- and externally-generated movements can be manipulated independently. As a proof of
principle, we studied perceptual stationarity of the visual world during lateral translation or rotation of the head.
The movement of the visual stimulus was thus parametrically tethered to self-generated movements. We found
that estimates of object stationarity were less biased and more precise during head rotation than during translation.
In both cases, the visual stimulus had to partially follow the head movement to be perceived as stationary. We
discuss a range of possible applications of our setup, among them the study of shape perception in active and
passive conditions, in which the same optic flow is replayed to stationary observers.
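The parametric tethering of the stimulus to self-generated movement can be illustrated with a minimal sketch. This is not the authors' code; the function name, gain parameter, and values are hypothetical, and it only shows the gain-coupling idea: a gain of 0 yields a world-fixed stimulus, a gain of 1 yields a stimulus that fully follows the head, and intermediate gains mix self- and externally-generated motion.

```python
def update_stimulus_position(head_displacement_cm: float, gain: float) -> float:
    """Return the stimulus displacement for a given head displacement.

    gain = 0.0 -> stimulus fixed in the world (purely external motion)
    gain = 1.0 -> stimulus fully follows the head (purely self-generated)
    Intermediate gains produce a parametric mixture of the two.
    """
    return gain * head_displacement_cm

# Hypothetical example: with a gain of 0.3, a 10 cm lateral head
# translation shifts the stimulus 3 cm in the same direction.
print(update_stimulus_position(10.0, 0.3))
```

The gain at which observers judge the stimulus to be stationary then quantifies perceptual stationarity: a perfectly veridical percept would correspond to a gain of 0, while the finding above implies a nonzero gain.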