The search for statistically defensible evidence of repetitive visual scanning of pictorial stimuli is reviewed, drawing on experimental approaches developed in Professor Stark's Berkeley laboratory. Examples of visual scanning at the instant of perceptual organization and of scanning during information seeking are presented, including photographs from Stark's laboratory.
A custom stereoscopic video camera was built to study the impact of decreased camera separation on a stereoscopically viewed, visual-manual task resembling some aspects of surgery. The camera's field of view was matched to that of a stereoscopic laparoscope by adjusting focal length and viewing distance so that the viewer saw equivalent image content at a plane orthogonal to the viewing direction. This plane contained the point at which the left and right viewing axes converged; the images from the laparoscope and the stereo-camera match exactly only at this point, but the condition was considered a useful approximation of a match between the two image sources. Twelve naive subjects and one of the experimenters were first trained in a ring placement task using the stereo-laparoscope and then switched to the stereo-camera, which was used with camera separations ranging from 100% of the laparoscope's separation down to a biocular view corresponding to no separation. The results suggest that camera separation may be reduced 20-35% without appreciably degrading user performance; even a 50% reduction in separation yields stereoscopically supported performance much better than in the biocular condition. Thus existing laparoscopes, which use a 5 mm camera separation, may well be significantly miniaturized without causing substantial performance degradation.
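These performance results are consistent with the roughly linear dependence of the available stereo signal on camera separation: halving the separation halves the disparity, while the biocular view removes it entirely. The short sketch below is illustrative only and is not drawn from the study; the working distances and the exact-angle formula are assumptions chosen to show how relative angular disparity scales over the tested separations.

```python
import math

def angular_disparity(separation_mm, near_mm, far_mm):
    """Relative angular disparity (radians) between points at near_mm and
    far_mm for a camera pair with the given separation: the difference of
    the convergence angles subtended by the two points."""
    convergence = lambda d: 2.0 * math.atan(separation_mm / (2.0 * d))
    return convergence(near_mm) - convergence(far_mm)

# Assumed working distances for a laparoscopic ring-placement task (mm).
near, far = 80.0, 100.0

for sep in (5.0, 4.0, 3.25, 2.5, 0.0):   # 100%, -20%, -35%, -50%, biocular
    d = angular_disparity(sep, near, far)
    print(f"separation {sep:4.2f} mm -> disparity {math.degrees(d) * 3600:6.0f} arcsec")
```

With these assumed distances, the 50% separation still provides roughly half the disparity of the full 5 mm baseline, consistent with the finding that it supports performance far better than the biocular view.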
During vergence eye movements, the effective separation between the two eyes varies because the nodal point of each eye is offset from the center of rotation. As a result, the projected distance of a binocularly presented virtual object changes as the observer converges and diverges. A model of eye and stimulus position shows that if an observer converges toward a binocular virtual stimulus that is fixed on the display, the projected stimulus shifts outward, away from the observer; conversely, if the observer diverges toward such a stimulus, the projected stimulus shifts inward. For example, if an observer diverges from 25 cm to 300 cm, a binocular virtual stimulus projected at 300 cm will shift inward to 241 cm. Accurate depiction of a fixed stimulus distance in a binocular display therefore requires that the stimulus position on the display surface be adjusted in real time to compensate for the observer's eye movements.
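The size of this shift can be illustrated with a simple two-dimensional model of the geometry just described. The sketch below is not taken from the paper; the interocular distance (6.2 cm), nodal-point offset (0.6 cm), and display distance (25 cm) are assumed values, chosen because they approximately reproduce the 300 cm to 241 cm example above.

```python
import math

# Top-down 2D model, all distances in cm: x is lateral position, y is
# distance from the interocular axis. Parameter values are assumptions
# for illustration, not taken from the paper.
IPD    = 6.2    # separation of the eyes' centers of rotation
OFFSET = 0.6    # nodal point lies this far in front of the center of rotation
SCREEN = 25.0   # distance of the display surface

def nodal_point(side, fixation):
    """Nodal point of one eye (side = +1 right, -1 left) while both eyes
    fixate the midline point (0, fixation)."""
    cx = side * IPD / 2.0                      # center of rotation
    theta = math.atan2(abs(cx), fixation)      # rotation toward the midline
    return (cx - side * OFFSET * math.sin(theta), OFFSET * math.cos(theta))

def screen_point(side, target, fixation):
    """Display position that puts the stimulus on the ray from the nodal
    point to (0, target) while the observer fixates at `fixation`."""
    nx, ny = nodal_point(side, fixation)
    t = (SCREEN - ny) / (target - ny)
    return nx + t * (0.0 - nx)

def projected_distance(screen_x_right, fixation):
    """Distance at which the two eyes' rays through the fixed screen points
    intersect (on the midline, by symmetry) for a new vergence state."""
    nx, ny = nodal_point(+1, fixation)
    slope = (SCREEN - ny) / (screen_x_right - nx)   # dy per unit dx along the ray
    return ny + (0.0 - nx) * slope                  # y where the ray crosses x = 0

# Stimulus drawn to appear at 300 cm while the observer fixates at 25 cm ...
x_right = screen_point(+1, target=300.0, fixation=25.0)
# ... then the observer diverges to fixate at 300 cm: the fixed screen points
# now project the stimulus closer than intended.
print(f"{projected_distance(x_right, fixation=300.0):.0f} cm")   # about 241 cm
```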
Excessive end-to-end latency and insufficient update rate continue to be major limitations of virtual environment (VE) system performance. Beginning from a typical baseline VE in which a spatial tracker is polled to deliver data via an RS-232 interface at each update of a single application program, we examined a series of hardware and software reconfigurations with the aim of reducing end-to-end latency and increasing update rate. These reconfigurations included: (1) multiple asynchronous UNIX processes communicating via shared memory; (2) continuous streaming rather than polled tracker operation; (3) multiple rather than single tracker instruments; and (4) higher bandwidth IEEE-488 parallel communication between tracker and computer. Starting from an average latency of 65 msec and an update rate of 20 Hz for a standard 1000 polygon test VE, our most successful implementation to date runs at 60 Hz (the maximum achievable with our graphics display hardware) with approximately 30 msec average latency. Because our equipment and architecture are based on widely available hardware (i.e., SGI computer, Polhemus Fastrak) and software (i.e., Sense8 WorldToolKit), our techniques and results are broadly applicable and easily transferable to other VE systems.
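As an illustration of reconfigurations (1) and (2), the sketch below separates tracker acquisition from the application loop: one process streams samples into shared memory while the rendering loop simply reads the most recent pose instead of blocking on a serial poll every frame. This is a schematic example, not the system described above; the tracker is simulated, and a real implementation would read the Polhemus over its serial or IEEE-488 interface inside read_tracker().

```python
import time
import multiprocessing as mp

def read_tracker():
    """Stand-in for one streamed tracker record (x, y, z, timestamp).
    A real system would decode a Polhemus report here."""
    return (0.0, 0.0, 0.0, time.time())

def tracker_process(shared, lock):
    """Continuously stream tracker samples into shared memory."""
    while True:
        sample = read_tracker()
        with lock:
            shared[:] = sample          # keep only the newest sample
        time.sleep(1.0 / 120.0)         # tracker's own report rate

def render_loop(shared, lock, frames=60):
    """Asynchronous application loop: use the latest pose, never block."""
    for _ in range(frames):
        with lock:
            x, y, z, stamp = shared[:]
        age_ms = (time.time() - stamp) * 1000.0
        # ... draw the test scene using (x, y, z) ...
        print(f"pose age {age_ms:5.1f} ms")
        time.sleep(1.0 / 60.0)          # display-limited frame time

if __name__ == "__main__":
    lock = mp.Lock()
    shared = mp.Array("d", 4)           # x, y, z, timestamp
    shared[:] = read_tracker()          # seed before rendering starts
    mp.Process(target=tracker_process, args=(shared, lock), daemon=True).start()
    render_loop(shared, lock)
```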
This paper describes a testbed and method for characterizing the dynamic response of the type of spatial displacement transducers commonly used in VE applications. The testbed consists of a motorized rotary swing arm that imparts known displacement inputs to the VE sensor. The experimental method involves a series of tests in which the sensor is displaced back and forth at a number of controlled frequencies that span the bandwidth of volitional human movement. During the tests, actual swing arm angle and reported VE sensor displacements are collected and time stamped. Because of the time stamping technique, the response time of the sensor can be measured directly, independent of latencies in data transmission from the sensor unit and any processing by the interface application running on the host computer. Analysis of these experimental results allows sensor time delay and gain characteristics to be determined as a function of input frequency. Results from tests of several different VE spatial sensors are presented here to demonstrate use of the testbed and method.
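One way the delay and gain estimates could be extracted from such time-stamped records is by complex demodulation at each drive frequency: the sensor's amplitude ratio gives the gain, and its phase lag relative to the swing arm, divided by the angular drive frequency, gives the time delay. The sketch below is an illustrative analysis under assumed data formats, not the testbed's actual processing; the synthetic 2 Hz test simply checks that a 30 ms lag is recovered.

```python
import numpy as np

def gain_and_delay(t_ref, ref, t_sensor, sensor, freq_hz):
    """Estimate sensor gain and time delay at one drive frequency from
    time-stamped swing-arm (reference) and sensor angle traces."""
    def phasor(t, x):
        x = x - np.mean(x)                        # remove any DC offset
        z = np.exp(-2j * np.pi * freq_hz * t)     # demodulate at the drive frequency
        return 2.0 * np.mean(x * z)               # complex amplitude at freq_hz
    a_ref, a_sen = phasor(t_ref, ref), phasor(t_sensor, sensor)
    gain = abs(a_sen) / abs(a_ref)
    phase = np.angle(a_sen / a_ref)               # sensor phase relative to the input
    delay_s = -phase / (2.0 * np.pi * freq_hz)    # positive delay = sensor lags
    return gain, delay_s

# Synthetic check: a sensor that lags a 2 Hz, 20-degree swing by 30 ms at gain 0.95.
t = np.arange(0.0, 5.0, 1.0 / 240.0)
arm = 20.0 * np.sin(2.0 * np.pi * 2.0 * t)
sen = 0.95 * 20.0 * np.sin(2.0 * np.pi * 2.0 * (t - 0.030))
g, d = gain_and_delay(t, arm, t, sen, freq_hz=2.0)
print(f"gain {g:.2f}, delay {d * 1000:.1f} ms")   # approximately 0.95 and 30 ms
```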
Proc. SPIE 1666, Human Vision, Visual Processing, and Digital Display III
Pictorial display aids represent synthetic environments within which users interact with symbolic elements representing objects and processes in the real world. The design of these environments challenges researchers to understand which elements of the physical environment make it predictable and understandable. By incorporating these aspects of the physical world, useful representation aids can improve the naturalness of their symbolic representation, geometric structure, and dynamic response.
Virtual environment interfaces to computer programs are currently being developed in several diverse application areas. Users of virtual environments will require many different methods of interacting with the environments and the objects in them. This paper reports on our use of virtual menus as a method of interacting with virtual environments. Several aspects of virtual environments make menu interactions different from interactions with conventional menus. We review the relevant aspects of conventional menus and virtual environments in order to provide a frame of reference for the design of virtual menus. We then discuss the features and interaction methodologies of two versions of virtual menus that have been developed and used in our lab, examine the problems associated with the original version, and describe the enhancements incorporated into the current version.