Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. 3D stereoscopic visualization technology has been used in a growing number of consumer products, such as 3D televisions and 3D glasses for gaming systems. The technology builds on the fact that the human brain develops depth perception by retrieving information from the two eyes: the brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken from slightly different positions, as shown in Figure 1, can create an illusion of depth. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to a 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
In a geographic information system (GIS), suitability analysis is used to model the spatial distribution of suitability within a region of interest with regard to a planning goal. The analysis combines multiple spatially overlapping geospatial source datasets, each of which encodes a factor that contributes a given weight to the overall suitability. "Possibility space" refers to an event space that represents all possible outcomes of the suitability analysis. This paper proposes an interactive possibility space for real-time visualization and exploration, with the goal of helping users understand meaningful relationships between variable combinations and suitability outcomes. A case study on siting wind farm locations in northwest Iowa is presented to demonstrate the practical application and usefulness of the possibility space.
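The weighted combination underlying such a suitability analysis can be sketched as follows. This is a minimal illustration, assuming each factor is a normalized raster with scores in [0, 1]; the layer names, weights, and values are hypothetical and not taken from the paper.

```python
# Minimal weighted-overlay sketch: each factor raster contributes to the
# overall suitability in proportion to its weight. Layer names and weights
# below are illustrative assumptions, not values from the case study.

def suitability(layers, weights):
    """Combine spatially overlapping factor rasters into one suitability raster.

    layers  -- dict mapping factor name to a 2D grid of scores in [0, 1]
    weights -- dict mapping factor name to its weight (weights sum to 1)
    """
    names = list(layers)
    rows = len(layers[names[0]])
    cols = len(layers[names[0]][0])
    out = [[0.0] * cols for _ in range(rows)]
    for name in names:
        w = weights[name]
        grid = layers[name]
        for r in range(rows):
            for c in range(cols):
                out[r][c] += w * grid[r][c]
    return out

# Hypothetical 2x2 factor grids for a wind-farm siting example.
layers = {
    "wind_speed":    [[0.9, 0.4], [0.8, 0.2]],
    "grid_distance": [[0.5, 0.6], [0.7, 0.1]],
}
weights = {"wind_speed": 0.7, "grid_distance": 0.3}
result = suitability(layers, weights)
```

A possibility space can then be generated by re-running this combination over many candidate weight vectors, yielding one suitability outcome per variable combination.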
In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment using these interfaces is described. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
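The partitioning of the environment among virtual cameras can be sketched as follows. This is a minimal illustration, assuming each camera's horizontal field of view is 360 divided by the number of cameras; the function and variable names are hypothetical, not taken from the paper or any game engine API.

```python
# Minimal sketch: partition a full 360-degree horizontal view among n
# virtual cameras, each centered on its own yaw angle. Names are
# illustrative assumptions.

def camera_yaws(num_cameras):
    """Return (yaw_center_degrees, fov_degrees) pairs covering 360 degrees."""
    fov = 360.0 / num_cameras
    return [(i * fov + fov / 2.0, fov) for i in range(num_cameras)]

# The three interface variants described above:
four = camera_yaws(4)  # four 90-degree views
two = camera_yaws(2)   # two 180-degree views
one = camera_yaws(1)   # one 360-degree view
```

In an engine, each (yaw, fov) pair would configure one virtual camera whose render target is texture-mapped onto the display shape.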