Current training simulators for police officers and soldiers lack two critical qualities for establishing a compelling sense
of immersion within a virtual environment: a strong disincentive to getting shot, and accurate feedback about the bodily
location of a shot. This research addresses these issues with a hardware architecture for a Tactical Tactile Training
Vest (T3V). In this study, we evaluated the design space of impact “tactors” and present a T3V prototype.
The research focuses on determining the design parameters that maximize the impact energy a tactor delivers. The
energy transferred to the projectile directly relates to the quality of the disincentive. The complete T3V design will
include an array of these tactors on front and back of the body to offer accurate spatial feedback.
The impact tactor created and tested for this research is an electromagnetic projectile launcher, similar to a solenoid, but
lower profile and higher energy. Our best tactor produced a projectile energy of approximately 0.08 joules at an
efficiency just above 0.1%. Users in an informal pilot study described the feeling as "surprising," "irritating," and
"startling," suggesting that this level of force is approaching our target level of disincentive.
In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-
degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these
camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views,
two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described
using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of
how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of
interfaces for surveillance and the teleoperation of remote systems.
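The four-view splitting described above can be sketched as a mapping from a horizontal viewing angle to a camera index and a texture coordinate within that camera's image. This is a minimal illustration assuming ideal pinhole virtual cameras facing 0°, 90°, 180°, and 270°; the function name and the tangent-based projection are my assumptions, not the paper's implementation.

```python
import math

# Map a horizontal viewing angle in a 360-degree environment to one of
# n 90-degree virtual-camera views and a horizontal texture coordinate
# u in [0, 1] within that view. Assumes ideal pinhole cameras spaced
# evenly around the viewpoint (illustrative, not the paper's code).

def angle_to_view(pan_deg, fov_deg=90.0):
    n_cams = int(round(360.0 / fov_deg))          # e.g. four 90-degree views
    pan = pan_deg % 360.0
    cam = int((pan + fov_deg / 2) // fov_deg) % n_cams  # nearest camera
    center = cam * fov_deg                        # that camera's facing angle
    # Signed offset from the camera's optical axis, wrapped to (-180, 180].
    offset = (pan - center + 180.0) % 360.0 - 180.0
    # Pinhole projection: horizontal image position scales with tan(offset).
    half = math.tan(math.radians(fov_deg / 2))
    u = (math.tan(math.radians(offset)) / half + 1.0) / 2.0
    return cam, u

print(angle_to_view(0.0))    # camera 0, center of its image
print(angle_to_view(100.0))  # camera 1, right of center
```

The same mapping, applied in reverse, determines which virtual camera's render texture supplies each pixel when the feeds are texture-mapped onto an arbitrary display shape, which is what lets the 4x90°, 2x180°, and 1x360° interfaces show the same environment.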
Stereoscopic displays are an increasingly prevalent tool for experiencing virtual environments, and the inclusion of
stereo has the potential to improve distance perception within the virtual environment. When multiple users
simultaneously view the same stereoscopic display, only one user experiences the projectively correct view of the virtual
environment, and all other users view the same stereoscopic images while standing at locations displaced from the center
of projection (CoP). This study was designed to evaluate the perceptual distortions caused by displacement from the
CoP when viewing virtual objects in the context of a virtual scene containing stereo depth cues. Judgments of angles
were distorted after leftward and rightward displacement from the CoP. Judgments of object depth were distorted after
forward and backward displacement from the CoP. However, perceptual distortions of angle and depth were smaller
than predicted by a ray-intersection model based on stereo viewing geometry. Furthermore, perceptual distortions were
asymmetric, leading to different patterns of distortion depending on the direction of displacement. This asymmetry also
conflicts with the predictions of the ray-intersection model. The presence of monocular depth cues might account for
departures from model predictions.
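The ray-intersection model used as the baseline prediction can be illustrated in a top-down 2-D sketch: each virtual point is rendered as a pair of screen points that are geometrically correct for eyes at the CoP, and a displaced viewer's predicted percept is where rays from the displaced eyes through those same screen points intersect. The eye separation, viewing distance, and function names below are illustrative assumptions, not the study's apparatus or code.

```python
# Top-down 2-D sketch of the ray-intersection model: screen lies along
# z = 0, eyes sit at negative z, a virtual point sits behind the screen.
# All numeric values are assumed for illustration.

def project(eye, point):
    """X-coordinate where the ray eye->point crosses the screen z = 0."""
    ex, ez = eye
    px, pz = point
    t = -ez / (pz - ez)               # ray parameter where z reaches 0
    return ex + t * (px - ex)

def intersect(eye_l, x_l, eye_r, x_r):
    """Intersect rays eye_l->(x_l, 0) and eye_r->(x_r, 0); return (x, z)."""
    (lx, lz), (rx, rz) = eye_l, eye_r
    dlx, dlz = x_l - lx, -lz          # direction of the left-eye ray
    drx, drz = x_r - rx, -rz          # direction of the right-eye ray
    denom = dlx * drz - dlz * drx     # 2-D cross product of directions
    t = ((rx - lx) * drz - (rz - lz) * drx) / denom
    return lx + t * dlx, lz + t * dlz

ipd = 0.065                           # assumed 6.5 cm interpupillary distance
cop = (0.0, -1.0)                     # design eyepoint 1 m from the screen
virtual = (0.0, 0.5)                  # virtual point 0.5 m behind the screen

# Screen points rendered correctly for a viewer at the CoP.
x_l = project((cop[0] - ipd / 2, cop[1]), virtual)
x_r = project((cop[0] + ipd / 2, cop[1]), virtual)

# Predicted percept for a viewer displaced 0.3 m to the right of the CoP.
px, pz = intersect((0.3 - ipd / 2, -1.0), x_l, (0.3 + ipd / 2, -1.0), x_r)
print(f"predicted percept: x={px:.3f} m, z={pz:.3f} m (true point: 0.0, 0.5)")
```

For purely lateral displacement this model predicts a lateral shear of the percept with depth unchanged, which is consistent with the finding above that sideways displacement distorted angle judgments while forward and backward displacement distorted depth; the study's contribution is that the measured distortions were smaller and more asymmetric than such a model predicts.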
The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for
reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training
environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainee interaction as if
co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing
hardware for live-virtual interaction, tracking entities in occluded spaces, and building an architecture that supports
real-time transfer of entity information across many systems. This paper discusses a system that overcomes these
challenges to enable LVC interaction in a reconfigurable, mixed reality environment.
The system was developed and tested in the Veldt, an immersive, reconfigurable, mixed reality LVC training
environment for the dismounted warfighter at ISU, both to overcome LVC interaction challenges and to serve as a test
bed for cutting-edge technology meeting future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and
virtually through commercial and internally developed game engines. An evaluation involving military-trained
personnel found the system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the
battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server
process all live and virtual entity data from system components to create a cohesive virtual world across all distributed
simulators and game engines in real time. The result is a rarely achieved level of LVC interaction: real-time training
across many distributed systems spanning multiple physical and virtual immersive environments.
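The central-server pattern described above, in which every live and virtual entity update is funneled through one server and rebroadcast to all connected simulators and game engines, can be sketched minimally as follows. The record fields, JSON encoding, and callback-based fan-out are assumptions for illustration; the Veldt's actual wire protocol and architecture are not specified here.

```python
import json
from dataclasses import dataclass, asdict

# Minimal sketch of entity-state fan-out through a central server.
# Each update is normalized into one record, stored as the latest
# world state, and rebroadcast to every subscribed simulator.

@dataclass
class EntityState:
    entity_id: str
    kind: str                 # "live" or "virtual"
    position: tuple           # (x, y, z) in a shared world frame
    heading_deg: float
    timestamp_ms: int

class CommServer:
    def __init__(self):
        self.subscribers = []           # one callback per simulator
        self.world = {}                 # latest state keyed by entity id

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, state):
        """Record the latest state and fan it out to every simulator."""
        self.world[state.entity_id] = state
        message = json.dumps(asdict(state))
        for deliver in self.subscribers:
            deliver(message)

server = CommServer()
received = []
server.subscribe(received.append)       # a stand-in "simulator"
server.publish(EntityState("trainee-1", "live", (12.0, 0.0, 4.5), 90.0, 1000))
print(received[0])
```

Keeping the latest state per entity on the server is one way to give late-joining or reconnecting simulators a consistent snapshot of the shared virtual world, rather than requiring them to replay every past update.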