Teleoperated vehicles are playing an increasingly important role in a variety of military functions. While advantageous
in many respects over their manned counterparts, these vehicles also pose unique challenges when it comes to safely
avoiding obstacles. Not only must operators cope with difficulties inherent to the manned driving task, but they must
also perform many of the same functions with a restricted field of view, limited depth perception, potentially disorienting
camera viewpoints, and significant time delays. This work presents a constraint-based method for enhancing operator
performance by seamlessly coordinating human and controller commands. The method uses onboard
LIDAR sensing to identify environmental hazards, designs a collision-free path homotopy traversing that environment,
and coordinates the control commands of a driver and an onboard controller to ensure that the vehicle trajectory remains
within a safe homotopy. This system's performance is demonstrated via off-road teleoperation of a Kawasaki Mule in an
open field among obstacles. In these tests, the system safely avoids collisions and maintains vehicle stability even in the
presence of "routine" operator error, loss of operator attention, and complete loss of communications.
This paper presents a real-time motion estimation module for ground vehicles based on the fusion of monocular visual
odometry and low-cost inertial measurement unit data. The system features a novel algorithmic scheme enabling
accurate and robust scale estimation and odometry at high speeds. Results of multiple performance characterization
experiments (on rough terrain at speeds up to 20 mph and on smooth roadways at speeds up to 75 mph) are presented.
The prototype system demonstrates high precision (relative distance error below 1%, below 0.5% on paved roads, and a
yaw drift rate of approximately 2 degrees per km) in multiple configurations, including various optics and vehicles.
Performance limitations, including those specific to monocular vision, are analyzed and directions for further
improvements are outlined.
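As a rough illustration of why an inertial or speed reference helps monocular odometry, the sketch below recovers the unknown monocular scale by least-squares fitting of up-to-scale visual-odometry translation norms to distances derived from an independent speed source. This is a generic textbook-style formulation, not the paper's fusion algorithm; the function names and synthetic data are assumptions.

```python
# Generic monocular scale recovery sketch (not the paper's algorithm).
import numpy as np

def estimate_scale(vo_translations, speeds, dt):
    """vo_translations: (N, 3) up-to-scale frame-to-frame translations from VO.
       speeds: (N,) vehicle speed in m/s from an independent sensor (e.g., IMU/odometer).
       Returns the scalar s minimizing sum_i (s * |t_i| - v_i * dt)^2."""
    vo_norms = np.linalg.norm(vo_translations, axis=1)
    metric_dist = np.asarray(speeds) * dt
    return float(np.dot(vo_norms, metric_dist) / np.dot(vo_norms, vo_norms))

# Synthetic example: the true scale is 2.5, and the estimate recovers it.
rng = np.random.default_rng(0)
true_t = rng.normal(size=(100, 3))
speeds = np.linalg.norm(true_t, axis=1) * 2.5 / 0.05
print(estimate_scale(true_t, speeds, dt=0.05))   # ~2.5
```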
The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the
absence of GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are
detected and extracted from a vehicle-mounted camera system, then preprocessed and read via a custom optical character
recognition (OCR) system specifically designed to cope with low quality input imagery. Vehicle motion and 3D scene
geometry estimation enables efficient and robust sign detection with low false alarm rates. Multi-level text processing
coupled with GIS database validation enables effective interpretation of even extremely low-resolution, low-contrast sign
images. This paper reports on ESARR development progress, including the system design and architecture, the image
processing framework, the localization methodologies, and results to date. Highlights of the real-time, vehicle-based
directional road-sign detection and interpretation system are described, along with the challenges encountered and the
progress made in overcoming them.
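To illustrate the kind of database validation step described above, the sketch below matches noisy OCR output against a small set of known sign strings by string similarity and rejects matches below a threshold. The place names, threshold, and use of difflib are assumptions for illustration; the ESARR implementation is not shown here.

```python
# Illustrative OCR-output validation against a (hypothetical) GIS sign list.
from difflib import SequenceMatcher

GIS_SIGNS = ["EXIT 12 SPRINGFIELD", "ROUTE 9 WEST", "DOWNTOWN NEXT RIGHT"]

def validate_ocr(ocr_text, candidates=GIS_SIGNS, min_similarity=0.6):
    """Return the best-matching database entry, or None if nothing is close
    enough to trust given low-quality input imagery."""
    best, best_score = None, 0.0
    for entry in candidates:
        score = SequenceMatcher(None, ocr_text.upper(), entry).ratio()
        if score > best_score:
            best, best_score = entry, score
    return best if best_score >= min_similarity else None

print(validate_ocr("EX1T l2 SPRlNGFIELD"))   # -> "EXIT 12 SPRINGFIELD"
```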
It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few
dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles
(UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by the US Army Engineer
Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing
(HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE
HPC research is a real-time desktop simulation application under development by the authors that provides a portal into
the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This
VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables
analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to
interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages
rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia
visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and
customizable simulation platform that gives developers a controlled, repeatable testbed for advanced simulations.
ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques
from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf
(COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several
initial applications of the system.
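One video-game-industry technique of the kind alluded to above is a fixed-timestep physics loop decoupled from rendering, which keeps vehicle dynamics repeatable regardless of frame rate. The sketch below shows that generic pattern; the update function, rates, and state variables are illustrative assumptions and are not ANVEL source code.

```python
# Generic fixed-timestep simulation loop (illustrative; not ANVEL code).
import time

PHYSICS_DT = 1.0 / 200.0      # fixed physics step for repeatable dynamics

def step_vehicle(state, dt):
    """Placeholder vehicle/terrain dynamics update (hypothetical)."""
    state["x"] += state["v"] * dt
    return state

def render(state):
    print(f"x = {state['x']:.3f} m")

def run(duration=0.25):
    state = {"x": 0.0, "v": 10.0}
    accumulator, prev = 0.0, time.perf_counter()
    end = prev + duration
    while time.perf_counter() < end:
        now = time.perf_counter()
        accumulator += now - prev
        prev = now
        # Advance physics in fixed increments so results do not depend on frame rate.
        while accumulator >= PHYSICS_DT:
            state = step_vehicle(state, PHYSICS_DT)
            accumulator -= PHYSICS_DT
        render(state)
        time.sleep(1.0 / 60.0)   # cap rendering near 60 Hz; physics stays at 200 Hz

run()
```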
Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances
in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual
UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control
system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for
one or multiple UGVs. The system, when complete, will be easy to operate and will enable a significant reduction in
operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single
operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual
servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI)
techniques from the entertainment software industry are being used to develop video-game style interfaces that require
little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive
interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less
burdensome than many current generation systems.
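As a rough sketch of the point-and-go concept, the example below converts a clicked goal pixel into a bearing under an assumed pinhole camera model and produces a proportional steering command toward it. The intrinsics, gain, and saturation limits are hypothetical; this is not the PointCom implementation or its visual-servoing algorithm.

```python
# Illustrative "point-and-go" steering sketch (assumed pinhole model and gains).
import math

FOCAL_PX = 800.0     # assumed focal length in pixels
CX = 640.0           # assumed principal-point x-coordinate (1280-wide image)
K_STEER = 0.8        # proportional gain (hypothetical tuning)

def bearing_from_pixel(u):
    """Bearing (radians) of the clicked image column relative to the optical axis."""
    return math.atan2(u - CX, FOCAL_PX)

def steering_command(u_goal):
    """Proportional steering toward the clicked goal, clipped to +/- 0.5 rad."""
    bearing = bearing_from_pixel(u_goal)
    return max(-0.5, min(0.5, K_STEER * bearing))

# Operator clicks near the right edge of the image: the vehicle turns right.
print(steering_command(1100.0))
```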
Biometrics has become an increasingly important part of the overall set of tools for securing a wide range of facilities, areas, information, and environments. At the core of any biometric verification/identification technique lies the development of the algorithm itself. Much research has been performed in this area with varying degrees of success, and it is well recognized within the biometrics community that substantial room for improvement exists. This paper describes the authors' ongoing biometrics algorithm development efforts, with an overview of the data collection, algorithm development, and testing activities. The research focuses on developing core algorithmic concepts that serve as the basis for robust techniques in both the face and speech modalities. A broad overview of the methodology is provided with some sample results.
The end goal is a robust, modular set of tools that can balance complexity against the need for accuracy and robustness across a wide variety of applications.