To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance
systems able to detect, identify, track, and alert the crew to small watercraft that may have malicious intent,
while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level
classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships
(AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical
machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting,
classifying, and tracking various maritime targets, but also able to fuse heterogeneous target information to interpret
scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-
the-loop (MITL). AIVS3 has been tested in various maritime scenarios and has demonstrated accurate and effective threat
detection performance. By reducing reliance on human eyes to monitor cluttered scenes, AIVS3 saves
manpower while increasing the accuracy of detecting and identifying asymmetric attacks for ship protection.

Content-based video retrieval from archived image/video is a very attractive capability of modern intelligent video
surveillance systems. This paper presents an innovative Semantic-Based Video Indexing and Retrieval (SBVIR) software
toolkit to help users of intelligent video surveillance to easily and rapidly search the content of large video archives to
conduct video-based forensic analysis and image intelligence. Tailored for the maritime environment, SBVIR is suited for
surveillance applications in harbors, along shorelines, or around ships. The system comprises two major modules: a video
analytics module that performs automatic target detection, tracking, classification, and activity recognition; and a retrieval
module that performs data indexing and information retrieval. SBVIR is capable of robustly detecting and tracking objects
from multiple cameras under conditions of dynamic water backgrounds and illumination changes. The system provides
hierarchical target classification among a large ontology of watercraft classes, and is capable of recognizing a variety of
boat activities. Video retrieval is achieved with both query-by-keyword and query-by-example. Users can query video
content using semantic concepts selected from a large dictionary of objects and activities, display the history linked to a
given target/activity, and search for anomalies. The user can interact with the system and provide feedback to tune the
system for improved accuracy and relevance of the retrieved data.
SBVIR has been tested in real maritime surveillance scenarios and has been shown to generate highly semantic
metadata tags that can be used during retrieval to provide the user with relevant and accurate data in real time.
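Query-by-keyword over such semantic metadata tags can be pictured as an inverted index from tags to track records. The following is a minimal sketch under our own assumptions; the record structure and tag names are hypothetical and do not come from SBVIR itself:

```python
from collections import defaultdict

# Hypothetical metadata records: each detected track carries semantic tags
# (object class, activity) as a video-analytics module might produce them.
records = [
    {"id": 1, "tags": {"speedboat", "loitering"}},
    {"id": 2, "tags": {"cargo_ship", "transiting"}},
    {"id": 3, "tags": {"speedboat", "approaching"}},
]

def build_index(records):
    """Map each semantic tag to the set of record ids that carry it."""
    index = defaultdict(set)
    for rec in records:
        for tag in rec["tags"]:
            index[tag].add(rec["id"])
    return index

def query(index, keywords):
    """Return ids of records matching ALL keywords (conjunctive query)."""
    sets = [index.get(k, set()) for k in keywords]
    return set.intersection(*sets) if sets else set()

index = build_index(records)
print(sorted(query(index, ["speedboat"])))               # [1, 3]
print(sorted(query(index, ["speedboat", "loitering"])))  # [1]
```

A real system would layer ranking and relevance feedback on top of this lookup; the sketch only shows the indexing idea.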
The effectiveness of autonomous munitions systems can be enhanced by transmitting target images to a man-in-the-loop
(MITL) as the system deploys. Based on the transmitted images, the MITL could change target priorities or conduct
damage assessment in real time. One impediment to realizing this enhancement is the limited bandwidth of the system
data-link. In this paper, an innovative pattern-based image compression technology is presented for enabling efficient
image transmission over the ultra-low-bandwidth system data link, while preserving sufficient detail in the
decompressed images for the MITL to perform the required assessments. Based on a pattern-driven image model, our
technology exploits the structural discontinuities in the image by extracting and prioritizing edge segments with their
geometric and intensity profiles. Contingent on the bit budget, only the most salient segments are encoded and
transmitted, thereby achieving scalable bit-streams. Simulation results corroborate the technology's efficiency and
establish its superior subjective quality over JPEG/JPEG2000 as well as its feasibility for real-time implementation.
Successful technology demonstrations were conducted using images from surrogate seekers in an aircraft and from a
captive-carry test-bed system. The developed technology has potential applications in a broad range of network-enabled
weapon systems.
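The extract-and-prioritize step can be sketched as follows: find connected edge segments on a gradient surface, rank them by gradient energy, and keep only those that fit the bit budget. This is a simplified stand-in for the paper's geometric/intensity-profile coding; the thresholding rule and the naive per-pixel cost model are our own assumptions:

```python
import numpy as np
from scipy import ndimage

def prioritize_segments(img, bit_budget, bits_per_pixel=8):
    """Rank connected edge segments by gradient saliency and keep only
    those that fit within the bit budget (a toy cost model: each edge
    pixel is assumed to cost `bits_per_pixel` bits to encode)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    edges = mag > mag.mean() + mag.std()      # crude edge mask
    labels, n = ndimage.label(edges)          # connected edge segments
    segments = []
    for i in range(1, n + 1):
        mask = labels == i
        saliency = mag[mask].sum()            # total gradient energy
        cost = mask.sum() * bits_per_pixel    # naive coding cost
        segments.append((saliency, cost, i))
    segments.sort(reverse=True)               # most salient first
    kept, used = [], 0
    for saliency, cost, i in segments:        # greedy fill of the budget
        if used + cost <= bit_budget:
            kept.append(i)
            used += cost
    return kept, labels
```

Shrinking `bit_budget` drops the least salient segments first, which is what makes the resulting bit-stream scalable.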

The ever-increasing volumes and resolutions of remote sensing imagery have not only boosted the value of
image-based analysis and visualization in scientific research and commercial sectors, but also introduced new
challenges. Specifically, processing large volumes of newly acquired high-resolution imagery, as well as fusing it
with existing imagery (for correction, update, and visualization), remains a highly subjective and labor-intensive
task that has not been fully automated by existing GIS software tools. This calls for the development of novel
computational algorithms to automate the routine image processing tasks involved in various remote sensing based
applications. In this paper, a suite of efficient and automated computational algorithms has been proposed and
developed to address the aforementioned challenge. The suite includes a segmentation algorithm that automatically
"cleans" (i.e., segments out the valid pixels of) any newly acquired ortho-photo image, automatic feature point
extraction, image alignment by maximization of mutual information, and finally smoothing/feathering of the imagery
edges at the join zone. The proposed algorithms have been implemented and tested using practical large-scale GIS
imagery/data. The experimental results demonstrate the efficiency and effectiveness of the proposed algorithms and the
corresponding capability of fully automated segmentation, registration and fusion, which allows the end-user to bring
together images of heterogeneous resolutions, projections, datums, and sources for analysis and visualization. The potential
benefits of the proposed algorithms include a great reduction in production time, more accurate and reliable results,
and greater user consistency within and across organizations.
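The mutual-information alignment criterion can be illustrated with a tiny brute-force search over integer translations. This is a toy version under our own assumptions (real registration also handles rotation, scale, and subpixel shifts; the function names are ours):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized images,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def best_shift(fixed, moving, search=3):
    """Brute-force the integer translation of `moving` that
    maximizes mutual information against `fixed`."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best
```

Because mutual information peaks when the intensity distributions are statistically most dependent, the criterion tolerates images from different sensors or with different radiometry, which is why it suits heterogeneous GIS imagery.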
Building footprint extraction from GIS imagery/data has been shown to be extremely useful in various urban planning
and modeling applications. Unfortunately, existing methods for creating these footprints are often highly manual and
rely largely on architectural blueprints or skilled modelers. Although there has been considerable research in this area,
most of the resulting algorithms either remain unsuccessful or still require human intervention, making them
infeasible for practical large-scale image processing systems. In this work, we present novel LiDAR and aerial image
processing and fusion algorithms to achieve fully automated and highly accurate extraction of building footprints. The
proposed algorithm starts with initial building footprint extraction from LiDAR point cloud based on an iterative
morphological filtering approach. This initial segmentation result, while indicating locations of buildings with a
reasonable accuracy, may however produce inaccurate building footprints due to the low resolution of the LiDAR data.
As a refinement process, we fuse LiDAR data and the corresponding color aerial imagery to enhance the accuracy of
building footprints. This is achieved by first generating a combined gradient surface and then applying the watershed
algorithm initialized by the LiDAR segmentation to find ridge lines on the surface. The proposed algorithms for
automated building footprint extraction have been implemented and tested using ten overlapping LiDAR and aerial
image datasets, in which more than 300 buildings of various sizes and shapes exist. The experimental results confirm the
efficiency and effectiveness of our fully automated building footprint extraction algorithm.
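The refinement step, seeding a watershed on the aerial image's gradient surface with the LiDAR segmentation, can be sketched with SciPy's `watershed_ift`. This is a simplification under our own assumptions: the marker construction (eroded mask as building seed, dilated complement as background seed) and all names here are ours, not the paper's:

```python
import numpy as np
from scipy import ndimage

def refine_footprint(aerial, lidar_mask):
    """Refine a coarse LiDAR building mask by running a marker-based
    watershed on the aerial image's gradient surface."""
    # Gradient surface from the aerial image (Sobel magnitude, scaled
    # to uint8 as watershed_ift requires).
    a = aerial.astype(float)
    grad = np.hypot(ndimage.sobel(a, axis=0), ndimage.sobel(a, axis=1))
    grad = (255 * grad / max(grad.max(), 1)).astype(np.uint8)

    # Markers: the eroded LiDAR mask seeds the building (label 2);
    # pixels far outside the mask seed the background (label 1);
    # the watershed decides the contested band in between.
    inner = ndimage.binary_erosion(lidar_mask, iterations=2)
    outer = ~ndimage.binary_dilation(lidar_mask, iterations=2)
    markers = np.zeros(aerial.shape, dtype=int)
    markers[outer] = 1
    markers[inner] = 2
    return ndimage.watershed_ift(grad, markers)
```

The watershed floods from both seed regions and meets on the ridge lines of the gradient surface, so the final footprint snaps to the image edges even when the LiDAR mask is coarse.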