Over the last several years, the Naval Research Laboratory has been developing corrosion detection algorithms for assessing coatings condition in tanks and voids on US Navy ships. The corrosion detection algorithm is based on four independent algorithms: two edge detection algorithms, a color algorithm, and a grayscale algorithm. Of these four algorithms, the color algorithm is the key algorithm and to some extent drives overall algorithm performance. The four independent algorithm results are fused with other features to first generate an image-level assessment of coatings damage. The image-level results are next aggregated across a tank or void image set to generate a single coatings damage value for the tank or void being inspected. The color algorithm, the algorithm fusion methodology, and the aggregation algorithm are key to the overall performance of the corrosion detection algorithm. This paper describes modifications made to these three algorithm components to increase the corrosion detection algorithm's overall operating range, to improve its ability to assess low coatings damage, and to improve the accuracy of coatings damage classification at both the individual image and the whole tank level.
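The two-stage fusion and aggregation described above can be sketched as follows. The scores, the weights (with the color algorithm weighted most heavily), the weighted-mean fusion, and the plain-average aggregation are all illustrative assumptions, not the actual NRL implementation:

```python
# Hypothetical sketch of the two-stage scheme: fuse four per-image
# algorithm scores, then aggregate image-level values over a tank.

def fuse_image_scores(scores, weights):
    """Fuse per-image damage scores (percent) from the four independent
    algorithms into a single image-level value via a weighted mean."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def aggregate_tank(image_damages):
    """Aggregate image-level damage values across a tank's image set.
    A plain average is used here purely for illustration."""
    return sum(image_damages) / len(image_damages)

# Example: four algorithm outputs (edge1, edge2, color, grayscale) for
# three images; the color algorithm gets double weight (assumption).
weights = [1.0, 1.0, 2.0, 1.0]
images = [[4.0, 6.0, 5.0, 5.0],
          [10.0, 8.0, 12.0, 9.0],
          [2.0, 1.0, 3.0, 2.0]]
per_image = [fuse_image_scores(s, weights) for s in images]
tank_damage = aggregate_tank(per_image)   # single value for the tank
```

In a real system the aggregation step might additionally weight images by surface coverage or by fusion confidence rather than averaging uniformly.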
In support of the Disparate Sensor Integration (DSI) Program, a number of imaging sensors were fielded to determine the feasibility of using information from these systems to discriminate between chemical and conventional munitions. The camera systems recorded video from 160 training and 100 blind munitions detonation events. Two types of munitions were used: 155 mm conventional rounds and 155 mm chemical simulant rounds. In addition, two different modes of detonation were used with these two classes of munitions: detonation on impact (point detonation) and detonation in the air (airblasts). The cameras fielded included two visible wavelength cameras, a near infrared camera (peak responsivity of approximately 1 μm), a mid-wavelength infrared camera system (3 μm to 5 μm), and a long wavelength infrared camera system (7.5 μm to 13 μm).
Our recent work has involved developing Linguistic-Fuzzy Classifiers for performing munitions detonation classification with the DSI visible and infrared imaging sensor data sets. In this initial work, the classifiers were heuristically developed based on analyses of the training data feature distributions. In these initial classification systems, both the membership functions and the feature weights were developed and tuned by hand. We have recently developed new methodologies to automatically generate membership functions and weights in Linguistic-Fuzzy Classifiers. This paper describes this new methodology and provides an example of its efficacy in separating munitions detonation events into either air or point detonations. This is a critical initial step toward the overall goal of DSI: the classification of detonation events as either chemical or conventional. Further, the detonation mode is important, as it significantly affects the dispersion of agents. The results presented in this paper clearly demonstrate that the automatically developed classifiers perform as well on this classification task as the previously demonstrated, empirically developed classifiers.
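One common way to automate membership function generation is to fit the functions directly to the training feature distributions. The sketch below derives a triangular membership function from sample statistics; the mean ± k·σ placement is an assumption standing in for the paper's actual methodology:

```python
# Illustrative: derive a triangular membership function from a training
# feature sample, centered on the mean with feet at mean +/- k*std.
from statistics import mean, stdev

def triangular_mf(samples, k=3.0):
    """Return a triangular membership function fitted to `samples`."""
    m, s = mean(samples), stdev(samples)
    a, b, c = m - k * s, m, m + k * s   # left foot, peak, right foot
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)    # rising edge
        return (c - x) / (c - b)        # falling edge
    return mu

# Example: membership for a hypothetical "airburst-like" feature
mu_air = triangular_mf([2.0, 2.5, 3.0, 3.5, 4.0])
```

Feature weights could then be derived in a similarly data-driven way, for example from the separation of the class-conditional distributions, though that step is not shown here.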
Over the last several years, the Naval Research Laboratory has developed video based systems for inspecting tanks
(ballast, potable water, fuel, etc.) and other voids on ships. Using these systems, approximately 15 to 30 images of the
coated surfaces of the tank or void being inspected are collected. A corrosion detection algorithm analyzes the
collected imagery. The corrosion detection algorithm output is the percent coatings damage in the tank being inspected.
The corrosion detection algorithm uses four independent algorithms, each of which separately assesses the coatings damage in
each analyzed image. The independent algorithm results from each image are fused with other available information to
develop a single coatings damage value for each of the analyzed images. The damage values for all of the images
analyzed are next aggregated in order to develop a single coatings damage value for the complete tank or void being
inspected. The results from this Corrosion Detection Algorithm have been extensively compared to the results of human
performed inspections over the last two years.
Improved situational awareness is a primary goal for the Objective Force. Knowing where the enemy is and what threats his troops face provides the commander with the information he needs to plan his mission and provide his forces with maximum protection from the variety of threats present on the battlefield.
Sensors play an important role in providing critical information to enhance situational awareness. The sensors used on the battlefield include, among others, seismic sensors, acoustic sensors, and cameras covering different spectral ranges of the electro-magnetic spectrum. These sensors help track enemy movement and serve as part of an intrusion detection system. Characteristically, these sensors are relatively cheap and easy to deploy.
Chemical and biological agent detection is currently relegated to sensors that are specifically designed to detect these agents. Many of these sensors are collocated with the troops. By the time an alarm is sounded, the troops have already been exposed to the agent. In addition, battlefield contaminants frequently interfere with the performance of these sensors and result in false alarms. Since operating in a contaminated environment requires the troops to don protective garments that interfere with their performance, false alarms need to be reduced to an absolute minimum.
The Edgewood Chemical and Biological Center (ECBC) is currently conducting a study to examine the possibility of detecting chemical and biological weapons as soon as they are deployed. For that purpose, we conducted a field test in which the acoustic, seismic, and electro-magnetic signatures of conventional and simulated chemical/biological 155 mm artillery shells were recorded by an array of corresponding sensors. Initial examination of the data shows distinct differences in the signatures of these weapons.
In this paper we provide a detailed description of the test procedures. We describe the various sensors used and the differences in the signatures generated by the conventional and the (simulated) chemical rounds. This paper will be followed by other papers that provide more detailed information gained by the various sensors and describe how fusing the data enhances the reliability of the CB detection process.
Over the last several years, the Naval Research Laboratory has developed video based systems for inspecting tanks (ballast, potable water, fuel, etc.) and other voids on ships. Over the past year, we have extensively utilized the Insertable Stalk Inspection System (ISIS) to perform inspections of shipboard tanks and voids. This system collects between 15 and 30 images of the tank or void being inspected as well as a video archive of the complete inspection process. A corrosion detection algorithm analyzes the collected imagery. The corrosion detection algorithm output is the percent coatings damage in the tank being inspected. The corrosion detection algorithm consists of four independent algorithms, each of which separately assesses the coatings damage in each of the analyzed images. The algorithm results are fused to attain a single coatings damage value for each of the analyzed images. The damage values for each of the images are next aggregated in order to develop a single coatings damage value for the tank being inspected.
This paper concentrates on the methods used to fuse the results from the four independent algorithms that assess corrosion damage at the individual image level as well as the methods used to aggregate the results from multiple images to attain a single coatings damage level. Results from both calibration tests and double blind testing are provided in the paper to demonstrate the advantages of the video inspection systems and the corrosion detection algorithm.
In support of the Disparate Sensor Integration (DSI) Program, a number of imaging sensors were fielded to determine the feasibility of using information from these systems to discriminate between chemical simulant and high explosive munitions. The imaging systems recorded video from 160 training and 100 blind munitions detonation events. Two types of munitions were used: 155 mm high explosive rounds and 155 mm chemical simulant rounds. In addition, two different modes of detonation were used with these two classes of munitions: detonation on impact (point detonation) and detonation prior to impact (airblasts). The imaging sensors fielded included two visible wavelength cameras, a near infrared camera, a mid-wavelength infrared camera system, and a long wavelength infrared camera system.
Our work to date has concentrated on using the data from one of the visible wavelength camera systems and the long wavelength infrared camera system. The results provided in this paper clearly show the potential for discriminating between the two types of munitions and the two detonation modes using these camera data. It is expected that improved classification robustness will be achieved when the camera data described in this paper is combined with results and discriminating features generated from some of the other camera systems as well as the acoustic and seismic sensors also fielded in support of the DSI Program.
The paper provides a brief description of the camera systems and still imagery showing the four classes of explosive events at the same point in the munitions detonation sequence in both the visible and long wavelength infrared camera data. Next, the methods used to identify frames of interest from the overall video sequence are described in detail. This is followed by descriptions of the features extracted from the frames of interest, a description of the system currently used to perform classification with the extracted features, and the results attained on the blind test data set. The work performed to date to fuse information from the visible and long wavelength infrared imaging sensors, including the benefits realized, is then described. The paper concludes with a description of our ongoing work to fuse imaging sensor data.
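A minimal illustration of frame-of-interest identification is thresholding the inter-frame change; the mean-absolute-difference measure and the threshold value below are illustrative assumptions, not the method actually used in the paper:

```python
# Illustrative frame-of-interest detector: flag frames whose brightness
# changes sharply from the previous frame (e.g., a detonation flash).

def frames_of_interest(frames, threshold):
    """Return indices of frames whose mean absolute difference from the
    previous frame exceeds `threshold`. Frames are flat pixel lists."""
    hits = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        mad = sum(abs(a - b) for a, b in zip(cur, prev)) / len(cur)
        if mad > threshold:
            hits.append(i)
    return hits

# Example: tiny 4-pixel "frames"; frame 2 shows a sudden flash.
video = [[10, 10, 10, 10],
         [11, 10, 10, 11],
         [200, 180, 190, 170],
         [12, 11, 10, 12]]
hits = frames_of_interest(video, threshold=50)   # -> [2, 3]
```

Frame 3 is also flagged because the flash decays as abruptly as it appears; a real selector would likely keep only the onset and a fixed window after it.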
Coatings damage in shipboard tanks is presently assessed using Certified Coatings Inspectors. Prior to a coatings inspector entering a tank, the tank must be emptied and certified gas free. These requirements, combined with the limited number of certified coatings inspectors available at shipyards and Naval Bases, significantly increase the cost and the logistical requirements associated with performing shipboard tank inspections. There is additionally significant variation in damage assessments made by different inspectors. To overcome these difficulties, the Naval Research Laboratory has developed two video inspection systems that obviate requirements for both certifying tanks gas free and for emptying the tank prior to performing an inspection. These systems also obviate requirements for inspector presence during tank inspections. The Naval Research Laboratory has also developed an automatic corrosion detection algorithm. The corrosion detection algorithm currently employs two independent algorithms that individually assess the tank coatings damage. The independent damage assessments are then fused to attain a single coatings damage value. In testing performed to date, it has been shown that the corrosion detection algorithm significantly reduces the effect of inspector-to-inspector variability and provides an accurate assessment of tank coatings damage. This in turn makes it significantly easier to prioritize ship maintenance.
KEYWORDS: Corrosion, Inspection, Video, Wavelets, Detection and tracking algorithms, Cameras, Edge detection, Imaging systems, Digital video recorders, Binary data
Over the past several years, the Naval Research Laboratory has been developing video inspection systems for assessing the coatings condition in shipboard ballast tanks. Two prototype systems have been configured and are presently being utilized to perform video inspections of dry and filled ballast tanks. These systems are described in this paper. The large size and low-level lighting associated with this application result in 'noisy' imagery. A wavelet based de-noising method has been developed that removes the noise in the video imagery while maintaining edges important to corrosion detection. Specific examples that demonstrate the efficacy of the de-noising methods are provided. Wavelet edge detection methods are then applied to the de-noised imagery to identify both regions of potential rust and the spatial distribution of rust. Additional methodologies are then utilized for final corrosion classification. The paper provides examples of imagery collected in shipboard ballast tanks and examples of applying the automatic corrosion detection algorithms. These examples demonstrate the algorithms' ability to work with 'noisy' imagery and to ignore objects in the imagery such as ladders and pipes. They also demonstrate the robustness of the developed automatic corrosion detection algorithms.
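The edge-preserving character of wavelet de-noising can be illustrated with a one-level 1-D Haar transform and soft thresholding: small detail coefficients (noise) are shrunk to zero while large ones (edges) survive. The real system operates on 2-D imagery with more capable wavelets, so this is only an assumption-laden sketch:

```python
# One-level 1-D Haar wavelet soft-thresholding de-noiser (illustrative).

def haar_denoise(signal, thresh):
    """Decompose into Haar averages and details, soft-threshold the
    details, then reconstruct. Requires an even-length signal."""
    assert len(signal) % 2 == 0
    approx = [(signal[2*i] + signal[2*i+1]) / 2 for i in range(len(signal)//2)]
    detail = [(signal[2*i] - signal[2*i+1]) / 2 for i in range(len(signal)//2)]
    def soft(d):
        # Soft threshold: zero small coefficients, shrink large ones
        if abs(d) <= thresh:
            return 0.0
        return d - thresh if d > 0 else d + thresh
    detail = [soft(d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

# Small fluctuations (noise) vanish; the large step edge at 50 survives.
noisy = [10.2, 9.8, 10.1, 9.9, 50.0, 50.0, 10.0, 10.0]
smoothed = haar_denoise(noisy, thresh=0.3)
```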
A method for the automatic detection of tanks and other vehicles in infrared imagery will be described. First, regions of interest in the infrared imagery are identified using a novel method that combines histogram specification, applying a fixed grayscale threshold to the image, and performing image labeling on the thresholded image. Features are next extracted from the identified regions of interest. The features are input to a fuzzy inference system. The output of the fuzzy inference system is a target confidence value that is used to classify targets as objects of interest or clutter.
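A toy version of such a fuzzy inference step is shown below. The features (thermal contrast and region size), the ramp membership functions, and the min-as-AND rule are illustrative assumptions, not the fielded system:

```python
# Illustrative fuzzy-inference confidence: one rule mapping two
# region-of-interest features to a target confidence value.

def ramp(x, lo, hi):
    """Piecewise-linear membership: 0 at or below lo, 1 at or above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def target_confidence(contrast, size):
    """Rule: IF contrast is high AND size is vehicle-like THEN target.
    AND is taken as min, a common fuzzy-logic choice."""
    mu_hot = ramp(contrast, 5.0, 20.0)    # thermal contrast, gray levels
    mu_size = ramp(size, 50.0, 200.0)     # region area, pixels
    return min(mu_hot, mu_size)

conf_hi = target_confidence(25.0, 300.0)  # hot, vehicle-sized region
conf_lo = target_confidence(6.0, 300.0)   # weak contrast -> low confidence
```

A final crisp decision would compare the confidence against a threshold to label the region as an object of interest or clutter.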
A robust method of performing information fusion in processing ground penetrating radar (GPR) sensor data for landmine detection will be described. The method involves running multiple automatic target recognition (ATR) algorithms in parallel on the GPR data. The outputs from each of the ATRs are spatially correlated, and a feature set for each potential radar target is automatically generated. The feature set is provided as input to Mamdani-style fuzzy inference systems. The fuzzy inference systems' output is a mine confidence value. The major advantage of this technique is that it provides consistent mine detection performance independent of road type, GPR hardware settings, and ATR setup parameters. This paper will first describe the individual ATRs and the process of spatially correlating target reports and generating a feature set. This will be followed by a description of the fuzzy inference system used for target classification. The paper will conclude with test results from various Fort AP Hill calibration mine lanes.
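The spatial correlation step can be sketched as grouping ATR reports that fall within a gating radius of one another, so that one feature set is built per physical target. The greedy grouping and the radius value below are assumptions for illustration:

```python
# Illustrative spatial correlation of target reports from parallel ATRs.

def correlate_reports(reports, radius):
    """Greedily group (x, y, confidence) reports whose positions fall
    within `radius` of the first report in an existing cluster."""
    clusters = []
    for x, y, conf in reports:
        for c in clusters:
            cx, cy = c[0][0], c[0][1]
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                c.append((x, y, conf))
                break
        else:
            clusters.append([(x, y, conf)])   # start a new cluster
    return clusters

# Three ATRs report near one mine, one ATR reports a second location.
# Per-cluster features (hit count, max confidence, spread, ...) would
# then feed the fuzzy inference system.
reports = [(1.0, 1.0, 0.7), (1.2, 0.9, 0.6), (5.0, 5.0, 0.9), (0.8, 1.1, 0.8)]
clusters = correlate_reports(reports, radius=0.5)   # two clusters
```

A hit count spanning multiple ATRs is itself a useful feature: agreement among independent algorithms is evidence that the report is a mine rather than clutter.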
Algorithms for detecting land mines using the GEO-CENTERS Energy Focusing Ground Penetrating Radar (EFGPR) are presented. Key elements of the system include normalization, down- and cross-track feature extraction, fuzzy set membership based confidence assignment, and false alarm testing via transition and number of hyperbolae features. The system has been implemented in real-time in the GEO-CENTERS Vehicle Mounted Mine Detection System and can be used to perform real-time land mine detection or to analyze data stored to disk. Results are presented on calibration lane data from Aberdeen Proving Grounds, Maryland and the Energetic Materials Research and Testing Center, New Mexico in the summer of 1998.
Researchers at the Idaho National Engineering Laboratory with their industrial CRADA partner GEO-CENTERS demonstrated a fiber optic based VOC sensor at the Army Environmental Center technology demonstration at Dover Air Force Base. The sensor used during the demonstration was a single fiber optic cable coupled to an in situ sensor element contained in a cone penetrometer tip. The sensor's fluorescence response was measured at the surface using an optical breadboard-based instrument. Results from this demonstration showed that the sensor provided semi-quantitative results for total VOCs comparable to the historical values of VOCs. In addition, the demonstration identified several technical challenges for improvement of the sensor. This paper describes the analytical properties of the reversible sensing materials, construction of an improved sensor system, and the planned demonstration of the modified in-situ VOC sensor system. This sensor system is tentatively scheduled for demonstration at the Army Environmental Center's Aberdeen Proving Ground Test site. Improvements to the VOC sensor system include an optical configuration that will correct for soil matrix interferences and multiple sensing substrates to learn whether VOC selectivity can be achieved.
Fiber optic chemical sensors are being developed for on-line monitoring of gases and liquids. The sensors utilize novel porous polymer or glass optical fibers in which selective chemical reagents have been immobilized. These reagents react with the analyte of interest, resulting in a change in the optical properties of the sensor. These sensors (or optrodes) are particularly suited to in-situ detection of atmospheric trace contaminants and dissolved gases and chemicals, as may be required for environmental monitoring. Sensors have been demonstrated for low part-per-billion level detection of aromatic hydrocarbons, hydrazines, and ethylene. Sensors have also been demonstrated for carbon monoxide, ammonia, and humidity. Also relevant to groundwater monitoring is the development of an integrated pH optrode system for the pH range 4 - 8, with additional optrodes for lower pH ranges.