The National Geographic Society has used its first set of 3-D video cameras to produce tapes of deep-sea creatures from depths greater than one mile. We are presently building a second, miniaturized set of cameras to be mounted on ROVs. The tapes are being introduced to the scientific community and used by the Geographic in educational displays. The StereoGraphic system that we employ will display both video and computer CAD material. We are using this second capability to produce 3-D stereo video of a wire-frame model of a shipwreck surveyed in situ with a SHARPS system. We are investigating taking this data and producing a hologram.
Multimirror quartzline lamps are extremely versatile and effective for nonconventional imaging requirements such as high-speed photo and video instrumentation and high-magnification imaging. The lamps' versatility, though, is not limited to conventional environments. Many research experiments and projects require a high-pressure environment. Continuous photographic data acquisition in a high-pressure vessel requires wall penetrations and creates design problems as well as potential failure sites. Underwater photography adds the extra consideration of a liquid. This report expands upon the basic research presented in "Performance of Multimirror Quartzline Lamps in High-Pressure Environments" (NASA TM-83793, Ernie Walker and Howard Slater, 1984). The report provides information to professional industrial, scientific, and technical photographers, as well as research personnel, on the survivability of a lighted multimirror quartzline lamp in a nonconventional high-pressure underwater environment. Test results of lighted ELH 300 W multimirror quartzline lamps under high-pressure conditions are documented, and general information on the lamp's intensity (footcandle output), cone of light coverage, and approximate color temperature is provided. Continuous lighting considerations in liquids are also discussed.
Over the past seven years, the requisite technology for high performance intensified charge coupled device (ICCD) cameras has evolved. The present maturity of this technology provides a tool admirably suited to underwater imaging. This paper analytically addresses the requirements for underwater imaging devices, the present state of the ICCD art, and denotes directions for advanced development.
The last few years have seen significant advances in viewing technology. Solid state sensor devices (CCD, CID, and MOS) and second generation image intensifiers, along with parallel advances in circuit components, have contributed to smaller, more reliable, and better performing viewing systems. Solid state sensors have no geometric distortion or lag, and require less complex drive circuits than their tube-type counterparts. Furthermore, making use of the small, surface-mount components available today, viewing systems can fit into the smallest of available spaces. Sensitivity can be increased significantly by coupling a second generation image intensifier to the input of the solid state sensor. The increased sensitivity is attained with very little penalty in size or power. The volume of a typical 18 mm image intensifier is about 5 cu. in. and the power required is less than 100 mW, but the gain in sensitivity is about four orders of magnitude. This means the necessary lighting requirement can be reduced significantly, and viewing range can be increased. Further increases in viewing range can be obtained with gated image intensifiers and range-gating techniques. A new low-light-level solid state camera is described, embracing some of these advances. It uses a frame transfer CCD coupled to a second generation image intensifier in a potted assembly. Extensive use of surface-mount components and carefully planned circuit partitioning allow it a greater degree of packaging flexibility than traditional designs. There is a general discussion of image intensifiers, solid state sensors, and methods of coupling. Sensitivity, resolution, and underwater application performance are addressed.
Low Light Level (LLL) TV cameras, based on the combination of image intensifier (II) tubes and solid state image sensors (CCDs), add to the extreme performance of electronic tubes the large system flexibility offered by CCD sensors (video signal). Various combinations of image intensifier tubes and CCDs are investigated. First generation II/CCD appears suitable for many applications where the light level is not too low (> 10-5 lux on the photocathode), but very low light levels require second or third generation devices. Electron bombarded CCDs, not commercially available yet, give promising results in laboratory experiments. Their very high gain (1,500 at 10 keV) and its very low fluctuations allow both counting and integrating detection modes. Cooled CCDs exhibit rather poor performance in TV mode, but their high detectivity makes them very suitable for "still picture" imaging. An original application, with improved detectivity, has been carried out in a two-color LLL TV camera. This prototype is based on two second generation II/CCD systems, respectively sensitive to the visible and the near-IR wavelength ranges. It takes advantage of the variation of the spectral reflection coefficient of materials to enhance contrast by image coloration.
The major range limitation of artificially illuminated underwater television viewing systems is backscattered illumination. For more than twenty-five years, experiments have been conducted using range gating to defeat backscatter. Currently available off-the-shelf hardware makes the implementation of range-gated viewing much more practical than it was previously. This paper analyzes the requisite system parameters and describes a developmental system employing a doubled Nd-YAG laser and a gated Intensified Charge Coupled Device (ICCD) camera.
A prototype underwater laser scanner imaging system has been built and tested in a laboratory test tank. Simultaneous tests were conducted with two state-of-the-art commercial underwater television cameras to quantify the performance of the prototype scanner relative to conventional imaging systems. Tests were conducted with the clarity of the water adjusted to match the clarity of clear deep ocean water, beam attenuation coefficient c = 0.10/meter, and the clarity of typical coastal water, c = 0.39/meter. The scanner system used a 40 milliwatt (combined 488 nanometer and 514.5 nanometer) Argon ion laser as the light source and a 2 inch diameter photomultiplier tube (PMT) as the detector. The PMT had an unrestricted field of view. No scan synchronization, range gating, or other advanced techniques were used.
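The beam attenuation coefficients quoted above determine, via the Beer-Lambert law, how much collimated light survives a given path length, which is why the two water conditions differ so sharply in achievable imaging range. A minimal sketch (the 10 m target range is an illustrative assumption, not a figure from the tests):

```python
import math

def beam_transmission(c_per_meter, path_m):
    """Fraction of a collimated beam's power surviving a path of
    length path_m in water with beam attenuation coefficient c,
    per the Beer-Lambert law: T = exp(-c * path)."""
    return math.exp(-c_per_meter * path_m)

# Round-trip path (source to target and back) for a target at 10 m.
for label, c in [("clear deep ocean", 0.10), ("typical coastal", 0.39)]:
    t = beam_transmission(c, 2 * 10.0)  # 20 m total path
    print(f"{label}: c = {c}/m, round-trip transmission = {t:.5f}")
```

Over the same 20 m round trip, coastal water (c = 0.39/m) passes roughly three orders of magnitude less beam power than clear deep ocean water (c = 0.10/m), which illustrates why the comparison was run at both clarities.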
The United States Navy is interested in underwater imaging for obvious reasons: national defense, reconnaissance/surveillance, and research and development. Since the early 1950's, Navy studies in underwater imaging have led to improvements in optic technologies and camera systems, providing valuable documentation tools for the Fleet. Today, a specialized team of qualified underwater photographers continues to place itself in position to see things engineers, designers, and fleet commanders are not able to see, providing an often unique view of our underwater environment.
This paper constitutes an update on our efforts to develop an underwater laser-based imaging system (UWLIS). The work is being performed under contract from the Naval Sea Systems Command Office of Salvage and Diving (NAVSEA/00C) in order to provide instrumentation that will improve the visibility range available to deep-ocean (1500-6000 m) submersible vehicles during ocean-floor search-and-salvage operations. In general, these submersibles are remotely operated vehicles (ROVs) that currently employ high-intensity floodlights and low-light-level TV cameras to produce video images of the seafloor, which are relayed to the mother ship to allow target identification. Often, these floodlight-based systems require that the ROV come within 6 to 10 m of the target in order to positively identify it. This poses both a risk of damaging the vehicle on outcropping seafloor terrain features and an increase in mission cost due to the time lost maneuvering to identify false targets. Given that salvage-operation costs typically range from $1000 to $3000 per hour, a system that would improve the visibility range from 10 to 100 m would save thousands of dollars and greatly increase the probability of success of these missions.
Scattering of light in the ocean may make the application of structured light ranging methods difficult. An analysis is presented which approximates the effect of backscatter on the signal-to-noise ratio at the camera. Theory as well as laboratory observations indicate that a single scanning light stripe can be used to avoid a substantial part of the backscatter.
Although kinematic measurements can be made underwater using traditional electro-mechanical transducers, the additional complexity introduced by an underwater environment can be prohibitive. Further, many activities which occur underwater defy quantification by traditional methods. Optical techniques are another avenue which can be considered. Various optical systems have been devised for quantifying motion, but none has addressed the underwater problem in a general fashion, as a class of problems to be solved. Close-range tracking in three dimensions has been performed in air using the Direct Linear Transformation [1, 2]. However, this algorithm is not valid when the image-forming rays are bent by refraction at an air/water interface. Although a (non-linear) adaptation of the Direct Linear Transformation could be developed, for most purposes the additional complexity of such an algorithm would be prohibitive for automated tracking. This paper describes a simple, physical solution to the refraction problem which has been used for several applications involving automated underwater tracking in three dimensions using a video-based motion analysis system.
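The reason the Direct Linear Transformation breaks down at an air/water interface is Snell's law: rays bend toward the normal on entering the denser medium, so the straight-line projection model the DLT assumes no longer holds. A minimal sketch of the effect (the 30-degree ray is an illustrative assumption, not a case from the paper):

```python
import math

def refracted_angle(theta_air_deg, n_air=1.000, n_water=1.333):
    """Snell's law: n_air * sin(theta_air) = n_water * sin(theta_water).
    Returns the in-water angle (degrees from the interface normal) of a
    ray that meets a flat air/water interface at theta_air_deg."""
    s = (n_air / n_water) * math.sin(math.radians(theta_air_deg))
    return math.degrees(math.asin(s))

# A ray 30 degrees off the port normal in air bends to about 22 degrees
# in water, so a straight-ray (DLT) model misplaces the imaged point.
print(refracted_angle(30.0))
```

The bending grows with off-axis angle, which is why the error cannot be absorbed into a simple linear calibration and motivates a physical rather than algorithmic correction.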
Based on 20 years of experience in underwater imaging, we present a generalized description of the conditions for optical sensors in Swedish waters. An evaluation of different optical techniques for observation and inspection purposes is also performed.
Optical imaging is the preferred sensory modality for underwater robotic activities requiring high resolution at close range, such as station keeping, docking, manipulator control, and object retrieval. Machine vision will play a vital part in the design of next generation autonomous underwater submersibles. This paper describes an effort to demonstrate that real-time vision-based guidance and control of autonomous underwater submersibles is possible with compact, low-power, and vehicle-embeddable hardware. The Naval Ocean Systems Center's EAVE-WEST (Experimental Autonomous Vehicle-West) submersible is being used as the testbed. The vision hardware consists of a PC-bus video frame grabber and an IBM-PC/AT compatible single-board computer, both residing in the artificial intelligence/vision electronics bottle of the submersible. The specific application chosen involves the tracking of underwater buoy cables. Image recognition is performed in two steps. Feature points are identified in the underwater video images using a technique which detects one-dimensional local brightness minima and maxima. Hough transformation is then used to detect the straight line among these feature points. A hierarchical coarse-to-fine processing method is employed which terminates when enough feature points have been identified to allow a reliable fit. The location of the identified cable is then reported to the vehicle controller computer for automatic steering control. The process currently operates successfully with a throughput of approximately 2 frames per second.
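The Hough step described above votes each feature point into a (theta, rho) accumulator and takes the most-voted bin as the cable's line. A minimal sketch of that voting scheme, assuming an illustrative set of feature points rather than real image data (the coarse-to-fine hierarchy is omitted):

```python
import math
from collections import defaultdict

def hough_line(points, theta_steps=180, rho_res=1.0):
    """Vote each (x, y) feature point into (theta, rho) bins using the
    normal form rho = x*cos(theta) + y*sin(theta); return the
    (theta_rad, rho) of the bin with the most votes."""
    acc = defaultdict(int)
    for x, y in points:
        for i in range(theta_steps):
            theta = math.pi * i / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(i, round(rho / rho_res))] += 1
    (i, rho_bin), _ = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * i / theta_steps, rho_bin * rho_res

# A vertical "cable" at x = 5 plus two noise points: the cable's bin
# collects ten votes, so it dominates the accumulator.
pts = [(5, y) for y in range(10)] + [(1, 2), (8, 7)]
theta, rho = hough_line(pts)
print(theta, rho)  # theta near 0 (normal along x-axis), rho near 5
```

The coarse-to-fine refinement the paper mentions would rerun this voting at progressively finer theta and rho resolutions around the winning bin, stopping once enough feature points support a reliable fit.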
Real-time motion analysis would be very useful for autonomous undersea vehicle (AUV) navigation, target tracking, homing, and obstacle avoidance. The perception of motion is well developed in animals from insects to man, providing solutions to similar problems. We have therefore applied a model of the motion analysis subnetwork in the vertebrate retina to visual navigation in the AUV. The model is currently implemented in the C programming language as a discrete-time serial approximation of a continuous-time parallel process. Running on an IBM-PC/AT with digitized video camera images, the system can detect and describe motion in a 16 by 16 receptor field at the rate of 4 updates per second. The system responds accurately with direction and speed information to images moving across the visual field at velocities less than 8 degrees of visual angle per second at signal-to-noise ratios greater than 3. The architecture is parallel and its sparse connections do not require long-term modifications. The model is thus appropriate for implementation in VLSI optoelectronics.
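Biological motion subnetworks of the kind described above are commonly modeled with delay-and-correlate units (Reichardt correlators), in which each receptor's delayed signal is correlated with its neighbor's current signal and the two mirror-symmetric subunits are subtracted to sign the direction. The sketch below is a generic one-dimensional correlator of that type, not the paper's retinal model; the stimulus is an illustrative moving edge:

```python
def emd_response(frames, delay=1):
    """Elementary motion detector (Reichardt correlator) over a 1-D
    row of receptors: correlates each receptor's delayed output with
    its right neighbor's current output, minus the mirror subunit.
    Positive total = net rightward motion; negative = leftward."""
    out = 0.0
    for t in range(delay, len(frames)):
        prev, cur = frames[t - delay], frames[t]
        for i in range(len(cur) - 1):
            out += prev[i] * cur[i + 1] - prev[i + 1] * cur[i]
    return out

# A bright point stepping rightward across 8 receptors, one per frame.
rightward = [[1 if i == t else 0 for i in range(8)] for t in range(8)]
print(emd_response(rightward))              # positive: rightward
print(emd_response(rightward[::-1]))        # negative: leftward
```

Because each unit only connects neighboring receptors with fixed weights, the wiring is sparse and static, which is consistent with the paper's argument for VLSI optoelectronic implementation.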
The ability to "see" underwater is critical to underwater navigation, target detection, target identification, tracking, and particularly protection of valuable assets against potential saboteurs. From the available open literature, it appears that underwater imaging development peaked in the 1960's and early 1970's. Minimal new development has been reported in recent years. Similarly, acoustic lens development peaked at the same time, and little or no new information has been published. In this report, the author reviews the historical development of acoustic lenses and addresses the development of medical acoustic imaging and other advances which may be utilized by underwater imaging systems.