Monocentric lenses enable high-resolution panoramic cameras in which imaging fiber bundles transport the hemispherical image surface to conventional focal planes. Refraction at the curved image surface limits the field of view coupled through a single bundle of straight fibers to less than ±34°, even for NA 1 fibers. We have previously demonstrated a nearly continuous 128° field of view using a single lens and multiple adjacent straight-fiber-coupled image sensors, but this imposes the mechanical complexity of fiber bundle shaping and integration. A 3D waveguide structure with internally curved optical fiber pathways, however, can couple the full continuous field of view onto a single focal plane. Here we demonstrate wide-field imaging using a monocentric lens and a single curved fiber bundle, showing that a 3D bundle formed from a tapered fiber bundle can relay a 128° field of view from a curved input face to a planar output face. We show numerically that the efficiency of coupling light into the tapered bundle at different field angles depends on the taper ratio of the bundle as well as on the center of curvature chosen when polishing the fiber bundle facet. We characterize a tapered fiber bundle by measuring its angle-dependent impulse response, its transmission efficiency, and the divergence angle of the light emerging from the output end of the fiber.
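The ±34° limit quoted above is consistent with a simple acceptance-angle estimate: in a monocentric lens the chief rays travel radially, so they enter the spherically polished facet of a straight fiber near normal incidence and then propagate inside the core at the field angle relative to the fiber axis; guiding then requires the field angle to stay below arcsin(NA/n_core). A minimal sketch of this estimate (the core index n_core ≈ 1.81, typical of high-NA bundle glass, is an assumed value, not from the abstract):

```python
import math

def max_field_angle_deg(NA, n_core):
    """Internal acceptance half-angle of a straight fiber, in degrees.

    Assumes the chief ray enters the spherically polished facet at
    normal incidence, so guiding requires sin(theta) <= NA / n_core.
    """
    return math.degrees(math.asin(NA / n_core))

# Even for NA = 1 fibers, an assumed high-index core (n ~ 1.81)
# limits the coupled half-field to roughly 33.5 degrees:
print(round(max_field_angle_deg(1.0, 1.81), 1))
```

This reproduces the order of the <±34° figure; the exact limit depends on the actual core and clad indices of the bundle.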
High-resolution, wide-field-of-view, large-depth-of-focus imaging systems are greatly desired and have received much attention from researchers seeking to extend the capabilities of cameras. Monocentric lenses outperform other wide-field-of-view lenses, with the drawback that they form a hemispherical image surface incompatible with current sensor technology. Fiber-optic bundles can relay the image the lens produces onto the sensor's planar surface, which requires image processing to correct for artifacts inherent to fiber bundle image transfer. Using a prototype fiber-coupled monocentric lens imager, we capture single-exposure focal-swept images from which we seek to produce extended-depth-of-focus images. Point spread functions (PSFs) were measured in the lab and found to be both angle and depth dependent, so the inverse problem must be treated as spatially varying. This synthesis of information allowed us to establish a framework for mitigating fiber bundle artifacts and extending the depth of focus of the imaging system.
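The abstract does not specify the deconvolution framework, but the consequence of an angle- and depth-dependent PSF can be illustrated with a patch-wise Wiener deconvolution (a stand-in technique, not necessarily the authors' method): each image patch is inverted with the PSF measured for its own field angle and depth, rather than with one global kernel.

```python
import numpy as np

def wiener_deconvolve(patch, psf, nsr=1e-2):
    """Deconvolve one image patch with the PSF measured for that
    field angle and depth, using a frequency-domain Wiener filter.

    nsr is the assumed noise-to-signal power ratio (regularizer).
    """
    H = np.fft.fft2(psf, s=patch.shape)       # zero-pad PSF to patch size
    G = np.fft.fft2(patch)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener inverse filter
    return np.real(np.fft.ifft2(W * G))

# Because the PSF varies spatially, each patch is paired with its own
# locally measured PSF instead of a single global deconvolution kernel.
```

In practice the measured PSF grid would be interpolated between patches to avoid blocking artifacts; that bookkeeping is omitted here.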
In multiscale imagers a single objective lens is shared by multiple secondary optical systems, so that a high-resolution
wide-angle image is acquired in overlapping fields sensed by multiple conventional focal planes. In the “AWARE2” 2
Gigapixel imager, F/2.4 optics cover a 120 degree field of view using a monocentric glass primary lens shared by 221
molded plastic subimagers, each with a 14 Megapixel focal plane. Such imagers can independently focus parts of the
image field, allowing wide-angle imaging over relatively close and deep image fields. However, providing hundreds of
independent mechanical focus adjustments has a significant system impact in terms of complexity, bulk, and cost. In this
paper we explore the use of an electronically controlled liquid crystal lens to focus multiscale imagers in general, and
demonstrate its use with the AWARE2 imager optics. The Lens Vector Auto Focus (LVAF) liquid crystal lens provides up
to 5 diopters of optical power over a 2.2 mm aperture diameter, the maximum currently available aperture. However, a
custom lens using the same materials and basic structure can provide the 5 diopters power and 6.4 mm aperture required
to obtain full-resolution overlapping image fields in the AWARE2 imager. We characterize the LVAF lens and its
optical performance in the current AWARE2 prototype, comparing the measured and optically
modeled resolution, and demonstrating software control of focus from infinity to a 2 m object distance.
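The focus range quoted above can be related to the tuning range by a thin-lens estimate: refocusing a camera from infinity to an object at distance d requires an added power of 1/d diopters in object space (0.5 D for 2 m), and a lens placed at an internal conjugate needs that power scaled by roughly the square of the angular magnification to that plane. The magnification scaling below is a simplified illustration, not the AWARE2 design values:

```python
def added_power_diopters(object_distance_m):
    """Thin-lens estimate of the power change needed to refocus a
    camera from infinity to an object at the given distance."""
    return 1.0 / object_distance_m

def internal_power_diopters(object_distance_m, angular_mag):
    """Power required at an internal plane reached through a relay of
    angular magnification `angular_mag`; vergence scales as the square
    of the angular magnification (a simplified assumption here)."""
    return added_power_diopters(object_distance_m) * angular_mag ** 2

# Refocusing from infinity to a 2 m object is 0.5 D in object space;
# an internal relay raises the tuning range the LC lens must supply.
print(added_power_diopters(2.0))  # 0.5
```

This is why a modest object-space refocus can demand several diopters from a small tunable element inside the subimager optics.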