We present the concept, realization, and prototype demonstration of a compact array camera suitable for automotive applications. The multi-aperture optics consists of four similar lens objectives and a faceted mirror array that selects the specific viewing direction of each optical channel (field-of-view segmentation). This partly decouples the trade-off between camera thickness and focal length, yielding a very compact module. In addition to this new design, a hybrid optics technology, combining spherical glass with aspherical wafer-level-optics lens elements, is applied in order to achieve the thermal stability needed for the automotive application.
Current mobile devices such as smartphones are expected to contain high-resolution cameras that pack a multitude of features into ever slimmer form factors. These cameras are expected to have a wide field of view, image stabilization, fast autofocus, and a sense of depth to enable depth-of-field effects or gesture recognition. However, physical laws limit the straightforward miniaturization of these cameras.

A folded multi-aperture design with a segmented field of view and laterally trimmed optics promises to deliver the same performance and features in a package that is half the height of a traditional optics module. Only mature technologies such as injection-molded lenses and voice coil motors are necessary to realize this system.

In this work, we present results from our first technological demonstration units. We show that trimming the typical front-stop lenses of smartphone camera modules only reduces their field of view, without compromising optical performance in the central field. We discuss how mirror placement affects the behaviour of the system in unexpected ways. Finally, we follow the process of calibrating the demonstration units and outline the challenges in processing the channel images to generate a seamless complete image for all scenes.
Intelligent light management systems are intended to detect the presence of people in order to enable customized lighting control. State-of-the-art systems are based on passive infrared sensors or conventional surveillance cameras. However, they either require recurring motion within the detected field of view or are not well accepted by consumers.

We present the optical design of a highly miniaturized wide-angle camera, which acquires a large field of view of 110° and offers an ultra-thin appearance with a z-height of less than 1 mm. This combination is achieved by means of multi-aperture imaging. The implemented demonstrator contains a double-sided refractive freeform array combined with a baffle array, which prevents optical crosstalk between the multiple optical channels. Both components are aligned in front of a CMOS sensor.
The vast majority of cameras and imaging sensors rely on the same single-aperture optics principle, with the human eye as the natural archetype. Multi-aperture approaches (called compound eyes in natural systems and often referred to as array cameras in technology) have advantages in terms of miniaturization, simplicity of the optics, and additional features such as depth information and refocusing enabled by computational manipulation of the system's raw image data. The proposed imaging principle is based on a multitude of imaging channels, each transmitting a different part of the entire field of view. Adapted image processing algorithms generate the overall image by stitching the images of the different channels. Restricting each channel's field of view leads to a less complex optical system, targeting reduced fabrication cost. Due to a novel, linear morphology of the array camera setup, depth mapping with improved resolution can be achieved. We introduce a novel concept for miniaturized array cameras with several-megapixel resolution targeting high-volume applications in mobile and automotive imaging with improved depth mapping, and explain design and fabrication aspects.
In this contribution, a microoptical imaging system inspired by the insect compound eye is demonstrated. The array camera module achieves HD resolution with a z-height of 2.0 mm, which is about 50% of the height of traditional cameras with comparable parameters. The FOV is segmented by multiple optical channels imaging in parallel. The partial images are stitched together by image processing software to form a final image of the whole FOV. The system is able to acquire depth maps along with the 2D video, and it includes light field imaging features such as software refocusing. The microlens arrays are realized by microoptical technologies on wafer level, which are suitable for potential fabrication in high volume.
Modern applications in biomedical imaging, machine vision and security engineering require close-up optical systems with high resolution. Combined with the need for miniaturization and fast image acquisition of extended object fields, the design and fabrication of such devices is extremely challenging. Standard commercial imaging solutions rely on bulky setups or depend on scanning techniques in order to meet the stringent requirements. Recently, our group has proposed a novel multi-aperture approach based on parallel image transfer in order to overcome these constraints. It exploits state-of-the-art microoptical manufacturing techniques on wafer level in order to create a compact, cost-effective system with a large field of view. However, initial prototypes have so far been subject to various limitations regarding their manufacturing, reliability and applicability. In this work, we demonstrate the optical design and fabrication of an advanced system which overcomes these restrictions. In particular, a revised optical design facilitates a more efficient and economical fabrication process and inherently improves system reliability. An additional customized front-side illumination module provides homogeneous white-light illumination over the entire field of view while maintaining a high degree of compactness. Moreover, the complete imaging assembly is mounted on a positioning system. In combination with an extended working range, this allows for adjustment of the system's focus location. The final optical design is capable of capturing an object field of 36 x 24 mm² with a resolution of 150 cycles/mm. Finally, we present experimental results of the respective prototype that demonstrate its enhanced capabilities.
Adding an array of microlenses in front of the sensor transforms a conventional camera into one that captures both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial refocusing of photographs. Without the need for active illumination, it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. Moreover, the gain in angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of the possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.
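The spatial-angular trade-off described in the abstract above can be made concrete with a toy calculation. The numbers below are purely illustrative, not the prototype's parameters; a minimal sketch, assuming each microlens devotes a fixed number of sensor pixels per axis to angular sampling:

```python
def plenoptic_resolution(sensor_px: int, angular_samples: int) -> int:
    """Effective spatial samples per axis when `angular_samples`
    pixels under each microlens are spent on angular information."""
    return sensor_px // angular_samples

# A 4000-pixel-wide sensor with 8 angular samples per microlens
# retains only 500 spatial samples per axis.
print(plenoptic_resolution(4000, 8))  # -> 500
```

Increasing the angular sampling improves depth resolution but divides the spatial resolution accordingly, which is exactly the design tension the abstract describes.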
Assembly of miniaturized high-resolution cameras is typically carried out by active alignment. The sensor image is constantly monitored while the lens stack is adjusted. When sharpness is acceptable in all regions of the image, the lens position over the sensor is fixed. For multi-aperture cameras, this approach is not sufficient. During prototyping, it is beneficial to see the complete reconstructed image, assembled from all optical channels. However, typical reconstruction algorithms are high-quality offline methods that require calibration. As the geometric setup of the camera repeatedly changes during assembly, this would require frequent re-calibration. We present a real-time algorithm for an interactive preview of the reconstructed image during camera alignment. With this algorithm, systematic alignment errors can be tracked and corrected during assembly. Known imperfections of optical components can also be included in the reconstruction. Finally, the algorithm easily maps to very simple GPU operations, making it ideal for applications in mobile devices where power consumption is critical.
We propose a microoptical approach to ultra-compact optics for real-time vision systems that are inspired by the compound eyes of insects. The demonstrated module achieves 720p resolution with a total track length of 2.0 mm which is about 1.5 times shorter than comparable conventional miniaturized optics. The partial images that are separately recorded in multiple optical channels are stitched together to form a final image of the whole FOV by means of image processing. The microlens arrays are realized by microoptical fabrication techniques on wafer-level which are suitable for a potential application in high volume e.g. for consumer electronic products.
Natural compound eyes consist of a large number of ommatidia that are arranged on curved surfaces and are thus able to detect signals from a wide field of view. We present an integrated artificial compound-eye sensor system with an enhanced field of view of 180° × 60° due to the introduction of curvature. The system is based on an array of adaptive logarithmic wide-dynamic-range photoreceptors for optical flow detection and compound-eye optics for increasing sensitivity and expanding the field of view. Its assembly is mainly carried out in planar geometry on a flexible printed circuit board. The separation into smaller ommatidia blocks by dicing enables flexibility and finally allows mounting on curved surfaces. The signal processing electronics of the presented system is placed together with further sensors in the concavity of the photoreceptor array and facilitates optical flow computation for navigation purposes.
There is a huge demand for miniaturized cameras in the field of mobile consumer electronics. These cameras are currently based on miniaturized single-aperture optics. In order to further decrease the thickness of miniaturized camera systems, a multichannel imaging principle needs to be used. These artificial compound-eye cameras permit a further decrease in thickness by a factor of two in comparison to miniaturized single-aperture optics with the same resolution and pixel size. Their fabrication process is currently based on the reflow of photoresist. Due to physical limitations of this technique, only spherical and ellipsoidal surface profiles of the single lenslets are achievable. Consequently, the potential for correcting optical aberrations is restricted, leading to limited image quality and resolution. This can be improved significantly by the use of refractive freeform arrays. Due to the non-symmetrical and aspherical surface shapes of the single lenslets, fabrication by the reflow of photoresist is no longer possible. Therefore, we propose an approach for the fabrication of these structures based on the combination of an ultra-precision machining process with a microimprinting approach.
Artificial compound eye cameras are a prominent approach of next generation wafer level cameras for consumer
electronics due to their lower z-height compared to conventional single aperture objectives. In order to address low cost
and high volume markets, their fabrication is based on a wafer level UV-replication process. The image quality of
compound eye cameras can be increased significantly by the use of refractive freeform arrays (RFFA) instead of
conventional microlens arrays. Therefore, we present the fabrication of an RFFA wafer-level molding tool by a step and
repeat process for the first time. The surface qualities of the fabricated structures were characterized with a white light
Image sensors for digital cameras are built with ever decreasing pixel sizes. The size of the pixels seems to be
limited by technology only. However, there are also hard theoretical limits for classical miniature camera systems:
During a certain exposure time only a certain number of photons will reach the sensor. The resulting shot noise
thus limits the signal-to-noise ratio. On the other hand, diffraction sets another limit for image resolution,
provided there is enough brightness in the scene. In this work we show that current sensors are already surprisingly
close to these limits.
Image sensors for digital cameras are built with ever decreasing pixel sizes. The size of the pixels seems to be limited by technology only. However, there is also a hard theoretical limit for classical video camera systems: During a certain exposure time only a certain number of photons will reach the sensor. The resulting shot noise thus limits the signal-to-noise ratio. In this letter we show that current sensors are already surprisingly close to this limit.
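The shot-noise limit invoked in the two abstracts above follows directly from Poisson statistics: for an ideal pixel collecting N photons, the achievable signal-to-noise ratio is sqrt(N). A minimal sketch with illustrative photon counts:

```python
import math

def shot_noise_snr(photons: float) -> float:
    """Photon arrivals are Poisson-distributed, so the standard
    deviation is sqrt(N) and the best possible SNR is N/sqrt(N)."""
    return math.sqrt(photons)

# Illustrative: a small pixel collecting 1000 photons per exposure
# cannot exceed an SNR of ~31.6, regardless of sensor technology.
print(round(shot_noise_snr(1000), 1))  # -> 31.6
```

Shrinking the pixel area reduces the photon count per exposure proportionally, so the SNR ceiling falls with the square root of the pixel area, independently of readout technology.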
For small camera modules in consumer applications, such as mobile phones or webcams, size and cost are
important constraints. An autofocus system increases both size and cost and can degrade optical performance
by misalignment. Therefore, a monolithic optical system with a fixed focus is preferable for these applications.
On the other hand, the optical system of the camera has to exhibit a very large depth of field, as it is expected
to deliver sharp images for all typical working distances. The depth of field of a camera system can be increased
by using a larger F-number, but this is undesirable due to light sensitivity considerations. Alternatively, it
can also be increased by reducing the focal length.
Multi-aperture systems use multiple optical channels, each of them with a smaller focal length than a comparable
single-aperture system. Accordingly, each of the channels has a large depth of field. However, as the
channels are displaced laterally, parallax becomes noticeable for close objects. Therefore, the channel images
have to be shifted accordingly when recombining them into a complete image.
We demonstrate an algorithm that compensates for parallax as well as chromatic aberration and geometric
distortion. We present a very flat camera system without moving parts that is capable of taking photos and
video at a wide range of distances. On the demonstration system, object distance can be adjusted in real time
from 4 mm to infinity. The focus position can be selected during capture or after the images were taken.
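The parallax that must be compensated when recombining the channel images follows the usual stereo relation d = f·B/z, scaled to pixels by the pixel pitch. A sketch with illustrative values, not the demonstration system's actual parameters:

```python
def parallax_shift_px(focal_mm: float, baseline_mm: float,
                      distance_mm: float, pixel_pitch_mm: float) -> float:
    """Lateral image shift in pixels between two channels separated
    by `baseline_mm`, for an object at `distance_mm`."""
    return focal_mm * baseline_mm / (distance_mm * pixel_pitch_mm)

# Illustrative: f = 1.2 mm, baseline = 2 mm, 3 um pixels,
# object at 100 mm distance.
print(round(parallax_shift_px(1.2, 2.0, 100.0, 0.003), 1))  # -> 8.0
```

The shift falls off as 1/z, so distant objects need essentially no correction, while close objects require channel-dependent shifts before stitching.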
Wafer-level optics is considered to yield imaging lenses for cameras with the smallest possible form factor. The high accuracy of the applied microsystem technologies and the parallel fabrication of thousands of modules on the wafer level make it a hot topic for high-volume applications with respect to quality and costs. However, the adaptation of existing materials and technologies from microoptics to the manufacturing of millimeter-scale lens diameters has led to yield problems due to material shrinkage and z-height accuracy. A multi-aperture approach to real-time vision systems is proposed that overcomes these issues because it relies on microlens arrays. The demonstrated prototype achieves VGA (Video Graphics Array, 640×480 pixels) resolution with a thickness of 1.4 mm, which is a thickness reduction of 50% compared to single-aperture equivalents. The partial images that are separately recorded in different channels are stitched together to form a final image of the whole field of view by means of image processing. Distortion is corrected within the processing chain. The microlens arrays are realized by state-of-the-art micro-optical fabrication techniques on wafer level that are suitable for a potential application in high volume, e.g., for consumer electronic products.
Micro-optical systems that utilize multiple channels for imaging instead of a single one are frequently discussed for ultra-compact applications such as digital cameras. Their fabrication strategies differ according to the underlying concept of image formation. Illustrated by recently implemented systems for multi-aperture imaging, typical steps of wafer-level fabrication are discussed in detail. In turn, the progress made may allow for additional degrees of freedom in optical design. Pressing ahead with very short overall lengths and multiple diaphragm array layers results in the use of extremely thin glass substrates, down to 100 microns in thickness. The desire for a wide field of view has led to chirped arrays of microlenses and diaphragms. Focusing on imaging quality, aberrations were corrected by introducing toroidal lenslets and elliptical apertures. Such lenslets were generated by thermal reflow of lithographically patterned photoresist and subsequent molding. Where useful, the system's performance can be further increased by applying aspheric microlenses from reactive ion etching (RIE) transfer or achromatic doublets from superimposing two moldings with different polymers. Multiple diaphragm arrays prevent channel crosstalk, but simple metal layers may lead to multiple reflections and an increased appearance of ghost images. A way out are low-reflectance black-matrix polymers that can be directly patterned by lithography. However, where environmental stability and high resolution are required, organic coatings should be replaced by patterned metal coatings with matched antireflective layers, such as the prominent black chromium. The mentioned components give an insight into the fabrication process of multi-aperture imaging systems. Ultimately, the quality of each step determines the overall image quality.
Multi-aperture imaging systems inspired by insect compound eyes promise advances in both miniaturization and
cost reduction of digital camera systems. Instead of a single lens stack with size and sag in the order of a few
millimeters, the optical system consists of an array of microlenses. At a given field of view of the complete
system, the focal lengths of the microlenses are a fraction of the focal length of a single-aperture system, reducing
track length and increasing depth of field significantly. As each microimage spans only a small field of view, the
optical systems can be simple. Because the microlenses have a diameter of hundreds of microns and a sag of tens
of microns, they can be manufactured cost-effectively on wafer scale and with high precision. However, reaching
a sufficient resolution for applications such as camera phones has been a challenge so far.
We demonstrate a multi-aperture color camera system with approximately VGA resolution (700x550 pixels)
and a remarkably short track length of 1.4 mm. The algorithm for correcting optical distortion of the microlenses
and combining the microimages into a single image is the focus of this presentation.
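The first-order scaling behind the short track length described above can be sketched numerically. The numbers are illustrative and do not describe the demonstrated design:

```python
def channel_focal_length(single_f_mm: float, segments_per_axis: int) -> float:
    """Splitting the FOV among `segments_per_axis` channels per axis
    lets each channel cover its segment with a correspondingly
    shorter focal length (to first order)."""
    return single_f_mm / segments_per_axis

# Illustrative: replacing a 4 mm single-aperture lens with a 3x3
# channel array shortens the per-channel focal length to ~1.33 mm.
print(round(channel_focal_length(4.0, 3), 2))  # -> 1.33
```

The shorter per-channel focal length is what simultaneously reduces the track length and increases the depth of field of each channel.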
We propose a microoptical approach to ultra-compact optics for real-time vision systems that are inspired by the
compound eyes of insects. The demonstrated module achieves about VGA resolution with a total track length
of 1.4 mm, which is about two times shorter than comparable single-aperture optics. The partial images that
are separately recorded in different optical channels are stitched together to form a final image of the whole field
of view by means of image processing. A software correction is applied to each partial image so that the final
image is made free of distortion. The microlens arrays are realized by state-of-the-art microoptical fabrication
techniques on wafer level, which are suitable for a potential application in high volume, e.g. for consumer electronic products.
Cameras have become an integral part of information and communication technology, and there is a clear trend toward smaller and, at the same time, cheaper cameras. Because single-aperture optics reach a miniaturization limit if the space-bandwidth product and a wide field of view are to be maintained, new approaches such as multi-aperture optical systems are needed. In the proposed camera system, the image is formed by many different channels, each consisting of four microlenses arranged one after another in different microlens arrays. Each channel forms a partial image that fits together with its neighbours, so that a real, erect image is generated and a conventional image sensor can be used. The microoptical fabrication process and the assembly are well established and can be carried out on wafer level. Laser writing is used for the fabrication of the masks; UV lithography, a reflow process, and UV molding are used for the fabrication of the apertures and the lenses. The developed system is very small in terms of both length and lateral dimensions and has VGA resolution and a diagonal field of view of 65 degrees. This microoptical vision system is well suited for integration into electronic devices such as webcams in notebook displays.
Although several applications in machine vision and biomedical imaging call for close-up imaging of extended
object fields, only a few, mostly bulky, solutions exist. We demonstrate the optical design and realization of an
ultra-compact close-up imaging system with unity magnification. It uses a multi-aperture approach in order to
shorten its total track length to less than 4 mm while achieving a large field of view. The system is made of a stack
of several two-dimensional arrays of refractive microlenses. The potential of this setup lies in the combination of
digital imaging with microoptical fabrication techniques leading to thin optical components which can be directly
attached to an image sensor. Hence, these systems fit into tight spaces and they achieve a high resolution without
The integration of camera modules in portable devices is increasing rapidly. At the same time, their size is shrinking due to the need for mobility and cost reduction. For this purpose, an ultra-compact imaging system has been realized that adapts the multichannel imaging principle of superposition compound eyes known from nocturnal insects. The system forms an erect image by using a pair of microlens arrays with slightly different pitches, an arrangement also known as a "Gabor superlens". The microoptical design was optimized using numerical ray tracing with respect to the capabilities of state-of-the-art microoptics fabrication technology. Additional aperture/diaphragm layers and a field lens array had to be introduced in order to avoid channel crosstalk. As a result, the optical performance is comparable to that of miniaturized conventional lens modules. However, the fabrication of the microoptical Gabor superlens remains simple and scalable in terms of wafer-level technology due to the use of microlens arrays with low sag heights and small
Multi-channel imaging systems have been studied increasingly and approached from various directions in the academic domain due to their promise of a large field of view at small system thickness. However, specific drawbacks of each solution have so far prevented their diffusion into the corresponding markets. The most severe problems are low image resolution and low sensitivity compared to a conventional single-aperture lens, besides the lack of a cost-efficient method of fabrication and assembly. We propose a microoptical approach to ultra-compact optics for real-time vision systems inspired by the compound eyes of insects. The demonstrated modules achieve VGA resolution with 700x550 pixels within an optical package of 6.8 mm x 5.2 mm and a total track length of 1.4 mm. The partial images that are separately recorded within different optical channels are stitched together to form a final image of the whole field of view by means of image processing. These software tools correct the distortion of the individual partial images so that the final image is also free of distortion. The so-called electronic cluster eyes are realized by state-of-the-art microoptical fabrication techniques and offer a resolution and sensitivity potential that makes them suitable for consumer, machine vision and medical imaging.
An artificial compound-eye imaging system has been developed consisting of a planar microlens array positioned on a spacing structure and coupled to a commercial CMOS optoelectronic detector array of different pitch, providing different viewing directions for the individual optical channels. Each microlens corresponds to one channel, which can be related to one or more pixels due to the different fill factors of the microlens array and the image sensor. Alignment problems resulting from matching the microlens focal spots to the pixels during assembly, as well as possible residual rotation between the artificial compound-eye objective and the pixel matrix, are also considered. We have written a program that automatically selects the illuminated pixels of the sensor corresponding to each channel in order to form the final image. This calibration method is based on intensity criteria in addition to the geometric disposition of the microlens array. An image capture program that uses only the channels selected by the calibration is also presented. This program additionally implements image post-processing methods adapted to the microoptical compound-eye sensor. They are applied to the captured images in real time and increase the contrast of the captured images. One of the methods used is the Wiener filter, which is computed by taking into account an approximation of the multichannel imaging process of microoptical compound-eye sensors. Experimental results are presented that show a noticeable increase in the frequency response when the Wiener filter is used, partially compensating for the characteristically low spatial resolution of artificial compound eyes.
Natural compound eyes combine a small eye volume with a large field of view (FOV) at the cost of comparatively low spatial resolution. Based on these principles, an artificial apposition compound-eye imaging system has been developed. In this system the total FOV is given by the number of channels along one axis multiplied by the sampling angle between channels. In order to increase the image resolution for a fixed FOV, the sampling angle is made small. However, depending on the size of the acceptance angle, the FOVs of adjacent channels overlap, which causes a reduction of contrast in the overall image. In this work we study the feasibility of using digital post-processing methods for images obtained with a thin compound-eye camera to overcome this reduction in contrast. We chose the Wiener filter for the post-processing and carried out simulations and experimental measurements to verify its use.
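The Wiener post-processing mentioned in the two abstracts above can be sketched as a frequency-domain deconvolution. This is a generic sketch (assuming NumPy and a known channel point spread function), not the authors' exact implementation; `nsr` is the assumed noise-to-signal power ratio:

```python
import numpy as np

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray,
                      nsr: float = 0.01) -> np.ndarray:
    """Apply the Wiener filter W = H* / (|H|^2 + NSR) in the
    frequency domain, where H is the transfer function of the PSF."""
    H = np.fft.fft2(psf, s=blurred.shape)   # OTF of the channel
    G = np.fft.fft2(blurred)                # spectrum of the image
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

For small `nsr` the filter approaches the inverse filter and restores contrast at frequencies where |H| is large, while the NSR term keeps noise from being amplified where |H| is small, which matches the contrast-restoration role described above.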
Based on previously developed ultra-thin compound-eye sensors, we propose three new setups for compensating apparent drawbacks of artificial apposition compound eyes. In detail, either color vision, increased sensitivity, or a system with decreased sensor format is demonstrated by integrating multiple light-sensitive pixels within the footprint of each microlens of this multi-channel configuration. The optical setup is designed such that a group of pixels in each channel achieves either parallel imaging of each individual object point or constant sampling of the FOV. To read out the overall image, the different pixels have to be superimposed or stitched digitally. The amount of information gathered in each channel is increased, while no resolution is lost compared to a standard artificial apposition compound eye. The optical design, fabrication, and experimental verification of the superposition-type system are discussed in detail.
Inspired by the natural phenomenon of hyperacuity, a novel approach has been analyzed that provides access to highly accurate position information with an artificial apposition compound eye even though the number of image pixels is small. This is achieved by oversampling the object space via the overlapping fields of view of adjacent optical channels. The first approach uses knowledge of the impulse response function, derived from linear system theory, to calculate the position of objects such as point sources and edges from the optical powers measured in adjacent channels. With it, precise position detection with an accuracy increase of up to 50 times compared to the conventional image resolution is demonstrated. The second approach works in a more general way because it is independent of the exact imaging model. With its help, the accuracy of edge position detection was increased by a reproducible factor of 25. As presented here, the second approach also enables the measurement of a width with sub-pixel accuracy.
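As a simple stand-in for the model-based estimators above, the basic hyperacuity idea of recovering sub-pixel position from the graded responses of overlapping channels can be sketched with a power-weighted centroid. The channel angles and measured powers below are illustrative, and the centroid is deliberately simpler than the impulse-response-based method of the abstract:

```python
def subpixel_position(powers, channel_angles):
    """Estimate a point source's angular position as the
    power-weighted centroid of overlapping channel responses."""
    total = sum(powers)
    return sum(p * a for p, a in zip(powers, channel_angles)) / total

# Three channels aimed at -1, 0 and +1 degrees with overlapping
# fields of view; the source sits slightly right of center.
print(subpixel_position([0.2, 1.0, 0.4], [-1.0, 0.0, 1.0]))  # -> 0.125
```

Even this crude estimator localizes the source to a fraction of the channel spacing, illustrating how overlapping FOVs turn analog power ratios into position estimates far finer than the pixel grid.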
The visual revolution triggered by the commercial application of digital image-capturing devices generates the need for new miniaturized and cheap optical imaging systems and cameras. In imaging, however, we observe a permanent miniaturization of elements while essentially the same optical principles are applied that have been known to optical designers for many decades. With the newly gained spectrum of technological capabilities in micro-optics, such as photolithography, it is time to exploit completely new imaging principles, for instance microlens array imaging. In this paper we present an overview of our latest developments on: the technology and image processing of the artificial apposition compound eye, a rotating artificial apposition compound-eye column for panoramic vision, an artificial apposition compound eye on a curved basis, and an ultra-short, large-object-size microscope. All of these systems have in common a total track length of a few mm or less, while at the same time offering an optical performance comparable to that of conventional counterparts, e.g. a resolution of 50 LP/mm over a field of 4.5 mm for the large-object-size microscope.