An artificial compound-eye imaging system has been developed, consisting of a planar array of microlenses
positioned on a spacing structure and coupled to a commercial CMOS optoelectronic detector array of different pitch,
providing different viewing directions for the individual optical channels. Each microlens corresponds to one channel,
which can be related to one or more pixels due to the different fill factors of the microlens array and the image sensor.
Alignment problems arising from the matching of the microlens focal spots to the pixels during assembly, as well as
possible residual rotation between the artificial compound-eye objective and the pixel matrix, are also considered. We
have written a program to automatically select the illuminated pixels of the sensor which correspond to each channel in
order to form the final image. This calibration method is based on intensity criteria as well as on the geometric layout
of the microlens array. An image capture program that uses only the channels selected by the calibration is also
presented. This program additionally implements image post-processing methods adapted to the microoptical
compound-eye sensor; these are applied in real time and increase the contrast of the captured images. One of
these methods is a Wiener filter computed from an approximation
of the multichannel imaging process of microoptical compound-eye sensors. Experimental results are presented, which
show a noticeable increase in the frequency response when the Wiener filter is used, partially compensating for the
characteristically low spatial resolution of artificial compound eyes.
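The paper's actual Wiener filter is built from a multichannel model of the sensor, which is not reproduced here. As a generic sketch of the frequency-domain Wiener restoration it refers to (an illustrative Gaussian OTF stands in for the system model, and `k` is a hand-picked noise-to-signal constant):

```python
import numpy as np

def wiener_restore(degraded, otf, k):
    """Classical frequency-domain Wiener restoration.

    degraded : 2-D image blurred by the system OTF (plus noise)
    otf      : optical transfer function, same shape as the image, DC at [0, 0]
    k        : scalar noise-to-signal power constant
    """
    W = np.conj(otf) / (np.abs(otf) ** 2 + k)   # Wiener filter
    return np.real(np.fft.ifft2(W * np.fft.fft2(degraded)))

# Demo: a Gaussian OTF stands in for the multichannel system model.
n = 64
img = np.zeros((n, n))
img[24:40, 24:40] = 1.0                          # bright square test object
fx = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fx)
otf = np.exp(-(FX ** 2 + FY ** 2) / (2 * 0.05 ** 2))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
restored = wiener_restore(blurred, otf, k=1e-3)
# restored recovers more high-frequency content than blurred
```

The filter boosts frequencies where the OTF is strong and suppresses those where it is weak relative to `k`, which is how the increase in frequency response reported above comes about.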
Natural compound eyes combine a small eye volume with a large field of view (FOV) at the cost of comparatively low spatial resolution. Based on these principles, an artificial apposition compound-eye imaging system has been developed. In this system the total FOV is given by the number of channels along one axis multiplied by the sampling angle between channels. In order to increase the image resolution for a fixed FOV, the sampling angle is made small. However, depending on the size of the acceptance angle, the FOVs of adjacent channels overlap, which causes a reduction of contrast in the overall image. In this work we study the feasibility of using digital post-processing methods on images obtained with a thin compound-eye camera to overcome this reduction in contrast. We chose the Wiener filter for the post-processing and carried out simulations and experimental measurements to verify its use.
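As a worked example of the FOV relation described above (the numbers are hypothetical, not the parameters of the actual camera):

```python
# Hypothetical parameters for illustration only.
n_channels = 128          # channels along one axis
sampling_angle = 0.7      # degrees between viewing directions of adjacent channels
acceptance_angle = 1.0    # degrees accepted by a single channel

total_fov = n_channels * sampling_angle           # ~89.6 degrees along that axis
fovs_overlap = acceptance_angle > sampling_angle  # overlap -> contrast reduction
```

With these numbers the acceptance angle exceeds the sampling angle, so adjacent channels see overlapping regions and the overall image loses contrast, which is the degradation the Wiener post-processing targets.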
The effect of 2D structured noise on the post-processing of images in hybrid optical-digital imaging systems is studied on the basis of the Wiener restoration filter. 2D structured noise is modeled as an additive noise that takes the same random value along a row or a column of the image. The restoration is carried out with the Wiener filter in an unsupervised way, using well-established procedures to determine the filter constant as a function of the noise power. We show that the classical Wiener filter is not satisfactory for systems affected by 2D noise, and we conclude that this is caused by an overdetermination of the 2D noise in the procedure used to find the filter constant. From this conclusion we propose a new filter, based on the separability of the optical transfer function of the optical system, that depends on two constants, one for each principal direction of the 2D noise. Furthermore, we define a procedure for the unsupervised determination of these constants and evaluate the quality of the restoration obtained by this procedure.
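The paper's two-constant filter is only summarized in the abstract. A minimal sketch of a separable restoration of this kind, under the assumption that the filter is the outer product of two 1-D Wiener terms with independent constants (this specific form is our illustration, not necessarily the filter proposed in the work):

```python
import numpy as np

def two_constant_wiener(image, hx, hy, kx, ky):
    """Separable Wiener-type restoration with one noise constant per direction.

    hx, hy : 1-D factors of a separable OTF, H(u, v) = hx(u) * hy(v)
    kx, ky : constants for the two principal directions of the 2D noise
    """
    wx = np.conj(hx) / (np.abs(hx) ** 2 + kx)
    wy = np.conj(hy) / (np.abs(hy) ** 2 + ky)
    W = np.outer(wx, wy)                 # separable 2-D restoration filter
    return np.real(np.fft.ifft2(W * np.fft.fft2(image)))

# Demo: restore an image blurred by a separable Gaussian OTF.
n = 64
img = np.zeros((n, n))
img[20:44, 20:44] = 1.0
f = np.fft.fftfreq(n)
hx = np.exp(-f ** 2 / (2 * 0.08 ** 2))
hy = np.exp(-f ** 2 / (2 * 0.08 ** 2))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.outer(hx, hy)))
restored = two_constant_wiener(blurred, hx, hy, kx=1e-3, ky=1e-3)
```

Because `kx` and `ky` act independently, row-aligned and column-aligned noise components can be suppressed by different amounts, which is the motivation for abandoning the single-constant classical filter.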
We present an educational resource based on an optical software package for undergraduate students. It consists of a web-based textbook with several applets that illustrate the theory and simplify the teaching tasks in the classroom. These programs are also used as a method for self-learning in an on-line environment. The applets are written in the Java language and use the Java Network Launching Protocol (JNLP) to avoid problems related to specific browsers or Java interpreter versions.
We analyze the performance of the best-known phase filter design for wavefront coding systems (the cubic phase plate) with respect to on- and off-axis imaging. To this end, the PSF will be calculated at different off-axis positions and the contribution of the coma and astigmatism aberration terms to its spatial variation will be evaluated. The study will also include the subsequent digital image processing procedure, so that a clear picture of the overall system performance can be drawn.
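A minimal sketch of how the PSF of a cubic phase plate can be computed from the generalized pupil function; the mask strength `alpha` and the sampling are illustrative choices, not values from the study:

```python
import numpy as np

def cubic_phase_psf(alpha, n=64, pad=4):
    """|FFT of pupil|^2 for a cubic phase mask exp(i*alpha*(x^3 + y^3)).

    alpha : strength of the cubic phase term; alpha = 0 gives the
            diffraction-limited clear-aperture PSF.
    """
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X ** 2 + Y ** 2 <= 1).astype(float)   # circular pupil
    pupil = aperture * np.exp(1j * alpha * (X ** 3 + Y ** 3))
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(pad * n, pad * n)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()                            # normalize total energy

psf_aberrated = cubic_phase_psf(alpha=20.0)
psf_limited = cubic_phase_psf(alpha=0.0)
# The cubic phase spreads the PSF, lowering its peak (Strehl ratio),
# in exchange for reduced sensitivity to defocus and related aberrations.
```

Off-axis behavior can be probed in the same framework by adding coma and astigmatism terms to the pupil phase before the transform.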
We analyze the behavior of complex information in the Fresnel domain, taking into account the limited capability of liquid crystal devices to display complex values when they are used as holographic displays. To do this analysis we study the reconstruction of Fresnel holograms at several distances using the different parts of the complex distribution. We also use the complex information adjusted with a method that combines two configurations of the devices in an adding architecture. The error analysis shows different behavior for the reconstructions obtained with the different methods. Simulated and experimental results are presented.
We present an educational resource based on a virtual optical laboratory for undergraduate students. It consists of a web-based textbook with several applets to illustrate the theory and simplify the teaching tasks in the classroom. These programs can also be used as a method for self-learning in an on-line environment. The applets are written in the Java language and use the Java Network Launching Protocol (JNLP) to avoid problems related to specific browsers or Java interpreter versions.
In this work we analyze the behavior of complex information in the Fresnel domain, taking into account the limited capability of current liquid crystal devices to display complex transmittance values when used as holographic displays. To do this analysis we compute the reconstruction of Fresnel holograms at several distances using the different parts of the complex distribution (real and imaginary parts, amplitude and phase), as well as using the full complex information adjusted with a method that combines two configurations of the devices in an adding architecture. The RMS error between the amplitude of these reconstructions and the original amplitude is used to evaluate the quality of the displayed information. The error analysis shows different behavior for the reconstructions using the different parts of the complex distribution and using the combined two-device method. Better reconstructions are obtained when using two devices whose configurations, when added, densely cover the complex plane. Simulated and experimental results are also presented.
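A minimal sketch of the reconstruction-and-error pipeline, assuming Fresnel propagation by the standard transfer-function method; the demo is a lossless numerical round trip, not one of the paper's display configurations:

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, dx):
    """Fresnel propagation by the transfer-function method (constant
    phase factor exp(ikz) omitted, as it does not affect amplitudes)."""
    n = u0.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

def rms_error(recon_amp, ref_amp):
    """RMS error between a reconstruction amplitude and the original."""
    return np.sqrt(np.mean((recon_amp - ref_amp) ** 2))

# Round trip: propagating to z and back to -z must recover the field.
n = 128
amp = np.zeros((n, n))
amp[48:80, 48:80] = 1.0                       # original amplitude object
u = fresnel_propagate(amp.astype(complex), 633e-9, 0.1, 10e-6)
back = fresnel_propagate(u, 633e-9, -0.1, 10e-6)
err = rms_error(np.abs(back), amp)            # ~0 for the lossless round trip
```

In the paper's setting, `u` would be replaced by the hologram as actually displayed (real part only, phase only, or the two-device sum), and `err` quantifies the degradation each restriction introduces.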
Optical correlators process two-dimensional images that come from a three-dimensional world. Filters designed for object recognition in three-dimensional scenes must encode the information of all possible views. This implies a large number of filters, especially when the object is moving with respect to the observer. Although filters designed through the synthetic discriminant function formalism can encode information from several images, there is a practical limit imposed by the noise appearing at the correlation plane. Fast correlators are one way of solving this problem. In this work we propose a global process for detecting 3-D objects based on fast sequential correlations with filters derived from the different possible views of the target. The acquisition of these views is accomplished in a fast and simple way by means of a three-dimensional scanner based on stereovision techniques. The 3-D model of the object thus obtained is then used to compute synthetic plane views from any desired viewpoint. A compact correlator has been developed which uses fast CCD cameras for input and output, and ferroelectric SLMs (spatial light modulators) to display the scene and the sequence of filters. The process of digitizing the 3-D coordinates is described in detail, from the acquisition of the stereo pair of images, through the stereo-matching algorithm we use, to the final integration of all data sets into a common object-centered coordinate system. General engineering problems involved in the design and construction of the correlator are also analysed and discussed.
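The optical correlation step has a direct digital analogue. A minimal FFT cross-correlation sketch (a plain matched filter, not the synthetic discriminant function filters used in the work):

```python
import numpy as np

def correlate(scene, template):
    """Circular cross-correlation via FFT: the digital analogue of a
    classical matched-filter optical correlator."""
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)   # zero-pad template to scene size
    return np.real(np.fft.ifft2(S * np.conj(T)))

# Demo: the correlation peak locates the target view in the scene.
rng = np.random.default_rng(0)
template = rng.random((16, 16))                # stands in for a plane view
scene = np.zeros((64, 64))
scene[30:46, 10:26] = template                 # target placed at (30, 10)
c = correlate(scene, template)
peak = np.unravel_index(np.argmax(c), c.shape) # peak at the target position
```

In the sequential scheme described above, `template` would cycle through the synthetic plane views rendered from the 3-D model, with a detection declared when a correlation peak exceeds a threshold.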
This work presents a 3D scanner system based on stereovision techniques to generate plane views of an object from an arbitrary viewpoint. These views are used as the reference templates in an optical correlator system designed to recognize the object.