17 September 2019
Measurement principle and arrangement for the determination of spectral channel-specific angle dependencies for multispectral resolving filter-on-chip CMOS cameras
Proceedings Volume 11144, Photonics and Education in Measurement Science 2019; 111440S (2019)
Event: Joint TC1 - TC2 International Symposium on Photonics and Education in Measurement Science 2019, 2019, Jena, Germany
Filter-on-chip CMOS sensor equipped cameras are a convenient, reliable and affordable approach for the parallel acquisition of spatial and spectral information. The combination of pixel-arranged spectral filter matrices on CMOS sensors increases their integration density and system complexity severalfold compared to standard RGB cameras. Due to their system design, these cameras exhibit increased spectral crosstalk and specific dependencies on the angle of incident radiation. The paper will show how to develop and set up a measurement arrangement for the characterization of the channel-specific spectral sensitivities under different angles of incidence. These characterizations are necessary to develop a more robust model for camera pixel-value correction, which ensures the comparability and reproducibility of the measured values. Therefore, a measurement setup to investigate the influence of the angle of incident radiation on filter-on-chip CMOS sensors was developed. After initial investigations, in which the camera was simply rotated and in which a lens with varied f-number was used, confirmed that the angle influence results in a measurable difference in the sensor response, a new measurement arrangement was developed to investigate this behavior more precisely. The developed measurement arrangement allows multispectral resolving image sensors to be irradiated with collimated light at reproducible angles of incidence and with adjustable wavelengths. By comparing the measured values with irradiances measured using a calibrated photodiode in the same setup and with the same parameters, it is possible to evaluate the angle dependence on the basis of quantum efficiency curves according to the EMVA 1288 standard.
The investigations carried out, the developed principles and the realized semi-automatic measurement arrangement will be shown and explained to characterize the capabilities of multispectral resolving filter-on-chip CMOS sensor equipped cameras for applications in industry and biomedicine.



The spectral resolution of the examined multispectral resolving filter-on-chip cameras is realized by Fabry-Pérot filters applied to a CMOS sensor. In the cameras examined here, 16 different filters are applied to the pixels of the image sensor in a recurring 4x4 mosaic (Figure 1, left). A cross-section through four filters illustrates their structure (Figure 1, right). It can be seen that the different filters have different thicknesses, the so-called cavity heights. This is due to the way the filters work, which is explained below. In this context, the possible causes and consequences of the angle dependence of multispectral resolving filter-on-chip CMOS sensors, which can be traced back to the system-related properties of Fabry-Pérot filters, are discussed. Fabry-Pérot filters are based on interference phenomena in thin layers and thus on the wave character of electromagnetic radiation [1]. They consist of two (partially) reflecting layers which are aligned parallel to each other with high precision at a distance L defined by a dielectric, transparent layer. Figure 2 shows the functional principle for a single beam incident at a certain angle θ. The beam is reflected several times between the layers, and the resulting superpositions lead to interference phenomena.

Figure 1.

Arrangement of a filter matrix over the sensor surface (left) and cross-section of Fabry-Pérot filters on the CMOS sensor (right)


Figure 2.

Functioning of Fabry-Pérot interferometers according to [2]; L - distance between the (partially) reflecting layers; θ - angle of incidence of radiation


Depending on the refractive index and the thickness of the material between the mirrors, as well as the angle of incidence of the radiation, constructive interference results for certain wavelengths (1) [1], [3].

k · λ = 2 · n · L · cos(θ)   (1)

Where k stands for the harmonic order of the respective maximum, λ for the wavelength that is amplified or transmitted, n for the refractive index of the material between the reflecting layers, and θ for the angle of incidence.

For the spectral filter application, this means that wavelengths at which constructive interference occurs are transmitted by the filter as shown in Figure 2, whereas all other wavelengths are blocked [4]. If the radiation falls perpendicularly (θ = 0°) on the filter, equation (1) can be rearranged to define a peak wavelength λ0 at perpendicular incidence (2).

λ0 = 2 · n · L / k   (2)
If refractive index n and distance L remain constant, λ0 can be substituted back into equation (1), so that for the respective maxima of order k a shift of the peak wavelength results at non-perpendicular incidence (3).

λ(θ) = λ0 · cos(θ)   (3)

The transmitted wavelength is thus shifted towards shorter wavelengths as the angle of incidence increases. With multispectral resolving filter-on-chip cameras, this effect causes the transmitted wavelengths of the filters to differ depending on the optics or f-number used. In addition, a widening of the transmitted bands is to be expected at larger aperture diameters (smaller f-numbers) and thus larger angles of incidence [5]. The increase of the Full Width at Half Maximum (FWHM), as a descriptive variable for the transmitted bandwidth, occurs because the radiation from imaging optics is not collimated at a defined angle to the sensor surface, but forms a cone around a so-called Chief Ray Angle (CRA) (Figure 3) [5]. It can also be seen that the angle of incidence changes across the sensor surface and rises towards the edge. However, reducing the aperture diameter, which results in narrower cones around the CRA, means that less light hits the sensor, which must be compensated by longer exposure times.

This leads to increased dark noise and to motion blur of moving objects. The manufacturer of the multispectral resolving filter-on-chip cameras specifies f/2.8 as the optimum compromise [5].
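The cosine relation of equation (3) can be illustrated with a short numerical sketch; the 550 nm normal-incidence peak used below is an illustrative value, not a measured one:

```python
import math

def shifted_peak(lambda0_nm: float, theta_deg: float) -> float:
    """Peak wavelength at oblique incidence per equation (3):
    lambda(theta) = lambda0 * cos(theta)."""
    return lambda0_nm * math.cos(math.radians(theta_deg))

# Illustrative filter with a 550 nm peak at perpendicular incidence
for theta in (0, 5, 10, 15):
    print(f"{theta:2d} deg -> {shifted_peak(550.0, theta):6.2f} nm")
```

The shift grows roughly quadratically for small angles, which matches the approximately parabolic course reported later in the results.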

Figure 3.

Chief Ray Angle (CRA) and cone of incidence angle as a function of pixel position (D - diameter of lens; f - focal length) [3]


Figure 4.

Possible masking of parts of filters with low cavities by filters with higher cavities at oblique incidence of radiation


In addition to the change in the peak wavelength, a decrease in the transmission of the filters can be expected for radiation arriving at a larger angle, because at non-perpendicular incidence the radiation is displaced laterally towards the edge of the filter [1]. With the Fabry-Pérot filters arranged on the CMOS sensor, this means that the radiation may not exit at the edge of the filter, but in an adjacent filter element of the monolithic filter matrix (see Figure 1, right). The result is optical crosstalk to the neighboring pixels, which also increases with increasing angle of incidence. A third angle-dependent disturbance variable can result from the different cavity heights of the filters: if higher cavities are next to lower ones, a part of the lower cavities may be obscured by the higher ones (Figure 4).

Such an influence would manifest itself in a dependence on the direction from which the radiation comes. For this reason, irradiating the sensors at known angles of incidence from different directions is also useful to investigate this direction dependency.



This chapter presents the results of research on previous investigations into the angle dependence of cameras and image sensors. The optical principles used are emphasized in order to develop our own experimental setup on this basis.


State of the Art

In [6] and [7], respectively, investigations on the quantum efficiency and optical efficiency of image sensors are presented, in which, among other things, the influence of the angle of incidence on the sensors is considered. In the first variant, the image sensors are mounted on a turntable as shown in Figure 5 (left) and irradiated with collimated radiation. In the second variant, a uniformly illuminated surface was imaged onto the sensor via an optical system (Figure 5, right). The individual pixels of the sensor surface were thus each irradiated by a radiation cone. In this case, the angle (or angle range) incident on the sensor increases with increasing distance from the center of the sensor. A CRA of 0° is thus present in the center and the largest possible CRA at the edge of the sensor.

Figure 5.

Experimental set-ups for investigating the angle dependence of image sensors [7]


In [3] and [5], the angle dependence of the multispectral resolving filter-on-chip camera is pointed out and documented by diagrams showing the filter transmission curves at different f-numbers. However, the test setups used to determine these curves are not discussed in detail. Based on these angle dependencies, only a recommendation of f/2.8 is given.


Development Tasks

As described in the previous chapters, an angular dependence of multispectral resolving filter-on-chip cameras with monolithically applied Fabry-Pérot spectral filters can be expected for various reasons. The aim of this work is therefore to develop a measuring arrangement that allows a comparison of different image sensors with respect to their angle dependence. The measurement setup to be developed should make it possible to irradiate the sensor at defined angles of incidence. By varying the direction from which the radiation hits the sensor surface, it should be possible to evaluate the directional dependence of the angle influence. Since the influence on the quantum efficiency of the individual channels as a function of the wavelength of the incident radiation is of interest for the spectrally resolving sensor systems under consideration, it is necessary to tune the radiation in the smallest possible steps over the entire wavelength range of the sensor. The plausibility of the measurements should be demonstrated by the evaluation and discussion of a performed measurement and the values determined from it, in order to uncover possible inaccuracies and potentials for improvement.

To enable comparability of the results with other measurements, the calculation of values and the measurement methods shall be based on the EMVA 1288 standard [8] as far as possible. In the first step of the design process, the target values are defined which are to be at least fulfilled by the measurement setup. The range in which the angle of incidence on the image sensor varies when a lens is used is determined by the design conditions of the respective camera and lenses. Figure 6 shows the beam path between lens and image sensor using the marginal rays and the chief ray angle. As can be seen, the incident radiation is not collimated; rather, a cone-shaped beam always falls on a certain point of the sensor, and this angle range changes over the sensor surface. The further away a pixel is from the optical axis of the lens, the greater the Chief Ray Angle (CRA). Since the sensor center is normally located on the optical axis of the lens, the CRA at this point is 0°. Using the diameter of the lens, the distance from the lens to the sensor and the sensor diagonal, the maximum CRA and the maximum angle of incidence can be calculated with the aperture fully open.

Figure 6.

Incidence angle on image sensors; d0 - diameter of optics; CRA - chief ray angle; ds - sensor diagonal; la - flange focal distance / distance between sensor and optics; θmax - maximum angle of incidence


The diameter of the lens d0 and the distance are given by the C-mount thread of the cameras to be examined. The flange focal distance la, i.e. the distance from the sensor surface to the end of the thread, is 17.526 mm, and the thread diameter d0 is 25.4 mm [9]. If an iris diaphragm is used, the effective diameter d0 of the optics is reduced by closing the diaphragm to reduce the amount of incident radiation. This reduces the incident radiation cone and thus also the range of angles of incidence.

These standardized values are used here as a simplification to calculate the maximum angles of incidence. The distance to the optics is in fact somewhat smaller due to the screw-in depth of the thread (which increases the angle), and the effective lens diameter is smaller than that of the C-mount thread due to the rim of the lens mount (which in turn reduces the angle of incidence). With the sensor diagonal ds, the maximum CRA and the maximum angle of incidence can be calculated (4), (5).

CRAmax = arctan( (ds/2) / la ) ≈ 20.0°   (4)

θmax = arctan( ((d0 + ds)/2) / la ) ≈ 47.4°   (5)

Note: The investigated camera/sensor is a snapshot-mosaic 4x4 camera based on the CMOSIS CMV2000; pixel size = 5.5 × 5.5 μm²; sensor width = 2048 px; sensor height = 1088 px → sensor diagonal ds = 12.74 mm [10]
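Assuming the simple arctangent geometry of Figure 6, the maximum CRA and maximum angle of incidence follow directly from the standardized C-mount values given above; a minimal sketch:

```python
import math

l_a = 17.526  # flange focal distance [mm]
d_0 = 25.4    # C-mount thread diameter [mm]
d_s = 12.74   # sensor diagonal [mm]

# Chief ray angle at the sensor corner (pixel furthest from the optical axis)
cra_max = math.degrees(math.atan((d_s / 2) / l_a))
# Steepest marginal ray, from the lens edge to the opposite sensor corner
theta_max = math.degrees(math.atan((d_0 + d_s) / (2 * l_a)))

print(f"CRA_max   ~ {cra_max:.1f} deg")
print(f"theta_max ~ {theta_max:.1f} deg")
```

Both values are upper bounds: the screw-in depth and the lens mount rim shift the real angles slightly, as noted above.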

However, the maximum angle of incidence that can be set with the measuring setup is limited by a further requirement. The camera should be mounted using a sleeve that is attached to the camera via its C-mount thread instead of a lens. By simply inserting the sleeve into a receptacle, it should be as easy as possible to exchange the camera, for example, for a photodiode fitted with such a sleeve, which is necessary for determining the quantum efficiency, among other things. This sleeve limits the maximum angle of incidence at which the entire sensor surface is still illuminated (6).

θmax = arctan( (di,h − bs) / (2 · lh) )   (6)

Note: di,h - inner diameter of the sleeve; bs - sensor width; lh - length of the sleeve

Figure 7 shows that at larger angles only part of the sensor is illuminated. If, for example, only one half of the sensor is to be illuminated (Figure 7, right), the following maximum angle of incidence can be achieved (7).

θmax = arctan( di,h / (2 · lh) )   (7)

Figure 7.

Limitation of the maximum possible angle of incidence by C-Mount sleeve; left: Radiation of the entire sensor surface; right: Radiation of the half sensor surface; di,h - Inside diameter of the sleeve; bs - Sensor width; Ih - Length of the sleeve


The step size for the angle change is set to at most 1° in order to represent the course of the angle influence as continuously as possible. The minimum step size for changing the wavelength is determined by the radiation system used; with the available system it is 1 nm. According to the calibration protocol of the camera manufacturer, the smallest FWHM of a filter channel in the examined camera is approx. 6.5 nm [11].

Thus, the step size of the radiation is sufficient to resolve the progression of values such as the quantum efficiency in sufficient detail. The EMVA 1288 standard only specifies a step size smaller than or equal to 2 FWHM, which is clearly satisfied [8]. According to the EMVA 1288 standard, the wavelength range should extend from 350 nm to 1100 nm. However, since the camera used is equipped with a bandpass filter for the 450 nm to 650 nm range, only this range needs to be considered.



Snapshot-mosaic filter-on-chip CMOS sensors show a dependency on the angle of the incident radiation. This must be characterized, and therefore a measurement arrangement and value interpretation were developed. To circumvent the influence of the accuracy of moving components, a design was developed that is insensitive to positioning errors. This is achieved by using an arrangement of pinholes, only one of which is irradiated at a time. Behind this pinhole array, the beam is deflected by a lens depending on the position of the irradiated aperture. The positioning of the radiation source must therefore only be precise enough to always irradiate only the desired aperture; if this is ensured, no further influence of the positioning accuracy of the radiation source is to be expected. If the lens is mounted at its focal length f from the aperture array as shown in Figure 8, it acts as a collimator for the radiation passing through the apertures. If the radiation source is not on the optical axis of the lens, the collimated radiation leaves the lens at an angle that depends on the distance between the optical axis and the radiation source. The angle of incidence θ can be calculated (8).

θ = arctan( a / f )   (8)

Where f is the focal length of the lens and a is the distance from the optical axis.

However, the implementation of this structure shows that it only works if the light leaves the respective apertures of the array diffusely. If, for example, an aperture array made of simple sheet metal (0.1 mm) is used, the light falling on the sensor is not homogeneous. To achieve homogeneous illumination of the image sensor, a diffuser is necessary. This function is performed by the matt side of the glass pane on which the aperture array is applied as a coating. The physical setup realizing this principle is shown in Figure 9. In addition to the components of the principle shown in Figure 8, a focusing lens (Figure 9, No. 3) is inserted between the radiation source (Figure 9, No. 1) and the aperture array (Figure 9, No. 4).

Figure 8.

Principle of measurement arrangement for Pinhole-Method


Figure 9.

Components of measurement arrangement for Pinhole-Method


This reduces the size of the radiation spot hitting the pinhole array and increases the intensity of the radiation falling on the respective aperture. With 4 mm between the individual apertures of the array and a diameter of 7 mm at the output of the optical fiber, only a very small distance between the aperture array and the fiber end would be permissible without this focusing, in order not to illuminate adjacent pinholes through the radiation cone. The use of this focusing lens thus also facilitates the positioning of the radiation source. Radiation source and aperture array are each located at 2f (twice the focal length) from the focusing lens to achieve a 1:1 imaging. Due to the focusing lens, the movement of the radiation spot on the aperture array is always opposite to the movement of the linear table. If the distances between lens and aperture array and between lens and radiation source are not exactly 2f of the focusing lens, the step size for moving between the apertures also deviates from the distance between them.

The yz-positioning is partially automated in the realized setup. In detail, this means that the y-direction can be controlled by a motorized linear axis via the Matlab measuring program, whereas the movement in the z-direction must be done manually. When carrying out the tests, individual test series with a fixed z-position and variable y-position are performed. The number of adjustable angles per test series and the number of test series per focal length of the angle-determining lens (Figure 9, No. 5) is determined by the number of apertures in the aperture array. The used array consists of 5x5 pinholes with a diameter of 0.2 mm and a center distance of 4 mm to each other in the y- and z-direction. With this aperture array, a total of five different distances to the center can be set, as Figure 10 illustrates. All other pinholes share one of the shown distances of 4 mm, 5.66 mm, 8 mm, 8.94 mm and 11.31 mm. From these distances from the center, which should lie in the optical axis of the angle-determining lens, the angle of incidence θ results according to equation (8).

Figure 10.

Pinhole array; Distances a of the different pinholes to the center of the array whose center should be in the optical axis
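Given the pinhole offsets above and equation (8), the adjustable angles of incidence can be sketched for the two focal lengths of the achromatic lenses used in the setup (f = 60 mm and f = 90 mm):

```python
import math

# Distances a of the pinholes from the array center (Figure 10)
offsets_mm = [4.0, 5.66, 8.0, 8.94, 11.31]

for f_mm in (60.0, 90.0):  # focal lengths of the angle-determining achromatic lenses
    # eq. (8): theta = arctan(a / f)
    angles = [math.degrees(math.atan(a / f_mm)) for a in offsets_mm]
    print(f"f = {f_mm:.0f} mm:", ", ".join(f"{t:.2f} deg" for t in angles))
```

For f = 60 mm the largest offset of 11.31 mm yields about 10.7°, consistent with the maximum angle reported in the results.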


A prerequisite for equal angles at the different apertures is that the middle pinhole lies exactly in the optical axis of the lens and that the array is aligned straight. Whether the array is straight can be checked by simply moving one of the linear axes. If, for example, the radiation spot "jumps" upwards or downwards when moving the y-axis, the array is rotated around the x-axis and must be realigned. A coarse alignment of the center of the aperture array in the optical axis can be done after or during the alignment of the angle-determining lens behind the array in the light direction. To align this lens, i.e. to adjust its position in the x-direction, it is set up in such a way that the aperture array lies exactly in its focal point. To do this, it is first inserted approximately into the assembly based on its focal length. Then a measuring standard is attached to the rider with the camera mount (Figure 9, No. 6).

When the radiation is switched on, a circle is displayed. If its size changes when the rider is moved, the lens is not yet in the correct position and must be moved further. If the circle becomes larger when the rider moves towards the lens, or if there is a position at which the diameter of the circle is minimal (theoretically equal to the aperture diameter), the lens is too far away from the aperture array. If the circle becomes larger as the distance from the lens increases, the lens is too close to the aperture array and must be moved a little further away.

The alignment of the middle aperture of the array can also be checked with this procedure. As before, the middle aperture must be illuminated and the rider with the camera holder moved in the x-direction. If the position of the imaged circle changes during the movement, the array must be moved in the direction in which the circle moved when approaching the lens. Since the refraction of conventional lenses depends on the wavelength, the actual angle of incidence would change with different wavelengths (longitudinal chromatic aberration). For radiation of shorter wavelength, a larger angle of incidence would result with the same arrangement than for longer wavelengths. Since the angle influence is to be determined over the entire spectral range of the respective sensor, this circumstance would introduce interfering influences on the measurement result. The longitudinal chromatic aberration can be largely avoided by using an achromatic lens, since these lenses have a focal length that is significantly less wavelength-dependent. Achromatic lenses, or rather lens systems, usually consist of two cemented lenses with different Abbe numbers, i.e. different color dispersion. By cementing a diverging lens with larger but opposite dispersion onto the converging lens, the longitudinal chromatic aberration can be corrected as far as possible, so that the angle of incidence on the image sensor remains constant over all wavelengths [12]. Achromatic lenses with focal lengths f = 60 mm and f = 90 mm are used, resulting in the angles of incidence listed in Table 1.

Table 1.

Distances of the pinholes shown in Figure 10 from the y- and z-axis, focal lengths and the resulting angles of incidence

dy [mm] | dz [mm] | a [mm] | f [mm] | θ [°]

Directly behind the achromatic lens, the rider with the camera holder is positioned, into which a photodiode can be inserted in place of the camera. The photodiode used is a Texas Instruments OPT101, which can be seen in Figure 9 (No. 6). According to the data sheet [13], the influence of the angle of incidence is negligible in the range considered here. This is different with regard to the dependence on the wavelength of the incident radiation. Calibration data and a correction function are available for the diode, which can be integrated into the Matlab measurement sequence. In addition to the wavelength of the incident radiation, which is set on the monochromator, the correction also considers the dark signal and the offset of the sensor. This offset is averaged over 100 measured values before the measurement, with the radiation source switched off.

After determining the offset of the photodiode, the respective measurement sequence (constant focal length and z-position; variable wavelength and y-position) is run through with the photodiode to determine the irradiance E(λ, θ) at the point where the image sensor will subsequently be located. To do this, 100 individual values are recorded with the diode, a correction is made for the wavelength dependence of the diode using calibration data, and the irradiance is determined from this. After all measurements have been carried out with the photodiode, the camera to be examined is inserted in its place. With this camera, images are first taken in darkness to determine the dark noise or the offset μy,dark. This is primarily dependent on the selected exposure time texp, so that for measurements with different exposure times as many offsets μy,dark must be recorded as there are exposure times. The image sensor is then irradiated with the same wavelengths and angles as the photodiode so that a direct comparison can be made between E and μy, e.g. on the basis of the calculated quantum efficiency η(λ, θ). The determination and calculation of values such as dark noise μy,dark, mean gray value μy and quantum efficiency is based on the EMVA 1288 standard [8], which is also used for the designations and notations of the measured values. The mean gray values μy and mean dark gray values μy,dark are determined from an image pair, i.e. two successively acquired images A and B, according to equation (9).

μy = 1/(2·M·N) · Σm Σn ( yA[m][n] + yB[m][n] )   (9)

M and N describe the image size in pixels within the considered area (ROI - region of interest), and m and n stand for the row and column in which a pixel is located. The calculation of the quantum efficiency according to the EMVA 1288 standard includes, as can be seen in equation (10), the irradiance E, the dark gray value μy,dark and the mean gray value μy as well as further variables which must be determined in advance.

η(λ, θ) = (μy − μy,dark) / (K · μp)   (10)

K stands for the overall system gain, and μp for the mean number of photons per pixel, which includes the irradiance E, the area of a pixel A, the exposure time texp, the wavelength λ as well as the Planck constant h and the speed of light c (11).

μp = (E · A · texp · λ) / (h · c)   (11)

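Equations (10) and (11) can be combined into a small sketch of the quantum-efficiency calculation; all numeric inputs below are illustrative placeholders, not measured values:

```python
H = 6.62607015e-34  # Planck constant [J s]
C = 2.99792458e8    # speed of light [m/s]

def mean_photons(E, pixel_area, t_exp, wavelength):
    """mu_p per equation (11): mean photon count per pixel and exposure.
    E in W/m^2, pixel_area in m^2, t_exp in s, wavelength in m."""
    return E * pixel_area * t_exp * wavelength / (H * C)

def quantum_efficiency(mu_y, mu_y_dark, K, mu_p):
    """eta per equation (10)."""
    return (mu_y - mu_y_dark) / (K * mu_p)

# Illustrative values: 5.5 um pixel, 550 nm, 10 ms exposure, 1 mW/m^2
mu_p = mean_photons(1e-3, (5.5e-6) ** 2, 10e-3, 550e-9)
eta = quantum_efficiency(mu_y=180.0, mu_y_dark=12.0, K=0.5, mu_p=mu_p)
print(f"mu_p = {mu_p:.0f} photons, eta = {eta:.2f}")
```
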
The system gain K is a constant value and is determined using the photon transfer curve (PTC). The PTC represents the variance σy² as a function of the mean gray value (μy − μy,dark) (12).

σy² − σy,dark² = K · (μy − μy,dark)   (12)

To determine the photon transfer curve, the sensor is irradiated at the wavelength of the filter band to be investigated, and an image pair is acquired for each exposure time, which is increased in uniform steps. The radiation is directed through the middle pinhole, i.e. it falls perpendicularly on the sensor. As the exposure time texp increases, σy and μy rise until the sensor reaches the saturation range μy,sat, above which the standard deviation decreases again. The factor K is then calculated from the slope of the linear part of the curve up to 0.7·μy,sat.
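The system-gain estimation from the photon transfer curve, fitting the linear part below 0.7·μy,sat, can be sketched as follows; synthetic data with a known gain are used here for illustration:

```python
import numpy as np

def system_gain(mu, var, mu_sat):
    """Estimate K per equation (12) as the slope of the variance vs.
    mean-gray-value curve, using only points up to 0.7 * mu_sat."""
    mu = np.asarray(mu, dtype=float)
    var = np.asarray(var, dtype=float)
    mask = mu <= 0.7 * mu_sat
    # least-squares line through the origin: K = sum(mu*var) / sum(mu^2)
    return float(np.sum(mu[mask] * var[mask]) / np.sum(mu[mask] ** 2))

# Synthetic photon transfer curve with K = 0.5
mu_vals = np.linspace(10.0, 1000.0, 20)
var_vals = 0.5 * mu_vals
print(system_gain(mu_vals, var_vals, mu_sat=1000.0))
```

In practice the dark values σy,dark² and μy,dark are subtracted from the measured variance and mean before the fit, as equation (12) indicates.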


Thus, after the previously described determination of the offset of the photodiode in darkness, all wavelengths of the spectrum that can be imaged by the camera to be examined are passed through one after the other in a given step size and the irradiance is measured in each case. The values determined for E are collected together with the respective parameters (angle, wavelength). This procedure is repeated for all angles of the measurement series. The photodiode is then replaced by the camera and the sequence is repeated. After each image pair has been captured, the images are also split directly into their individual filter bands and the two resulting image stacks are saved. This allows further evaluations to be carried out later, such as crosstalk analysis.
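The band splitting mentioned above can be sketched for the 4x4 mosaic of the CMV2000-based camera (2048 x 1088 px) with plain array slicing:

```python
import numpy as np

def split_mosaic(raw, pattern=4):
    """Split a raw snapshot-mosaic frame into its spectral bands.
    A 4x4 mosaic yields 16 sub-images at 1/4 resolution per axis."""
    h, w = raw.shape
    raw = raw[:h - h % pattern, :w - w % pattern]  # crop to full mosaic periods
    bands = [raw[i::pattern, j::pattern]
             for i in range(pattern) for j in range(pattern)]
    return np.stack(bands)

stack = split_mosaic(np.zeros((1088, 2048), dtype=np.uint16))
print(stack.shape)  # (16, 272, 512)
```

Each of the 16 sub-images then corresponds to one filter band of the mosaic and can be evaluated separately, e.g. for the crosstalk analysis.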

The mean gray values μy and the quantum efficiency η are calculated directly after saving the image stacks. These values are also added to the table with the irradiances and parameters before the next wavelength or angle is set. After all wavelengths and angles have been passed through, the table is saved as a .mat file and can be exported to an Excel file for evaluation. Thus, η(λ, θ)-diagrams can be created very quickly after the measurement, which allow an evaluation of the angle dependency. Since the measurements with photodiode and camera are not carried out simultaneously in this procedure, temporal fluctuations of the radiation may influence the measurement results. With parallel irradiation of diode and image sensor this deviation would not exist, which would be possible by using a 50T/50R beam splitter. However, since the transmission-reflection ratio of beam splitters is angle-dependent and the incident light is only exactly halved at perpendicular incidence, its use is not possible or useful here. A warm-up phase for camera and radiation source prior to the start of measurements is, however, intended to minimize at least the temperature-related temporal fluctuations. A temporal influence would manifest itself as a random error of the measurement results, which can be determined by repeated measurements.



To be able to recognize and evaluate the differences and influences, different display modes and evaluation methods are necessary. For example, the characteristic curves of a single filter band can be directly compared with each other at different angles of incidence (Figure 11).

Figure 11.

Measurement results single channel for Pinhole-Method


In Figure 11, the recorded filter characteristics of one filter band at different angles of incidence are compared. All values were recorded with the achromatic lens with a focal length of f = 60 mm. It can be clearly seen that as the angle of incidence increases, the maximum shifts increasingly towards shorter wavelengths. This observation can also be made with all other recorded data. The reason lies in the general functionality of the Fabry-Pérot filters applied to the sensor to realize the different spectral bands: the harmonics of the filters, i.e. the transmission maxima, shift towards shorter wavelengths at oblique incidence.

In addition to the magnitude and position (or wavelength) of a quantum efficiency maximum, the FWHM around such a maximum also represents an important descriptive quantity: it provides information about the bandwidth of the filter transmission peak. To simplify matters, only these values are given to describe a quantum efficiency peak instead of an entire filter curve. To describe the shift of the peak due to oblique incidence, the magnitude of the quantum efficiency can also be omitted. This yields representations such as Figure 12, which show the course of the harmonics with changing angle of incidence. No difference between the change of the FWHM limits and that of the transmission maximum can be recognized from these curves. When plotting the peak wavelength from the manufacturer's calibration protocol (Figure 12, blue), the peak at small and perpendicular angles of incidence lies, contrary to expectations, above the specified wavelength.

Figure 12.

Measurement results single channel for Pinhole-Method; wavelength shift of peak wavelength


Due to the collimated radiation, the peak occurs at higher wavelengths than expected and with increasing angles the peak wavelength decreases in approximately parabolic form (Figure 12, orange) until it falls below the manufacturer’s specification.
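The peak wavelength and FWHM values discussed above can be extracted from a sampled quantum-efficiency curve, for example by linear interpolation at the half-maximum crossings; a synthetic Gaussian band is used here for illustration:

```python
import numpy as np

def peak_and_fwhm(wavelengths, qe):
    """Peak wavelength and FWHM of a sampled filter/QE curve,
    using linear interpolation at the half-maximum crossings
    (assumes a single dominant peak)."""
    wl = np.asarray(wavelengths, dtype=float)
    q = np.asarray(qe, dtype=float)
    i_max = int(np.argmax(q))
    half = q[i_max] / 2.0
    idx = np.where(q >= half)[0]
    lo, hi = idx[0], idx[-1]

    def cross(i0, i1):
        # linear interpolation of the half-maximum crossing between samples
        return wl[i0] + (half - q[i0]) * (wl[i1] - wl[i0]) / (q[i1] - q[i0])

    left = wl[0] if lo == 0 else cross(lo - 1, lo)
    right = wl[-1] if hi == len(q) - 1 else cross(hi, hi + 1)
    return wl[i_max], right - left

# Synthetic Gaussian band: peak 550 nm, FWHM ~ 6.5 nm, sampled at 1 nm
wl = np.arange(450.0, 650.0, 1.0)
sigma = 6.5 / (2 * np.sqrt(2 * np.log(2)))
q = 0.3 * np.exp(-0.5 * ((wl - 550.0) / sigma) ** 2)
peak, fwhm = peak_and_fwhm(wl, q)
print(peak, round(fwhm, 2))
```

With the 1 nm wavelength step of the setup, the interpolation recovers a 6.5 nm FWHM to within a few hundredths of a nanometer.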



The aim of this work was to develop a measurement setup to investigate the influence of the angle of incident radiation on image sensors, more precisely on multispectral resolving filter-on-chip CMOS sensors. After initial investigations with a setup in which the camera was simply rotated confirmed that the angle influence is of an order of magnitude that results in a measurable difference in sensor response, a new set-up was developed to investigate this behavior more precisely.

This allows image sensors to be illuminated with collimated radiation at reproducible angles of incidence and with adjustable wavelengths. By comparing the measured values with irradiances measured using a photodiode in the same setup and with the same parameters, it is possible to evaluate the angle dependence on the basis of quantum efficiency according to the EMVA 1288 standard. The measurements performed show an average shift of the peak wavelength of the sensors by approx. 5 nm in the negative direction, i.e. towards blue, for a change in the angle of incidence from 0° to 10.7°. A correction formula was also derived from the determined relationship between the angle of incidence and the shift of the peak wavelength. However, this only applies to collimated radiation.
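The derived correction formula itself is not reproduced in this excerpt; as a hedged sketch, the widely used effective-index tilt formula for interference filters, with n_eff as a hypothetical fit parameter, reproduces a shift of roughly 5 nm at 10.7° for n_eff ≈ 1.4:

```python
import math

def peak_at_angle(lambda0_nm, theta_deg, n_eff):
    """Tilt formula for interference filters (n_eff is a fit parameter):
    lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)^2)."""
    s = math.sin(math.radians(theta_deg)) / n_eff
    return lambda0_nm * math.sqrt(1.0 - s * s)

def corrected_peak(lambda_meas_nm, theta_deg, n_eff):
    """Invert the tilt formula to recover the normal-incidence peak
    from a peak measured under collimated radiation at angle theta."""
    s = math.sin(math.radians(theta_deg)) / n_eff
    return lambda_meas_nm / math.sqrt(1.0 - s * s)

shift = 550.0 - peak_at_angle(550.0, 10.7, n_eff=1.4)
print(f"shift at 10.7 deg: {shift:.1f} nm")
```

Like the correction described in the text, this inversion is only valid for collimated radiation; for imaging optics the cone of angles around the CRA would have to be modeled as well.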

In order to use the correction for image acquisition, the chief ray angle (CRA) and the cone of incidence angles at each point of the image sensor must therefore be known, and the correction would have to be adjusted separately for each point. These values would have to be determined for each lens used and after every change of aperture or focus, which would require considerable effort and was beyond the scope of this work. Modeling and correction of the distribution of incidence angles for an ideal finite aperture is described in [14].
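For orientation, the CRA and the cone half-angle can be estimated with a simple thin-lens model; the exit-pupil distance, image height, and f-number below are illustrative assumptions, and real lenses, especially vignetted ones as treated in [15], deviate from this ideal geometry:

```python
import math

def chief_ray_angle(image_height, exit_pupil_distance):
    """CRA at a point `image_height` off the optical axis (degrees)."""
    return math.degrees(math.atan(image_height / exit_pupil_distance))

def cone_half_angle(f_number):
    """Half-angle of the incidence cone of an ideal circular aperture
    at infinity focus (degrees)."""
    return math.degrees(math.atan(1.0 / (2.0 * f_number)))

# illustrative: sensor corner 6.2 mm off axis, exit pupil 40 mm from sensor
cra = chief_ray_angle(6.2e-3, 40e-3)   # about 9 degrees at the corner
half = cone_half_angle(2.8)            # f/2.8: about 10 degrees
```

Already at f/2.8 the cone half-angle alone is about 10°, i.e. of the same order as the 10.7° tilt examined above, which illustrates why a per-pixel adjustment of the correction would be necessary.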

A generalized model, a practical method to estimate its parameters, and a correction of undesired shifts in measured spectra for the common case of a vignetted aperture are proposed in [15]. The principal cause of the angle dependence of the investigated camera system is the working principle of the Fabry-Pérot filters on filter-on-chip CMOS sensors. The angular dependence of the peak wavelength is inherent to these interference filters and is even exploited in other applications to selectively tune the maximum transmission of the filters. For other disturbance variables, such as the optical crosstalk caused by radiation passing into adjacent filters, which was described in the discussion of the theoretical principles, no correlation with the angle of incidence was discernible. The influence of the direction from which the radiation obliquely hits the sensor cannot yet be conclusively assessed from the present results; these results do suggest, however, that any directional dependence of the sensor systems is clearly smaller than the dependence on the angle of incidence. To obtain reliable data on the influence of the angle of incidence, a more precise adjustment of the actual angle of incidence, e.g. by a wavefront measurement, is necessary.



Macleod, H. A., “Thin-film optical filters,” Taylor & Francis, Boca Raton, London (2010).

Köhler, F., “Untersuchungen zu Fabry-Pérot Filterfeldern: Herstellung mittels Nanoimprinttechnologie, experimentelle Charakterisierung und Anwendungen,” Dissertation, Kassel (2010).

Agrawal, P., “Characterization of VNIR Hyperspectral Sensors with Monolithically Integrated Optical Filters,” Image Sensors and Imaging Systems, San Francisco (2016).

Charle, W., Technical report SSET / CMORES, Leuven (2015).

Catrysse, P. B., “QE reduction due to pixel vignetting in CMOS image sensors,” in Proc. SPIE 3965, Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications (2000).

Catrysse, P. B., “Optical efficiency of image sensor pixels,” J. Opt. Soc. Am. A 19, 1610–1620 (2002).

European Machine Vision Association, “EMVA 1288 Standard for Characterization of Image Sensors and Cameras” (2019), accessed July 2019.

STEMMER IMAGING AG, “Kameraanschluss (Mount)” (2019), accessed July 2019.

ams AG, “CMV2000 2MP global shutter CMOS image sensor for machine vision” (2019), accessed July 2019.

XIMEA GmbH, “Sensor Calibration File - CMV2K-SSM4x4-470_620-x.x.x.x,” Münster (2016).

Witt, V., “Wie funktionieren Achromat und Apochromat? Teil 1: Von der Einzellinse zum Achromaten,” Sterne und Weltraum, 72–75 (2005).

Texas Instruments, “Monolithic Photodiode and Single-Supply Transimpedance Amplifier datasheet (Rev. B),” Dallas (2015).

Goossens, T., “Finite aperture correction for spectral cameras with integrated thin-film Fabry–Perot filters,” Appl. Opt. 57, 7539–7549 (2018).

Goossens, T., “Vignetted-aperture correction for spectral cameras with integrated thin-film Fabry–Perot filters,” Appl. Opt. 58, 1789–1799 (2019).
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
P.-G. Dittrich, M. Bichra, C. Pfützenreuter, M. Rosenberger, and G. Notni, "Measurement principle and arrangement for the determination of spectral channel-specific angle dependencies for multispectral resolving filter-on-chip CMOS cameras," Proc. SPIE 11144, Photonics and Education in Measurement Science 2019, 111440S (17 September 2019).
