## 1. Introduction

Visual inspection of highly reflective surfaces commonly faces a serious limitation: useful information on geometric structure and textural defects is obscured by blur due to specular highlights. Strong highlights saturate the corresponding charge-coupled device (CCD) pixels and raise the gray values of neighboring pixels in the imaging sensor. This phenomenon, which reduces contrast and causes image blur, is known as a parasitic image. In the visual system, a parasitic image arises from direct light on a highly reflective surface (solid line with arrow), scattering glare in the lens elements (dash-dot line), stray light in the camera body (short-dash line), and reflection off the lens surface (long-dash line), as shown in Fig. 1. In our experiment, the dominant sources of the parasitic image are scattering glare and direct light. The sum of the two saturates the image point ${P}_{i}$ and increases the light intensity of its surrounding pixels. As shown in Fig. 2, scattering glare, which appears as a parasitic image, obscures the edge of the metallic slice in the presence of highlights. It is therefore necessary to remove the parasitic image created by specular highlights and capture realistic scenes.

Many specularity removal techniques have been proposed: color space analysis, neighborhood analysis, polarization, image sequences (IS), multiple-flash images, etc.^{1–8} However, when scattering glare extends into the neighboring region, these techniques are no longer valid. Many methods exist to remove various parasitic images. Jaehyun et al.^{9} propose a multiexposure image fusion algorithm without a ghost effect. Schechner et al.^{7} introduce an approach in which multiple light sources simultaneously illuminate the object from different directions, avoiding saturation of highlights and improving image quality. Agrawal et al.^{8} present a novel gradient projection scheme that removes reflections and highlights from flash images, using a flash and ambient image pair to produce better flash images. Bitlis et al.^{10} propose a shift-variant analytical parametric model to reduce stray light effects in a digital camera. Liebe et al.^{11} analyze sun-induced veiling glare. These methods can be broadly classified as illumination techniques, multiexposure imaging, high dynamic range (HDR) cameras, and software algorithms. Multiexposure imaging takes a long time to carry out photometric evaluation, spectral calibration, and image reconstruction.^{12} Illumination strategies, which are complex and varied, cannot always acquire complete information about the measured workpiece. Although an HDR camera can raise the saturation point by increasing the capacity of the sensor electron well, producing large sensors is excessively expensive and reduces sensor resolution. On average, only a small portion of a scene contains strong highlights and therefore needs high-capacity sensors. Software algorithms postprocess an image that already contains parasitic components; because the highlight due to specular reflection from a metallic slice is very strong, their performance is poor.

This article is inspired by previous work on computational cameras, as follows. Nayar et al.^{13} describe a programmable imaging system that uses a digital micromirror device (DMD) to alter geometric and radiometric imaging characteristics. Ri et al.^{14} propose phase-measuring profilometry using a DMD camera to extend the intensity range. Ankit et al.^{15} present an optical relay system for mechanical or electronic color spectrum control that places a DMD in the optical path to spatially modulate light. Adeyemi et al.^{16} demonstrate a system that uses precise DMD control of the projector to enhance the dynamic range.

In this article, we implement a programmable imaging system referred to as the DMD camera and present a method, based on this camera, for removing the parasitic image and eliminating strong reflections. With this system, we can decrease the intensity of a scene ray according to the needs of the application before it reaches the imaging optics. The article explains the spatial light modulation (SLM) strategy for inspection of a metallic slice in detail. Finally, we demonstrate the effectiveness of the method with the correction image and the estimated parasitic image.

## 2. Prototype System

Here, we describe a programmable imaging system with a micromirror array. The system is composed of a CCD, a DMD, an image processor, and two imaging lenses (Len1 and Len2), as shown in Fig. 3. To avoid unexpected stray light caused by devices ahead of the DMD, the optical system is open. The DMD is a two-dimensional (2-D) array optoelectronic element in which every pixel has two stable mirror states ($+12\text{-}\mathrm{deg}$ tilt and $-12\text{-}\mathrm{deg}$ tilt), controlling the direction of the scene ray with high precision over space and time. A PC, acting as the image processor, handles the camera image and controls the DMD pattern. The object, a neodymium magnet slice in this article, is imaged onto the DMD plane by Len2. Len1 focuses the ray reflected from the DMD onto the CCD plane. The DMD, mounted at the intersection of the two optical axes, modulates the incident ray from the object and reflects the processed ray toward the CCD at 24 deg. Based on the working principle of the DMD, the reflected ray is produced by pulse-width modulation of the mirror elements over the refresh period; thus, the reflected intensity gray level is proportional to the time that each mirror spends in the $+12\text{-}\mathrm{deg}$ tilt state. Because the CCD thereby receives different effective exposure times, the object image is modulated by the DMD.
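The pulse-width principle above means the DMD acts as a per-pixel linear attenuator of the scene. A minimal sketch of this modulation (illustrative only; a perfectly linear 8-bit level-to-duty-cycle mapping is assumed):

```python
import numpy as np

def dmd_reflectance(pattern):
    """Fraction of incident light reflected toward the CCD for an 8-bit
    DMD pattern, assuming a linear pulse-width mapping: level 255 means
    the mirror stays in the +12-deg state for the full refresh period."""
    return np.asarray(pattern, dtype=float) / 255.0

def modulated_image(scene, pattern):
    """Scene intensity after per-pixel DMD modulation."""
    return scene * dmd_reflectance(pattern)

scene = np.array([[200.0, 255.0], [40.0, 120.0]])
pattern = np.array([[255, 128], [255, 0]])   # full, half, full, no reflection
out = modulated_image(scene, pattern)
```

A pattern value of 0 blocks the corresponding scene ray entirely, which is what allows the system to suppress a highlight before it reaches the CCD.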

### 2.1. Mapping from Digital Micromirror Device to Charge Coupled Device

The DMD and CCD are both perpendicular to the primary optical axis of Len1, which is composed of five lenses. ${\beta}_{1}$ denotes the paraxial magnification of Len1, which ranges from 0.5 to 2. Note that there are three possibilities for mapping from the DMD to the CCD: one DMD pixel assigned to multiple CCD pixels, one DMD pixel assigned to one CCD pixel, and multiple DMD pixels assigned to one CCD pixel. The one-to-one mapping is implemented in this article. Pixel-to-pixel correspondence is accurately adjusted by utilizing the method of Ri et al.^{17} The mapping has three steps. First, we control the DMD to display a checkerboard pattern whose corner coordinates $(u,v)$ are already known. Second, the CCD captures the corresponding image corners $(x,y)$, which are imaged by Len1. Finally, the matrix $H$, which represents the spatial relationship between the CCD and the DMD, is calculated by the RANSAC algorithm; the threshold for determining when a datum fits the model is set to 0.05 pixel.
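The checkerboard-based calibration above can be sketched numerically. The paper fits $H$ with RANSAC at a 0.05-pixel threshold; the illustration below uses a plain least-squares direct linear transform on noise-free synthetic corners (no outlier rejection), and the matrix `H_true` is a made-up example:

```python
import numpy as np

def fit_homography(dmd_pts, ccd_pts):
    """Direct linear transform: fit H such that ccd ~ H * dmd in
    homogeneous coordinates.  dmd_pts, ccd_pts: (N, 2) matched corners."""
    A = []
    for (u, v), (x, y) in zip(dmd_pts, ccd_pts):
        A.append([-u, -v, -1, 0, 0, 0, x * u, x * v, x])
        A.append([0, 0, 0, -u, -v, -1, y * u, y * v, y])
    # The null vector of A (last right-singular vector) holds H's entries.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_points(H, pts):
    """Apply H to (N, 2) points, returning (N, 2) image coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Synthetic check: recover a known (made-up) homography from a 3x3 grid.
H_true = np.array([[1.1, 0.02, 5.0],
                   [0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
dmd = np.array([[u, v] for u in (0.0, 100.0, 200.0)
                for v in (0.0, 100.0, 200.0)])
ccd = map_points(H_true, dmd)
H = fit_homography(dmd, ccd)   # matches H_true up to scale
```

With real corner detections, a robust estimator such as RANSAC (as in the paper) would replace the plain least-squares fit to reject mislocated corners.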

### 2.2. Mapping from Object to Digital Micromirror Device

Under the thin-lens model, an arbitrary plane in object space is imaged to a corresponding plane in image space. The object plane is expressed as $a(x-{x}_{0})+by+cz=0$, where $({x}_{0},0,0)$ is the intersection point of the object plane and the optical axis; thus, we obtain the image plane:

## (1)

$$a(1+\frac{{x}_{0}}{f})({x}^{\prime}-\frac{f{x}_{0}}{f+{x}_{0}})+b{y}^{\prime}+c{z}^{\prime}=0.$$

In Fig. 4, the object space coordinate system is defined by taking the intersection point $({x}_{0},0,0)$ of the object plane and the optical axis as the origin, with the $w$-axis vertical, the $v$-axis horizontal, and the $u$-axis parallel to the normal vector of the object plane. The image space coordinate system is defined by taking the intersection point $[f{x}_{0}/(f+{x}_{0}),0,0]$ of the DMD plane and the optical axis as the origin, with the ${w}^{\prime}$-axis vertical, the ${v}^{\prime}$-axis horizontal, and the ${u}^{\prime}$-axis parallel to the normal vector of the DMD plane. The angle $\theta $ between $u$ and the optical axis satisfies $\mathrm{tan}\text{\hspace{0.17em}}\theta =b/a$. In the same way, the angle ${\theta}^{\prime}$ between ${u}^{\prime}$ and the optical axis satisfies $\mathrm{tan}\text{\hspace{0.17em}}{\theta}^{\prime}=b/[a(1+{x}_{0}/f)]$. The relationship between ${\theta}^{\prime}$ and $\theta $ can therefore be written as $\mathrm{tan}\text{\hspace{0.17em}}\theta =(1+{x}_{0}/f)\mathrm{tan}\text{\hspace{0.17em}}{\theta}^{\prime}$. Here ${\theta}^{\prime}$ is the incident angle at the DMD plane and is set to 24 deg to ensure that the reflected angle is 0 deg. It follows that $\theta $ depends only on ${x}_{0}$, the measurement distance, when the focal length of Len2 is fixed. The mapping of Len2 from a 2-D point in the image plane to a 2-D point in the object plane is given by

## (2)

$$\{\begin{array}{l}v=-\frac{\mathrm{cos}\text{\hspace{0.17em}}{\theta}^{\prime}}{\mathrm{cos}\text{\hspace{0.17em}}\theta}\times \frac{1}{{\beta}_{2}-\frac{{v}^{\prime}}{f}\mathrm{sin}\text{\hspace{0.17em}}{\theta}^{\prime}}{v}^{\prime}\\ w=\frac{1}{{\beta}_{2}-\frac{{v}^{\prime}}{f}\mathrm{sin}\text{\hspace{0.17em}}{\theta}^{\prime}}{w}^{\prime}\end{array},$$

where ${\beta}_{2}$ denotes the paraxial magnification of Len2.

### 2.3. Experimental Setup

The CCD in our experiment provides 8 bits per pixel (bpp) of precision in RAW mode and a resolution of $768\times 576$; each CCD pixel is $6.8\times 6.8\text{\hspace{0.17em}}\mu \text{m}$ in size. The DMD provides 8 bpp and a resolution of $684\times 608$; each mirror element is $7.6\times 7.6\text{\hspace{0.17em}}\mu \text{m}$. Len1 is set to a paraxial magnification of 1.12. After mapping from the DMD to the CCD, our DMD camera has 200,000 effective pixels. The focal length of Len2 is 100 mm. The object distance and field of view (FOV) of this optical system are determined by the distance between Len2 and the DMD.
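The Sec. 2.2 geometry can be evaluated with the setup's $f=100\text{\hspace{0.17em}}\text{mm}$ and ${\theta}^{\prime}=24\text{\hspace{0.17em}}\mathrm{deg}$. A sketch of the tilt relation and the Eq. (2) mapping (the values of ${x}_{0}$, ${\beta}_{2}$, and the test point are made up for illustration):

```python
import math

F = 100.0                        # focal length of Len2, mm
THETA_P = math.radians(24.0)     # incident angle at the DMD plane

def object_tilt(x0):
    """tan(theta) = (1 + x0/f) * tan(theta'): tilt of the in-focus
    object plane for measurement distance x0 (mm)."""
    return math.atan((1.0 + x0 / F) * math.tan(THETA_P))

def dmd_to_object(vp, wp, x0, beta2):
    """Eq. (2): map a DMD-plane point (v', w') to the object plane,
    with beta2 the paraxial magnification of Len2 (assumed here)."""
    theta = object_tilt(x0)
    denom = beta2 - (vp / F) * math.sin(THETA_P)
    v = -(math.cos(THETA_P) / math.cos(theta)) / denom * vp
    w = wp / denom
    return v, w

# On-axis sanity check: at v' = 0 the mapping reduces to w = w'/beta2.
v, w = dmd_to_object(0.0, 2.0, 300.0, 0.5)
```

Because $\theta$ grows with ${x}_{0}$, increasing the measurement distance tilts the in-focus object plane further away from the DMD orientation.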

## 3. Removing Parasitic Image

### 3.1. Point Spread Function

As shown in Fig. 1(b), the sum of direct light and scattering glare saturates the image point ${P}_{i}$ and increases the light intensity of its surrounding pixels. Scattering glare falls off rapidly away from the central point source. Direct light and the part of the scattering glare near the central point source are high-frequency components, whereas the part of the scattering glare far from bright sources is a low-frequency component. The intensity distribution of the parasitic image is usually described by a point spread function (PSF), a function of the distance from the central point source. Based on statistical observation, the PSF caused by strong highlights due to specular reflection takes the following form:

where $k$ is associated with the amplitude of the direct light and is invariant for a given central point source, $\sigma $ is the coefficient of the point spread, and $(u,v)$ is the position of the central point source. The smaller the value of $\sigma $, the lower the intensity of the scattering glare and the smaller the radius of the spread region. When the incoming rays have an angular variation, the PSF is rewritten as

## (4)

$$\left[\begin{array}{c}{r}_{\omega}(x,y)\\ {F}_{\omega}(x,y)\end{array}\right]=\left[\begin{array}{cc}\mathrm{cos}\text{\hspace{0.17em}}\omega & -\mathrm{sin}\text{\hspace{0.17em}}\omega \\ \mathrm{sin}\text{\hspace{0.17em}}\omega & \mathrm{cos}\text{\hspace{0.17em}}\omega \end{array}\right]\left[\begin{array}{c}r(x,y)\\ F(x,y)\end{array}\right]\Rightarrow \{\begin{array}{c}{r}_{\omega}(x,y)=\mathrm{cos}\text{\hspace{0.17em}}\omega \cdot \sqrt{{x}^{2}+{y}^{2}}-\mathrm{sin}\text{\hspace{0.17em}}\omega \cdot F(x,y)\\ {F}_{\omega}(x,y)=\mathrm{sin}\text{\hspace{0.17em}}\omega \cdot \sqrt{{x}^{2}+{y}^{2}}+\mathrm{cos}\text{\hspace{0.17em}}\omega \cdot F(x,y)\end{array},$$

### 3.2. Strategy

We assume that $I(x,y)$ is the intensity of the recorded image which is composed of a correction image $O(x,y)$ and a parasitic image $F(x,y)$.
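This decomposition can be illustrated on a synthetic scene. The Gaussian fall-off below is only a stand-in for the PSF of Eq. (3), using its stated parameters $k$ (amplitude), $\sigma $ (spread coefficient), and center $(u,v)$; the numeric values are made up:

```python
import numpy as np

def glare_psf(shape, k, sigma, center):
    """Hypothetical Gaussian-type spread around a central point source
    at (u, v); a stand-in for the exact form of Eq. (3)."""
    u, v = center
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r2 = (xx - u) ** 2 + (yy - v) ** 2
    return k * np.exp(-r2 / (2.0 * sigma ** 2))

O = np.full((64, 64), 50.0)                           # correction image
F = glare_psf(O.shape, k=300.0, sigma=6.0, center=(32, 32))
I = np.clip(O + F, 0, 255)                            # CCD clips at 255
```

Near the source the sum saturates the CCD, while far from the source the recorded image approaches the correction image, matching the high/low-frequency split described in Sec. 3.1.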

In our camera system, the result of processing a captured image can be fed back into the DMD as a pattern, and this process can be repeated. For the DMD, the relationship between the incident intensity and the reflected intensity is obtained using an optical power meter. From Eq. (3), one can observe that $O(x,y)$ is linearly modulated by the DMD, but $F(x,y)$ is not. Based on this property, the experimental procedure, which produces a set of recorded images ${I}_{k}(x,y)$, $k=0,1,\dots,K$, is as follows:

Step 1 Initialize the DMD pattern ${P}_{0}(u,v)$ to be a 684 by 608 matrix of 255. Obtain the intensity of the recorded image ${I}_{0}(x,y)$ that is totally reflected by the DMD.

Step 2 Based on clustering, select the threshold ${t}_{0}$ that determines the modulated region ${D}_{1}$ of the recorded image ${I}_{0}(x,y)$. The threshold ${t}_{0}$ is determined from the squared intensity differences between pixels and the cluster center. The low-frequency component of the scattering glare is segmented from the recorded image according to ${t}_{0}$, whereas the direct light and the high-frequency component of the scattering glare are almost unaffected by it.

Step 3 ${M}_{1}$ represents the corresponding region in the DMD pattern ${P}_{1}(u,v)$, which is given by

Step 4 Define the DMD pattern as

where $\alpha $ is the modulation scale factor.

Step 5 Repeat steps 2, 3, and 4 until the modulated region contains no pixels, obtaining the thresholds ${t}_{k}$, $k=0,1,\dots,K-1$, and the modulated regions ${D}_{k}$, $k=0,1,\dots,K-1$, in addition to the recorded images ${I}_{k}(x,y)$.
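Steps 1 to 5 can be sketched as a simulation loop. The camera capture and the threshold selection are stand-ins here (the paper selects ${t}_{k}$ by clustering, whereas a fixed near-saturation threshold is used below), with $\alpha =2$ as suggested in Sec. 3.2:

```python
import numpy as np

ALPHA = 2.0   # modulation scale factor alpha (Sec. 3.2)

def capture(pattern, scene):
    """Hypothetical camera model: the scene radiance is linearly
    modulated by the 8-bit DMD pattern, then clipped by the CCD at 255."""
    return np.clip(scene * pattern / 255.0, 0.0, 255.0)

def iterative_modulation(scene, threshold=250.0, max_iters=20):
    """Steps 1-5 sketch: attenuate the DMD over the still-too-bright
    region until no pixels remain above the threshold.  Returns the
    captured sequence I_k and the final DMD pattern."""
    pattern = np.full(scene.shape, 255.0)    # Step 1: all mirrors on
    captures = []
    for _ in range(max_iters):
        I = capture(pattern, scene)          # record I_k
        captures.append(I)
        region = I >= threshold              # Step 2 (fixed threshold here)
        if not region.any():                 # Step 5: stop when region empty
            break
        pattern[region] /= ALPHA             # Steps 3-4: attenuate M_k
    return captures, pattern

scene = np.full((32, 32), 80.0)
scene[12:20, 12:20] = 2000.0                 # strong specular highlight
captures, pattern = iterative_modulation(scene)
```

After a few iterations the highlight region is attenuated below saturation while the background pattern stays at full reflectance, which is the behavior the nonlinear glare term $F(x,y)$ is separated against.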

${\widehat{I}}_{k}(x,y)$ represents the intensity estimate of ${I}_{k}(x,y)$, which is inversely computed by

Considering the maximum possible brightness value of the CCD intensity level, there are two cases for solving the parasitic image, as illustrated in Fig. 5. Solid and dotted curves represent the intensities of ${I}_{k}(x,y)$ and ${\widehat{I}}_{k}(x,y)$, respectively, and the scattering glare is marked by hatching. In Fig. 5(a), the region of glare is enclosed between the boundary lines of $D(255)$ and $D({t}_{k})$, under the condition that the dashed boundary line of $D(255)$ is outside the boundary of $D({I}_{k}={\widehat{I}}_{k})$. In Fig. 5(b), the region of glare is enclosed between the boundary lines of $D({I}_{k}={\widehat{I}}_{k})$ and $D({t}_{k})$, under the condition that the dashed boundary line of $D(255)$ is inside the boundary of $D({I}_{k}={\widehat{I}}_{k})$. In the latter case, we obtain the estimate of the parasitic image:

## (9)

$${\widehat{G}}_{k}(x,y)=[{I}_{k}(x,y)-{\widehat{I}}_{k}(x,y)]{\tilde{W}}_{k}(x,y)\in [D({t}_{k})-D({I}_{k}={\widehat{I}}_{k})].$$

That $D[({t}_{k+1}-{t}_{k})\alpha +{t}_{k}]>D({I}_{k}={\widehat{I}}_{k})$ is a necessary condition for the global parasitic image:

## (10)

$$\widehat{G}(x,y)=\sum _{k}^{K-1}{\widehat{G}}_{k}^{D({t}_{k})-D[({t}_{k+1}-{t}_{k})\alpha +{t}_{k}]}.$$

Therefore, all selected thresholds in the experiment should be equal:

In the other case, where the dashed line of $D(255)$ is outside the boundary of $D({I}_{k}={\widehat{I}}_{k})$, we obtain the estimate of the parasitic image:

## (12)

$${\widehat{G}}_{k}(x,y)=[{I}_{k}(x,y)-{\widehat{I}}_{k}(x,y)]{\tilde{W}}_{k}(x,y)\in [D({t}_{k})-{D}_{k}(255)].$$

That $D[({t}_{k+1}-{t}_{k})\alpha +{t}_{k}]>{D}_{k}(255)$ is a necessary condition for the global parasitic image:

## (13)

$$\widehat{G}(x,y)=\sum _{k}^{K-1}{\widehat{G}}_{k}^{D[({t}_{k}-{t}_{k-1})\alpha +{t}_{k-1}]-{D}_{k}(255)}.$$

Therefore, all selected thresholds in the experiment should form a monotonically decreasing sequence:

From Eq. (5), we composite the individual subtracted captures together to form a complete image of the scene:

## (15)

$$O(x,y)=\sum _{k}^{K}[{I}_{k}{\tilde{W}}_{k}-{\widehat{G}}_{k}^{{D}_{k}({t}_{k})-{D}_{k+1}({t}_{k+1})}].$$

The DMD enables radiometric modulation of the imaged scene rays with very high precision and physically limits the amount of scattering glare created in the camera. Moreover, as the highlight intensity falls off, the signal-to-noise ratio (SNR) increases. A large $\alpha $ minimizes the highlight and increases the SNR, but an image sequence (IS) ${I}_{k}(x,y)$ is then needed to record the global scene. The regions $D[({t}_{k+1}-{t}_{k})\alpha +{t}_{k}]$ should be as large as possible, which creates a tradeoff between the SNR and the integrity of the estimated parasitic image. Based on empirical observation, $\alpha =2$ is suitable for our experiment.

## 4. Implementation Results

We provide two examples showing the successful application of the proposed method. The object is mounted in the working scene of the DMD camera in a dark room, and projected light illuminates the object at a constant angle from the right side. The first example is the removal of the parasitic image on a metal slice, as shown in Fig. 2. Highly reflective light due to specular reflection produces a parasitic image on the left edge of the metallic slice and covers the geometric and textural information of the metallic edge, which is the stamping region in the fabrication process. After the parasitic image is removed using the preceding strategy, the correction image is given in Fig. 6. Figure 6(a) shows the recorded image, in which the edge of the metallic slice is covered by the parasitic image. Figure 6(b) shows the correction image, in which the parasitic image has been removed by our approach; notice that the edge of the metallic slice is visible. Figure 6(c) shows the recovered parasitic image. Figure 6(d) depicts the 2-D luminance distribution of the parasitic image, the superposition of the central point sources and the spreading components.

The second example is the removal of the parasitic image on a metal hemisphere, as shown in Fig. 7. Figure 7(a) shows the imaging scene of the metal hemisphere. The dashed box marks the close-up region in which the geometric and textural information of the metal hemisphere is covered by the highly reflective light. Figures 7(b), 7(c), and 7(d) are the close-up image, correction image, and parasitic image, respectively. The luminance estimate of the parasitic image, a by-product of the preceding strategy, is shown in Fig. 7(e).

Experimental results show that the proposed approach successfully removes parasitic images on metal surfaces of different shapes and sizes. Without multiexposure imaging or multi-illumination, we can recover nearly full-resolution information with the SLM strategy. The DMD camera achieves its flexibility through a programmable array of micromirrors. With our method, the highlight is reduced before image formation; as the SNR improves, a high-quality image is obtained. A simple algorithm comprising modulated-area recognition, precise region mapping, and separation of the parasitic and correction images effectively decreases the processing time on the host processor.

## 5. Limitations

Our method does, however, suffer from limitations in precision and application. First, it requires knowledge of the exact correspondence between the CCD and the DMD. Second, to achieve a high-accuracy mapping from the DMD to the CCD, the depth of field of our setup must be limited to a small range; moreover, the high intensity of the central point source is not completely eliminated in our experiment. Third, our method can handle the highlight due to specular reflection, but the dynamic range of the DMD camera limits the removable parasitic image. The relationship between the DMD pixel digital value and the reflectivity is given by

where $P{o}_{D}$ is the measured optical power, $Pi$ is the optical power of the incident light, and $D\in [0,{D}_{\mathrm{max}}]$ is the DMD level.

Thus, we define the maximum removable parasitic image as

where $g(C)$ denotes the relational function between the CCD pixel digital value and the corresponding irradiance on the CCD pixels, and $C\in [0,{C}_{\mathrm{max}}]$ is the CCD level.

## 6. Conclusion

A parasitic image created by strong highlights due to specular reflection covers useful information and reduces image contrast, and removing it is a widespread requirement in science, medicine, and photography. In this article, a DMD camera composed of a DMD, a CPU, and a CCD is developed to achieve programmable imaging. We have presented a new method that removes the parasitic image from an optical system by iterative modulation with the DMD camera. The method also yields an estimate of the parasitic image, providing a novel pathway for analyzing and evaluating parasitic images in optical systems. Experimental results show that parasitic images on metal surfaces of different shapes and sizes are successfully removed.

## Acknowledgments

This work was supported by the National Natural Science Foundation of China (grant no. 51275350), Tianjin Natural Science Foundation (grant no. 12JCYBJC11000), and Doctoral Fund of Ministry of Education of China (grant no. 20110032110045).

## References

1. S. A. Shafer, “Using color to separate reflection components,” Color Res. Appl. 10(4), 210–218 (1985). http://dx.doi.org/10.1002/(ISSN)1520-6378

2. A. Artusi, F. Banterle, and D. Chetverikov, “A survey of specularity removal methods,” Comput. Graph. Forum 30(8), 2208–2230 (2011). http://dx.doi.org/10.1111/cgf.2011.30.issue-8

3. R. Bajcsy, S. Lee, and A. Leonardis, “Detection of diffuse and specular interface reflections and inter-reflections by color image segmentation,” Int. J. Comput. Vision 17(3), 241–272 (1996). http://dx.doi.org/10.1007/BF00128233

4. H.-L. Shen and Q.-Y. Cai, “Simple and efficient method for specularity removal in an image,” Appl. Opt. 48(14), 2711–2719 (2009). http://dx.doi.org/10.1364/AO.48.002711

5. H.-L. Shen and Z.-H. Zheng, “Real-time highlight removal using intensity ratio,” Appl. Opt. 52(19), 4483–4493 (2013). http://dx.doi.org/10.1364/AO.52.004483

6. Q. Yang, S. Wang, and N. Ahuja, “Real-time specular highlight removal using bilateral filtering,” in Proc. European Conf. on Computer Vision (ECCV), pp. 87–100, Springer-Verlag, Berlin, Heidelberg, Germany (2010).

7. Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur, “Multiplexing for optimal lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 29(8), 1339–1354 (2007). http://dx.doi.org/10.1109/TPAMI.2007.1151

8. A. Agrawal et al., “Removing photography artifacts using gradient projection and flash-exposure sampling,” in Proc. ACM SIGGRAPH 2005 Conf., ACM Trans. Graphics, Vol. 24, pp. 828–835, Association for Computing Machinery, New York, NY (2005).

9. A. Jaehyun et al., “A multi-exposure image fusion algorithm without ghost effect,” in Proc. 2011 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 1565–1568, IEEE, New York, NY (2011).

10. B. Bitlis, P. A. Jansson, and J. P. Allebach, “Parametric point spread function modeling and reduction of stray light effects in digital still cameras,” Proc. SPIE 6498, 29–31 (2007). http://dx.doi.org/10.1117/12.715101

11. C. C. Liebe, L. Scherr, and R. Willson, “Sun-induced veiling glare in dusty camera optics,” Opt. Eng. 43(2), 493–499 (2004). http://dx.doi.org/10.1117/1.1635835

12. A. A. Goshtasby, “Fusion of multi-exposure images,” Image Vision Comput. 23(6), 611–618 (2005). http://dx.doi.org/10.1016/j.imavis.2005.02.004

13. S. K. Nayar, V. Branzoi, and T. E. Boult, “Programmable imaging: towards a flexible camera,” Int. J. Comput. Vision 70(1), 7–22 (2006). http://dx.doi.org/10.1007/s11263-005-3102-6

14. S. Ri, M. Fujigaki, and Y. Morimoto, “Single-shot three-dimensional shape measurement method using a digital micromirror device camera by fringe projection,” Opt. Eng. 48(10), 103605 (2009). http://dx.doi.org/10.1117/1.3250197

15. M. Ankit, R. Ramesh, and T. Jack, “Agile spectrum imaging: programmable wavelength modulation for cameras and projectors,” Comput. Graph. Forum 27(2), 709–717 (2008). http://dx.doi.org/10.1111/j.1467-8659.2008.01169.x

16. A. A. Adeyemi, N. Barakat, and T. E. Darcie, “Applications of digital micro-mirror devices to digital optical microscope dynamic range enhancement,” Opt. Express 17(3), 1831–1843 (2009). http://dx.doi.org/10.1364/OE.17.001831

17. S. Ri et al., “Accurate pixel-to-pixel correspondence adjustment in a digital micromirror device camera by using the phase-shifting moiré method,” Appl. Opt. 45(27), 6940–6946 (2006). http://dx.doi.org/10.1364/AO.45.006940

## Biography

**Shou-Bo Zhao** received his BS and MS degrees in optical engineering from Tianjin University in 2008 and 2011, respectively. He is a PhD candidate in the State Key Lab of Precision Measuring Technology and Instruments, Tianjin University. He is interested in optical metrology using image processing and computational camera development.