# Aspherical surface profile fitting based on the relationship between polynomial and inner products

26 January 2016

## Abstract
High-precision aspherical polynomial fitting is essential to image quality evaluation in optical design and optimization. Conventional fitting methods, however, cannot reach optimal fitting precision and may induce numerical ill-conditioning, such as excessively large coefficients. For this reason, a projection from polynomial equations to vector space was here proposed such that polynomial solutions could be obtained through matrix and vector operations, avoiding the problem of excessive coefficients. The Newton–Raphson iteration method was used to search for the optimal fitting spherical surface. Profile fitting tests showed that the proposed approach obtains results of high precision with small coefficient values, effectively resolving the numerical ill-conditioning phenomenon.

## Introduction

With the rapid development of optical design and manufacturing technology, aspherical profiles have seen extensive use because they introduce fewer aberrations than spherical profiles.1–4 Improving fitting precision would significantly facilitate the evaluation of image quality in optical design and optimization. This is commonly done by increasing the number of samples or using high-order polynomials. A number of articles discuss improving the precision of polynomial fitting. Because increasing the number of samples beyond a certain point does not markedly improve fitting precision,5 high-order polynomials have seen widespread use. The least square method, however, usually produces ill-conditioned Gram matrices and so introduces large errors into the solution. To solve this problem, the Gram–Schmidt process has been used to generate mutually orthogonal polynomials in the unit circle.6,7 The Gauss–Newton algorithm has been used to solve nonlinear least squares problems through iterations that reduce fitting errors, and was reported to further improve fitting precision.8,9

However, when fitting rotationally symmetric surfaces using nonorthogonal polynomials and the least square method, the peak polynomial coefficient increases with polynomial order, so that very large coefficients may be derived in some cases.10 These coefficients usually have random signs and are several orders of magnitude larger than the rise of arch, so calculating the rise of arch involves subtracting large, nearly equal numbers to obtain a much smaller one. The limited accuracy of numerical storage in computers then leads to losses of significant digits during computation. Specifically, it compromises the accuracy of the surface slope calculation, thereby rendering ray tracing inaccurate. In optical optimization processes, the loss of significant digits causes numeric instability of the optimization algorithm, which is unacceptable in practical applications.11 Although the methods listed above can improve fitting precision, they cannot prevent the loss of significant digits caused by large coefficients. For this reason, a projection from polynomial equations to vector space was proposed in this study, introducing a new fitting method that achieves the optimal quadric fitting surface with lower fitting errors while minimizing the magnitude of the aspheric coefficients.

To improve fitting precision effectively and solve the problem of excessive coefficients, it is necessary to analyze the sources of fitting errors and their relationship with the polynomial coefficients. The coefficient of each polynomial order is not directly correlated with the fitting error, so direct analysis is inefficient. Polynomials and vectors share homogeneity and additivity, their inner products are defined analogously, and in the least square calculation each vector component and modulus length relates linearly to the error; the problem can therefore be addressed by projecting polynomials onto vectors. Computing the inner products of polynomials is unavoidable when conducting profile fitting with either the linear or the nonlinear least square method. Because the definition of the fitting error is directly related to inner products, the projection from polynomials to vector space can be built on this relationship, so that the polynomials are analyzed using vector analysis methods and the polynomial problem is transformed into a vector problem.

### 2.1. Projection of Polynomials into Vector Space

The polynomial equations consisting of $n$ polynomials were here projected into $n$-dimension vector space, with each polynomial projected into the $n$-dimension vector. The polynomials are written as ${\varphi }_{1},{\varphi }_{2},\cdots ,{\varphi }_{n}$, and the corresponding vectors as ${\mathbf{v}}_{1},{\mathbf{v}}_{2},\cdots ,{\mathbf{v}}_{n}$. In order to facilitate future calculations, the following projection principles were here adopted:

## (1)

$\begin{aligned}
&(1)\quad \left({\varphi }_{i},{\varphi }_{i}\right)=\left({\mathbf{v}}_{i},{\mathbf{v}}_{i}\right),\quad i=1,2,\cdots ,n\\
&(2)\quad \left({\varphi }_{i},{\varphi }_{j}\right)=\left({\mathbf{v}}_{i},{\mathbf{v}}_{j}\right),\quad i,j=1,2,\cdots ,n,\ i\ne j\\
&(3)\quad {\mathbf{v}}_{i}={\left({m}_{i1},{m}_{i2},\cdots ,{m}_{ii},0,\cdots ,0\right)}^{T}.
\end{aligned}$
The vectors are integrated as matrix $\mathbf{M}$

## (2)

$\mathbf{M}=\left[{\mathbf{v}}_{1},{\mathbf{v}}_{2},\cdots ,{\mathbf{v}}_{n}\right].$

One algorithm obtains a matrix $\mathbf{M}$ that satisfies the projection principles through Cholesky decomposition of the Gram matrix in Eq. (3). The Cholesky decomposition yields a lower triangular matrix and an upper triangular matrix that are transposes of each other; $\mathbf{M}$ is the upper triangular factor. The polynomial coefficient matrix derived from fitting is denoted $\mathbf{A}$, and the projected vector is $\mathbf{v}$. The coefficient matrix and the corresponding vector are related as

## (3)

$\mathbf{v}=\mathbf{MA}$

## (4)

$\mathbf{A}={\mathbf{M}}^{-1}\mathbf{v},$
where $\mathbf{A}={\left({A}_{1},{A}_{2},\dots ,{A}_{n}\right)}^{T}$.
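As a concrete sketch of this projection, the Gram matrix of a sampled polynomial basis can be Cholesky-factored with NumPy; the upper triangular factor is $\mathbf{M}$, and its columns are the vectors $\mathbf{v}_i$. The basis $\varphi_j=r^{2j+4}$, the sample radii, and the term count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch: basis phi_j(r) = r^(2j+4) sampled on assumed radii.
r = np.linspace(0.1, 1.0, 200)
n = 4
Phi = np.stack([r**(2*j + 4) for j in range(n)], axis=1)  # column j samples phi_j

G = Phi.T @ Phi                # Gram matrix: G_ij = (phi_i, phi_j)
L = np.linalg.cholesky(G)      # G = L L^T with L lower triangular
M = L.T                        # upper triangular factor; column i is the vector v_i

# Projection principles hold: (v_i, v_j) = (phi_i, phi_j) because M^T M = G,
# and each v_i has zeros below its i-th component (M is upper triangular).
```

Given fitted coefficients `A`, the corresponding vector is `M @ A`, and `np.linalg.solve(M, v)` inverts the projection as in Eq. (4).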

### 2.2. Principles of Aspheric Polynomial Surface Fitting

In aspheric polynomial surface fitting, the profile expression conforming to the ISO 10110 standard is adopted:12

## (5)

$z=\frac{c{r}^{2}}{1+\sqrt{1-\left(1+k\right){c}^{2}{r}^{2} }}+\sum _{j=0}{A}_{j}{r}^{2j+4}.$
Here, $r=\sqrt{{x}^{2}+{y}^{2}}$ is the perpendicular distance from surface points to the optic axis; $z$ is the rise of the arch, i.e., the perpendicular distance between surface points to the $xy$ plane; $c$ is the curvature of surface apex; $k$ is the conical constant; and ${A}_{j}$ is the polynomial coefficient.
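As a minimal sketch, Eq. (5) can be evaluated directly; the helper name and sample values below are illustrative, not from the paper.

```python
import numpy as np

def aspheric_sag(r, c, k, A):
    """Rise of arch z(r) per Eq. (5): conic base term plus polynomial
    departure, where A[j] multiplies r^(2j+4)."""
    r2 = np.asarray(r, dtype=float)**2
    conic = c * r2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r2))
    departure = sum(Aj * r2**(j + 2) for j, Aj in enumerate(A))  # (r^2)^(j+2) = r^(2j+4)
    return conic + departure
```

For a paraboloid ($k=-1$) with no polynomial terms, the expression reduces to $z=cr^{2}/2$, a quick sanity check on the conic term.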

Traditional fitting methods first calculate the quadric surface parameters $c$ and $k$ from surface features, namely the apex curvature and the edge rise of arch, using optimal spherical surface methods.13,14 The departure $d\left(r\right)=z\left(r\right)-\frac{c{r}^{2}}{1+\sqrt{1-\left(1+k\right){c}^{2}{r}^{2}}}$ is the difference between the surface to be fitted $z\left(r\right)$ and the optimal fitting quadric surface.

The least square method is then used to fit the residuals. The polynomials are represented as ${\varphi }_{0}\left(r\right)$, ${\varphi }_{1}\left(r\right),\dots ,{\varphi }_{n-1}\left(r\right)$ in rectangular coordinates; $f\left(r\right)$ is the fitted residual function, and $W\left(r\right)$ is a weight function. The least square calculation derives the coefficients ${A}_{0},{A}_{1},\dots ,{A}_{n-1}$ that minimize $\sum {\left(d-\sum _{j}{A}_{j}{r}^{2j+4}\right)}^{2}$. The solution equations are as follows:9,10

## (6)

$\mathbf{GA}=\mathbf{b}.$
The coefficients ${A}_{0},{A}_{1},\cdots ,{A}_{n-1}$ are obtained by solving Eq. (6), where $\mathbf{G}$ is the Gram matrix, whose elements are given by Eq. (7); $\mathbf{b}$ is the right-hand-side vector, defined by $\mathbf{b}={\left[\sum d\cdot {r}^{4},\cdots ,\sum d\cdot {r}^{2n+2}\right]}^{T}$; and $\mathbf{A}={\left[{A}_{0},\cdots ,{A}_{n-1}\right]}^{T}$ is the vector of coefficients.

## (7)

${G}_{ij}=\sum {r}^{2i+4}{r}^{2j+4},\quad i,j=0,1,\cdots ,n-1.$
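The normal equations (6) and (7) can be sketched as follows; the synthetic departure `d` and sample radii are assumptions chosen for illustration.

```python
import numpy as np

r = np.linspace(0.1, 1.0, 200)          # assumed sample radii
d = 0.3 * r**4 - 0.1 * r**6             # synthetic departure for illustration
n = 2                                    # number of polynomial terms

Phi = np.stack([r**(2*i + 4) for i in range(n)], axis=1)
G = Phi.T @ Phi                          # Eq. (7): G_ij = sum of r^(2i+4) r^(2j+4)
b = Phi.T @ d                            # b_i = sum of d * r^(2i+4)
A = np.linalg.solve(G, b)                # Eq. (6): G A = b
```

Because the synthetic `d` lies exactly in the span of the basis, the solve recovers its coefficients.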

### 2.3. Determination of the Optimal Quadric Fitting Surface

As stated in Sec. 2.2, during the fitting process, the optimal quadric fitting surface is determined first, and then the least square method is used to fit the residuals. Using polynomials with more terms involves more dimensions of the corresponding vector space. The data to be fitted can be said to correspond to an infinite dimensional vector. The optimal quadric fitting surface is also represented by an infinite dimensional vector, as shown in Eq. (8), where ${\mathbf{v}}_{f}$ is the corresponding vector of the surface to be fitted, and ${\mathbf{v}}_{s}$ is the vector of the optimal quadric fitting surface.

## (8)

${\mathbf{v}}_{f}={\mathrm{lim}}_{i\to \infty }{\left[{\alpha }_{0},{\alpha }_{1},\cdots ,{\alpha }_{n-1},{\alpha }_{n},\cdots ,{\alpha }_{i}\right]}^{T}\phantom{\rule{0ex}{0ex}}{\mathbf{v}}_{s}={\mathrm{lim}}_{i\to \infty }{\left[{\beta }_{0}\left(c,k\right),{\beta }_{1}\left(c,k\right),\cdots ,{\beta }_{n-1}\left(c,k\right),{\beta }_{n}\left(c,k\right),\cdots ,{\beta }_{i}\left(c,k\right)\right]}^{T},$
where ${\alpha }_{i}$ and ${\beta }_{i}$ are the values of vector components, and ${\beta }_{i}$ represents the component values of the vector of the optimal quadric fitting surface, which could be represented as the function of $c$ and $k$. The vector ${\mathbf{v}}_{d}$ corresponding to residuals is shown

## (9)

${\mathbf{v}}_{d}={\mathrm{lim}}_{i\to \infty }{\left({\alpha }_{0}-{\beta }_{0},{\alpha }_{1}-{\beta }_{1},\cdots ,{\alpha }_{i}-{\beta }_{i}\right)}^{T}.$

During the fitting process, each additional polynomial describes one additional vector component, and the fitting error ${\mathbf{v}}_{e}$ corresponds to the components in the dimensions left undescribed:

## (10)

${\mathbf{v}}_{e}={\mathrm{lim}}_{i\to \infty }{\left(0,\cdots ,{\alpha }_{n}-{\beta }_{n},{\alpha }_{n+1}-{\beta }_{n+1},\cdots ,{\alpha }_{i}-{\beta }_{i}\right)}^{T}.$

When the components of vector ${\mathbf{v}}_{e}$ beyond the first $n$ dimensions have minimal modulus values, the modulus length of ${\mathbf{v}}_{e}$ is minimal and the fitting precision is maximal. It can therefore be deduced that there is an optimal quadric fitting surface that yields the minimal modulus of the undescribed components.

Determining the optimal quadric fitting surface is a nonlinear least square problem, commonly solved using the Gauss–Newton algorithm. Its principle is to linearize the base function by first-order Taylor expansion, apply the linear least square method, and obtain the final result upon iteration convergence. However, this method has several drawbacks: its local quadratic convergence holds only for zero-residual fitting problems, convergence is slow for small-residual problems, and it does not converge for large-residual problems. Here, the residual refers to the high-order terms discarded in the Taylor expansion. For this reason, the Newton–Raphson method was used in this study to perform a second-order Taylor expansion based on the defining formula of the fitting residual. Because one more order of the Taylor expansion is retained than in the Gauss–Newton method, more information about the function is preserved, so the method shows good convergence. The procedure is as follows:

The square sum $s$ of the fitting residual $d$ is defined by $d=z-\frac{c{r}^{2}}{1+\sqrt{1-\left(1+k\right){c}^{2}{r}^{2}}}-\sum _{j=0}{A}_{j}{r}^{2j+4}$ and $s=\sum {d}^{2}$. The target of the algorithm is to find $c$ and $k$ satisfying $\frac{\partial s}{\partial c}=\frac{\partial s}{\partial k}=0$. First, initial values of $c$ and $k$ are obtained by conventional methods. Then Eq. (11) is iterated to obtain new values of $c$ and $k$.

## (11)

$\left[\begin{array}{c}{c}_{n+1}\\ {k}_{n+1}\end{array}\right]=\left[\begin{array}{c}{c}_{n}\\ {k}_{n}\end{array}\right]-{\left[\begin{array}{cc}\frac{{\partial }^{2}s}{\partial {c}^{2}}& \frac{{\partial }^{2}s}{\partial c\partial k}\\ \frac{{\partial }^{2}s}{\partial c\partial k}& \frac{{\partial }^{2}s}{\partial {k}^{2}}\end{array}\right]}^{-1}\left[\begin{array}{c}\frac{\partial s}{\partial c}\\ \frac{\partial s}{\partial k}\end{array}\right].$

Because, in the fitting process, the ${A}_{j}$ are determined once $c$ and $k$ are determined, the ${A}_{j}$ are functions of $c$ and $k$.

## (12)

$\frac{\partial s}{\partial c}=\sum 2d\left(\frac{\partial d}{\partial c}-\sum _{i=0}^{n-1}\frac{\partial {a}_{i}}{\partial c}{r}^{2i+4}\right),\phantom{\rule{0ex}{0ex}}\left[\begin{array}{c}\frac{\partial {a}_{0}}{\partial c}\\ \vdots \\ \frac{\partial {a}_{n-1}}{\partial c}\end{array}\right]={\left[\begin{array}{ccc}\left({\varphi }_{0},{\varphi }_{0}\right)& \cdots & \left({\varphi }_{0},{\varphi }_{n-1}\right)\\ \vdots & \ddots & \vdots \\ \left({\varphi }_{n-1},{\varphi }_{0}\right)& \cdots & \left({\varphi }_{n-1},{\varphi }_{n-1}\right)\end{array}\right]}^{-1}\left[\begin{array}{c}\left(\frac{\partial d}{\partial c},{r}^{4}\right)\\ \vdots \\ \left(\frac{\partial d}{\partial c},{r}^{2n+2}\right)\end{array}\right].$

The partial derivatives with respect to $k$ are obtained with a similar method. The second-order partial derivative matrix is then built from the first-order partial derivatives. New values of $c$ and $k$ are obtained from the iteration formula until the termination criterion is reached.

### 2.4. Methods to Decrease Polynomial Coefficients

Once the optimal quadric fitting surface is determined, the least square method is used to find the polynomial coefficients. If the polynomials have many terms, or the surface spatial frequency is high, the fitting results may contain coefficients several orders of magnitude larger than the rise of arch. This is unfavorable in practical applications, so it is necessary to decrease the polynomial coefficients without overly affecting the fitting precision. The least square method guarantees the minimal root-mean-square (RMS) error, so any subsequent change that makes the coefficients smaller necessarily increases the RMS. High fitting precision and small coefficients therefore cannot be obtained simultaneously; small coefficients can be obtained only when a slightly lower fitting precision is acceptable. In principle, proper coefficients could be found by bounding both the RMS and the range of the polynomial coefficients, but solving the resulting multivariable quadratic inequality is difficult and time consuming. Instead, the size of the coefficients can be reduced by controlling the relationship between the variation in the polynomial coefficients and the introduced error.

As stated above, an additive error is introduced when the polynomial coefficients are reduced. A change $\mathrm{\Delta }\mathbf{A}$ in the polynomial coefficient matrix introduces the error $E$:

## (13)

$E={‖\sum \mathrm{\Delta }{A}_{i}{\varphi }_{i}‖}_{2}={‖\sum \mathrm{\Delta }{A}_{i}{\mathbf{v}}_{i}‖}_{2}={‖\mathbf{M}\mathrm{\Delta }\mathbf{A}‖}_{2}.$
Here, $\mathrm{\Delta }\mathbf{A}={\left(\mathrm{\Delta }{A}_{1},\mathrm{\Delta }{A}_{2}\dots \mathrm{\Delta }{A}_{n}\right)}^{T}$, and $\mathrm{\Delta }{A}_{i}$ is the coefficient variation of the $i$th polynomial. It follows from Eq. (4) that $\mathrm{\Delta }\mathbf{A}={\mathbf{M}}^{-1}\mathrm{\Delta }\mathbf{v}$. In general, the absolute values of the row elements of ${\mathbf{M}}^{-1}$ increase from left to right, so the last component of $\mathrm{\Delta }\mathbf{v}$ makes the largest contribution to the coefficients. In other words, for the same error, allocating the error to the last component of the vector changes the polynomial coefficients to the greatest extent.

For this reason, the unit error vector is written as $\mathbf{u}={\left(0\;0\dots 1\right)}^{T}$, and the variation of the polynomial coefficients is $\mathbf{B}=k{\mathbf{M}}^{-1}\mathbf{u}$ when $k$ times the unit error is introduced. The coefficients of the new polynomials are thus $\mathbf{A}+\mathbf{B}$. The introduced error can be determined by taking the minimal square sum of the polynomial coefficients as the standard for coefficient reduction:

## (14)

$k=-\frac{\left(\mathbf{A},\mathbf{B}\right)}{\left(\mathbf{B},\mathbf{B}\right)}=-\frac{\left(\mathbf{A},{\mathbf{M}}^{-1}\mathbf{u}\right)}{\left({\mathbf{M}}^{-1}\mathbf{u},{\mathbf{M}}^{-1}\mathbf{u}\right)}.$

Finally, the coefficient matrix of new polynomials can be generated

## (15)

${\mathbf{A}}^{\prime }=\mathbf{A}+k{\mathbf{M}}^{-1}\mathbf{u}=\mathbf{A}-\frac{\left(\mathbf{A},{\mathbf{M}}^{-1}\mathbf{u}\right)}{\left({\mathbf{M}}^{-1}\mathbf{u},{\mathbf{M}}^{-1}\mathbf{u}\right)}{\mathbf{M}}^{-1}\mathbf{u}.$

The fitting error of the new polynomials can be calculated from the additive error and the fitting error of the original polynomials, without reference to the sample points' coordinates. From the perspective of vector dimensions, the fitting error of the original polynomials results from the finite number of terms: the dimension of the corresponding vector is finite, leaving high-dimension features undescribed. The vector corresponding to the additive error and that of the original fitting error occupy different, mutually orthogonal dimensions, which allows the two errors to be combined by root-mean-square synthesis.
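Equations (14) and (15) amount to a one-dimensional least-squares shrinkage of the coefficient vector, sketched below; the numeric values of $\mathbf{M}$ and $\mathbf{A}$ are illustrative only, not from the paper.

```python
import numpy as np

def reduce_coefficients(A, M):
    """Shrink the coefficient vector by spending error along the last vector
    component. Implements Eqs. (14)-(15); also returns the introduced error
    E = ||M (k M^-1 u)||_2 = |k|, per Eq. (13)."""
    u = np.zeros(len(A)); u[-1] = 1.0     # unit error vector (0, ..., 0, 1)^T
    B = np.linalg.solve(M, u)             # M^-1 u: coefficient change per unit error
    kk = -np.dot(A, B) / np.dot(B, B)     # Eq. (14): minimizes ||A + kk*B||^2
    return A + kk * B, abs(kk)            # Eq. (15) and the introduced error

# Illustrative upper-triangular M and coefficient vector A (not from the paper)
M = np.array([[2.0, 1.0, 1.0],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 0.25]])
A = np.array([3.0, -2.0, 5.0])
A_new, E = reduce_coefficients(A, M)
```

Here ${\mathbf{M}}^{-1}\mathbf{u}=(-1,-2,4)^{T}$, so $k=-1$: the squared sum of the coefficients drops from 38 to 17 at the cost of a unit introduced error $E=1$.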

## Examples

This section illustrates the method with examples. The aplanatic Descartes oval profile is used to validate feasibility, considering both the departure part and the optimal quadric fitting surface part. Two further examples, an extreme and a mild aspheric profile, provide comprehensive validation of the method. In the calculation procedure, Eq. (15) was applied after the optimal quadric fitting surface was determined using the proposed method and the polynomial coefficients were solved with the least square method.

### 3.1. Aplanatic Descartes Oval Profile

The aplanatic Descartes oval profile used by Forbes was chosen here to validate the feasibility of the method.10 Standard aspheric polynomial fitting was first performed using the conventional method. Results are shown in Table 1; the Descartes oval curve and the fitting errors are shown in Fig. 1. As stated in Forbes's article, standard aspheric polynomials are nonorthogonal, and using high-order terms in the fitting process affects the coefficients of low-order terms, so more terms produce larger coefficients. The number of significant figures therefore decreases when fitting with more high-order polynomials, leading to nonsignificant gains in precision. In this paper, further analysis and deduction were conducted, and we demonstrate that the conventional results are optimal neither in precision nor in coefficient magnitude. The effectiveness of the proposed method was proved under two conditions: (1) excessive coefficients and (2) suboptimal fitting precision from the optimal quadric fitting surface.

## Table 1

Fitting result using aspheric polynomials for Descartes oval.

| | Traditional, five terms | Traditional, seven terms | Traditional, eight terms | Improvement for departure, eight terms | Improvement for optimal spherical fitting surface, eight terms |
|---|---|---|---|---|---|
| $c$ (${\mathrm{mm}}^{-1}$) | $-0.5$ | $-0.5$ | $-0.5$ | $-0.5$ | $-0.5$ |
| $k$ | $-0.843716$ | $-0.843716$ | $-0.843716$ | $-0.843716$ | $-0.759214$ |
| ${A}_{1}$ (mm) | $2.57757\times{10}^{-1}$ | $2.69713\times{10}^{-1}$ | $2.73894\times{10}^{-1}$ | $2.71357\times{10}^{-1}$ | $4.70730\times{10}^{-1}$ |
| ${A}_{2}$ (mm) | $-5.3405\times{10}^{-2}$ | $-1.45314\times{10}^{-1}$ | $-2.19449\times{10}^{-1}$ | $-1.74068\times{10}^{-1}$ | $-7.2462\times{10}^{-2}$ |
| ${A}_{3}$ (mm) | $-4.33431\times{10}^{-1}$ | $-2.72898\times{10}^{-1}$ | $2.28076\times{10}^{-1}$ | $-8.1338\times{10}^{-2}$ | $8.9275\times{10}^{-2}$ |
| ${A}_{4}$ (mm) | $5.73476\times{10}^{-1}$ | $8.36713\times{10}^{-1}$ | $-8.81902\times{10}^{-1}$ | $1.89150\times{10}^{-1}$ | $-4.788\times{10}^{-3}$ |
| ${A}_{5}$ (mm) | $-3.44232\times{10}^{-1}$ | $-1.45812\times{10}^{0}$ | $1.834551\times{10}^{0}$ | $-2.36148\times{10}^{-1}$ | $-2.7192\times{10}^{-2}$ |
| ${A}_{6}$ (mm) | | $1.19517\times{10}^{-1}$ | $-2.374638\times{10}^{0}$ | $-1.09050\times{10}^{-1}$ | $7.1639\times{10}^{-2}$ |
| ${A}_{7}$ (mm) | | $-4.25257\times{10}^{-1}$ | $1.622536\times{10}^{0}$ | $3.10880\times{10}^{-1}$ | $-5.6685\times{10}^{-2}$ |
| ${A}_{8}$ (mm) | | | $-4.83067\times{10}^{-1}$ | $-1.70768\times{10}^{-1}$ | $2.2460\times{10}^{-2}$ |
| RMS (mm) | $4.36995\times{10}^{-5}$ | $3.5701\times{10}^{-6}$ | $9.8508\times{10}^{-7}$ | $2.6815\times{10}^{-6}$ | $1.2038\times{10}^{-8}$ |

## Fig. 1

Descartes oval (pink line “a”: Descartes oval, green line “b”: the error of improvement method for departure, blue line “c”: the error of traditional method, red line “d”: the error of improvement method for optimal spherical fitting surface).

The data in the table show that, with eight polynomial terms, the coefficients are large: two orders of magnitude greater than the maximal value of the departure. Such large coefficients affect the system optimization process and therefore need to be decreased. Following the proposed method, and taking the eight-term aspheric polynomials as an example, the polynomials were projected into eight-dimensional vector space, and the eight vectors ${\mathbf{v}}_{1},{\mathbf{v}}_{2}\dots {\mathbf{v}}_{8}$ are represented as the matrix $\mathbf{M}$

$M=\left[\begin{array}{cccccccc}4.071e+0& 3.378e+0& 2.898e+0& 2.546e+0& 2.278e+0& 2.066e+0& 1.895e+0& 1.754e+0\\ 0& 6.239e-1& 9.275e-1& 1.078e+0& 1.150e+0& 1.179e+0& 1.185e+0& 1.176e+0\\ 0& 0& 1.245e-1& 2.590e-1& 3.747e-1& 4.675e-1& 5.398e-1& 5.953e-1\\ 0& 0& 0& 2.733e-2& 7.217e-2& 1.242e-1& 1.769e-1& 2.269e-1\\ 0& 0& 0& 0& 6.271e-3& 1.997e-2& 3.969e-2& 6.345e-2\\ 0& 0& 0& 0& 0& 1.474e-3& 5.478e-3& 1.232e-2\\ 0& 0& 0& 0& 0& 0& 3.513e-4& 1.489e-3\\ 0& 0& 0& 0& 0& 0& 0& 8.435e-5\end{array}\right].$

The relationship between the coefficient variation and the introduced error can then be controlled to reduce the coefficient size. The results showed that the coefficients decreased to the same level as in the five-term fit, while the fitting precision remained close to that of the seven-term fit.

Similarly, based on the deduction in Sec. 2.3, the Newton–Raphson iteration method was used to search for the optimal fitting spherical surface. Results are shown in Table 1. The optimal fitting spherical surface is distinct from that produced by conventional methods. After determining the optimal quadric fitting surface with the Newton–Raphson method, the polynomial coefficients were solved, and the fitting error decreased dramatically.

### 3.2. Comprehensive Validation Using an Aspheric Surface

The example given above validated the new method with respect to the departure part and the optimal quadric fitting surface part. The following example performs comprehensive validation of both parts together. The surface shown in Fig. 2 is the measured contour of a fabricated cell-phone lens. It was fitted with the proposed method, and the results are shown in Table 2. They demonstrate that, owing to the change in the optimal quadric fitting surface, the vector corresponding to the departure has high-dimensional components with lower values, so more surface information is described by the polynomials and high fitting precision is retained. The relationship between coefficient variation and introduced error was then used to change the coefficient range significantly while keeping the error nearly unchanged. The results also show that the proposed method is effective for a measured profile as well as for the nominal starting values of a surface.

## Fig. 2

An aspheric surface profile used in cellphone lens (pink line “a”: the aspheric surface, blue line “b”: the error of traditional method, green line “c”: the error of the proposed method).

## Table 2

Fitting result using aspheric polynomials for an aspheric surface.

| | Traditional, five terms | Traditional, eight terms | Proposed, eight terms |
|---|---|---|---|
| $c$ (${\mathrm{mm}}^{-1}$) | $-6.64404$ | $-6.64404$ | $-4.83301$ |
| $k$ | $-2.42266\times{10}^{1}$ | $-2.42266\times{10}^{1}$ | $-1.50273\times{10}^{0}$ |
| ${A}_{1}$ (mm) | $-1.22806\times{10}^{1}$ | $-2.89211\times{10}^{1}$ | $1.66517\times{10}^{-1}$ |
| ${A}_{2}$ (mm) | $4.92324\times{10}^{1}$ | $2.37090\times{10}^{2}$ | $7.30280\times{10}^{0}$ |
| ${A}_{3}$ (mm) | $-7.79851\times{10}^{1}$ | $-9.09388\times{10}^{2}$ | $-2.00049\times{10}^{1}$ |
| ${A}_{4}$ (mm) | $5.59258\times{10}^{1}$ | $1.97371\times{10}^{3}$ | $2.01305\times{10}^{1}$ |
| ${A}_{5}$ (mm) | $-1.50155\times{10}^{1}$ | $-2.54711\times{10}^{3}$ | $-3.30800\times{10}^{-2}$ |
| ${A}_{6}$ (mm) | | $1.93555\times{10}^{3}$ | $-1.56877\times{10}^{1}$ |
| ${A}_{7}$ (mm) | | $-7.99407\times{10}^{2}$ | $1.16211\times{10}^{1}$ |
| ${A}_{8}$ (mm) | | $1.38363\times{10}^{2}$ | $-2.74648\times{10}^{0}$ |
| RMS (mm) | $2.39863\times{10}^{-2}$ | $7.51581\times{10}^{-3}$ | $6.21144\times{10}^{-4}$ |

### 3.3. Validation Using a Mild Aspheric Surface

The examples in Secs. 3.1 and 3.2 are extreme aspheric surfaces. In this section, the proposed method is applied to a mild aspheric profile, the contour of an imaging lens. The fitting results, shown in Fig. 3 and Table 3, confirm the effectiveness of the method.

## Fig. 3

The aspheric surface of a mild profile (red line “a”: the aspheric surface, blue line “b”: the error of traditional method, green line “c”: the error of the proposed method).

## Table 3

Fitting result using aspheric polynomials for an aspheric surface of mild profile.

| | Traditional, five terms | Traditional, eight terms | Proposed, eight terms |
|---|---|---|---|
| $c$ (${\mathrm{mm}}^{-1}$) | $-1.00093$ | $-1.00093$ | $-0.737635$ |
| $k$ | $-2.18644\times{10}^{0}$ | $-2.18644\times{10}^{0}$ | $-6.60064\times{10}^{1}$ |
| ${A}_{1}$ (mm) | $1.37466\times{10}^{0}$ | $3.11691\times{10}^{0}$ | $-1.09348\times{10}^{0}$ |
| ${A}_{2}$ (mm) | $-4.45997\times{10}^{0}$ | $-2.19149\times{10}^{1}$ | $3.05223\times{10}^{0}$ |
| ${A}_{3}$ (mm) | $5.95359\times{10}^{0}$ | $7.45048\times{10}^{1}$ | $-4.91347\times{10}^{0}$ |
| ${A}_{4}$ (mm) | $-3.66472\times{10}^{0}$ | $-1.43932\times{10}^{2}$ | $3.33001\times{10}^{0}$ |
| ${A}_{5}$ (mm) | $8.48571\times{10}^{-1}$ | $1.65021\times{10}^{2}$ | $1.09858\times{10}^{0}$ |
| ${A}_{6}$ (mm) | | $-1.11146\times{10}^{2}$ | $-3.27368\times{10}^{0}$ |
| ${A}_{7}$ (mm) | | $4.06120\times{10}^{1}$ | $1.94461\times{10}^{0}$ |
| ${A}_{8}$ (mm) | | $-6.21157\times{10}^{0}$ | $-3.93294\times{10}^{-1}$ |
| RMS (mm) | $3.22914\times{10}^{-3}$ | $1.20238\times{10}^{-3}$ | $1.874801\times{10}^{-4}$ |

## Conclusions

When surface fitting is performed with the least square method, the polynomial coefficients often become far larger than the maximal rise of arch of the fitted data, which causes difficulties in optical system design and optimization. In this paper, a projection from polynomials to vectors was proposed, transforming the polynomial problem into a vector problem. Under the proposed projection principles, the mapping between polynomials and vectors determines the relationship between the vector and the polynomial coefficients, and in turn the relationship between introduced errors and polynomial coefficient variation. The polynomial coefficients were reduced under the standard of minimal square sum. For the first time, the Newton–Raphson method was used to search for the optimal fitting surface, which not only improved the fitting precision but also converged better than the Gauss–Newton method. Fitting tests on various surface profiles proved the proposed method effective in terms of both fitting error and the range of the coefficients, making it significant in practical applications. The proposed algorithm has been programmed in MATLAB and is available for a Dynamic Data Exchange connection to optical design software.

## Acknowledgments

This work was supported in part by the grant from the National Natural Science Foundation of China (Nos. 61275003, 51327005, and 91420203).

## References

1. A. Yoshikazu et al., "On-machine measurement of aspherical surface profile," Nanotechnol. Precis. Eng. 2(3), 210–216 (2004).

2. D. Huang et al., "Shape measurement of aspheric plastic lens with large angle," Chin. Opt. Lett. 3(9), 516–519 (2005).

3. R. Q. Wu et al., "Analysis of athermalizing performance of thermal infrared with Cassegrain antenna," Optik 121, 1904–1907 (2010). http://dx.doi.org/10.1016/j.ijleo.2009.05.012

4. S. J. Ludwick et al., "Design of a rotary fast tool servo for ophthalmic lens fabrication," Prec. Eng. 23, 253–259 (1999). http://dx.doi.org/10.1016/S0141-6359(99)00017-3

5. S. L. Xie and Y. Zhong, "Research of sampling point number in wave front fitting with Zernike polynomials," Laser Technol. 35(2), 272–277 (2011).

6. D. Hilbig et al., "Fitting discrete aspherical surface sag data using orthonormal polynomials," Opt. Exp. 23(17), 22404–22413 (2015). http://dx.doi.org/10.1364/OE.23.022404

7. G. W. Forbes, "Characterizing the shape of freeform optics," Opt. Exp. 20(3), 2483–2499 (2012). http://dx.doi.org/10.1364/OE.20.002483

8. W. Sun, J. W. McBride and M. Hill, "A new approach to characterising aspheric surfaces," Prec. Eng. 34, 171–179 (2010). http://dx.doi.org/10.1016/j.precisioneng.2009.05.005

9. M. L. N. Gonçalves and P. R. Oliveira, "Convergence of the Gauss–Newton method for a special class of systems of equations under a majorant condition," Optimization 64(3), 577–594 (2015). http://dx.doi.org/10.1080/02331934.2013.778854

10. G. W. Forbes, "Shape specification for axially symmetric optical surfaces," Opt. Exp. 15(8), 5218–5226 (2007). http://dx.doi.org/10.1364/OE.15.005218

11. G. W. Forbes, "Robust and fast computation for the polynomials of optics," Opt. Exp. 18(13), 13851–13862 (2010). http://dx.doi.org/10.1364/OE.18.013851

12. G. H. Spencer and M. V. R. K. Murty, "General ray-tracing procedure," J. Opt. Soc. Am. 52(6), 672–678 (1962). http://dx.doi.org/10.1364/JOSA.52.000672

13. H. Aceves-Campos, "Profile identification of aspheric lenses," Appl. Opt. 37(34), 8149–8150 (1998). http://dx.doi.org/10.1364/AO.37.008149

14. Z. Zhang, "Parameter estimation techniques: a tutorial with application to conic fitting," Image Vision Comput. 15, 59–76 (1997). http://dx.doi.org/10.1016/S0262-8856(96)01112-2

## Biography

Xuemin Cheng is an associate professor at the Graduate School at Shenzhen, Tsinghua University. She received her PhD in optical engineering from Beijing Institute of Technology, China, in 2004. She is the author of 56 peer-reviewed publications related to optical design, vision, and image quality assessment. Her current research interests include optical design and engineering, including imaging system design, illumination system design, and optical modeling and simulation. She is a member of SPIE.

Yikang Yang received his bachelor's degree in precise instrumentation from Tianjin University, Tianjin, China, in 2013. He is currently a master's degree candidate at Tsinghua University.

Qun Hao obtained her PhD in optical instruments from Tsinghua University, Beijing, China in 1998. She is currently a professor in the Department of Optoelectronics of Beijing Institute of Technology (BIT) in Beijing, China, and served as dean of the School of Optoelectronics at BIT.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Xuemin Cheng, Yikang Yang, Qun Hao, "Aspherical surface profile fitting based on the relationship between polynomial and inner products," Optical Engineering 55(1), 015105 (26 January 2016). https://doi.org/10.1117/1.OE.55.1.015105