## 1. Introduction

With the rapid development of optical design and manufacturing technology, aspherical profiles have seen extensive use because they produce fewer aberrations than spherical profiles.^{1–4} Improving the precision of profile fitting would significantly facilitate the evaluation of image quality in optical design and optimization. This is commonly done by increasing the number of samples or using high-order polynomials, and a number of articles discuss improving the precision of polynomial fitting. Because increasing the number of samples beyond a certain point does not markedly improve fitting precision,^{5} high-order polynomials have seen widespread use. However, the least square method usually produces ill-conditioned Gram matrices and so introduces large errors into the solution. To address this problem, the Gram–Schmidt process has been used to generate mutually orthogonal polynomials on the unit circle.^{6,7} The Gauss–Newton algorithm has been used to solve nonlinear least squares problems iteratively to reduce fitting errors, and this method was reported to further improve fitting precision.^{8,9}

However, when rotationally symmetric surfaces are fitted using nonorthogonal polynomials and the least square method, the peak polynomial coefficient grows with the polynomial order, so very large coefficients may be derived in some cases.^{10} These coefficients are usually of essentially random sign and several orders of magnitude larger than the rise of arch, so calculating the rise of arch involves subtracting large, nearly equal numbers to obtain a much smaller one. Because of the limited accuracy of numerical storage in computers, significant digits are lost during this computation. Specifically, the loss compromises the accuracy of the calculated surface slope, rendering ray tracing inaccurate. In optical optimization processes, the loss of significant digits causes numeric instability of the optimization algorithm, which is unacceptable in practical applications.^{11} Although the methods listed above can improve fitting precision, they cannot prevent the loss of significant digits caused by large coefficients. Therefore, a projection from polynomial equations to vector space is proposed in this study, introducing a new fitting method that achieves the optimal quadric fitting surface with smaller fitting errors while minimizing the magnitude of the aspheric coefficients.
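The digit loss can be seen directly in a small numeric sketch. It reuses the eight-term coefficients that appear later in Table 2 purely for illustration, and evaluates the polynomial sum in single precision as a stand-in for any computation that carries fewer guard digits than the cancellation demands:

```python
import numpy as np

# Eight-term coefficients from Table 2 (traditional method); individual
# terms A_j * r**(2j+4) reach several hundred mm while the sag they sum
# to is well under 1 mm, so the leading digits cancel.
A = np.array([-2.89211e1, 2.37090e2, -9.09388e2, 1.97371e3,
              -2.54711e3, 1.93555e3, -7.99407e2, 1.38363e2])
r = 0.9
powers = r ** (2 * np.arange(8) + 4)

sag64 = float(np.dot(A, powers))                              # double precision
sag32 = float(np.dot(A.astype(np.float32), powers.astype(np.float32)))

# The largest term is ~7e2 while the sum is ~-0.2: more than three
# orders of magnitude cancel, and single precision visibly corrupts
# the result.
print(np.max(np.abs(A * powers)), sag64, abs(sag64 - sag32))
```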

To improve fitting precision effectively and avoid excessive coefficients, it is necessary to analyze the sources of fitting error and their relationship with the polynomial coefficients. The coefficient of each polynomial order is not directly correlated with the fitting error, so direct analysis is inefficient. Instead, the problem can be solved by projecting the polynomials onto vectors, exploiting the shared homogeneity and additivity of polynomials and vectors, the analogous definitions of their inner products, and the simple linear relationship between the components (and modulus) of each vector and the error in the least square calculation. Computing inner products of polynomials is unavoidable when fitting profiles with either the linear or the nonlinear least square method. Because the fitting error is defined directly in terms of inner products, a projection from polynomials to vector space can be built on this relationship, so that the polynomials can be analyzed with vector methods, transforming the polynomial problem into a vector problem.

## 2. Methods

### 2.1. Projection of Polynomials into Vector Space

The polynomial equations consisting of $n$ polynomials are projected into an $n$-dimension vector space, with each polynomial projected onto an $n$-dimension vector. The polynomials are written as ${\varphi}_{1},{\varphi}_{2},\cdots ,{\varphi}_{n}$ and the corresponding vectors as ${\mathbf{v}}_{1},{\mathbf{v}}_{2},\cdots ,{\mathbf{v}}_{n}$. To facilitate subsequent calculations, the following projection principles are adopted:

## (1)

$$({\varphi}_{i},{\varphi}_{i})=({\mathbf{v}}_{i},{\mathbf{v}}_{i}),\quad i=1,2,\cdots ,n$$

## (2)

$$({\varphi}_{i},{\varphi}_{j})=({\mathbf{v}}_{i},{\mathbf{v}}_{j}),\quad i,j=1,2,\cdots ,n,\ i\ne j$$

## (3)

$${\mathbf{v}}_{i}={({m}_{i1},{m}_{i2},\cdots ,{m}_{ii},0,\cdots ,0)}^{T}.$$

One algorithm for obtaining a matrix $\mathbf{M}$ that satisfies these projection principles is Cholesky decomposition of the Gram matrix of the polynomials. The Cholesky decomposition yields a lower triangular matrix and an upper triangular matrix that are mutually transposed; $\mathbf{M}$ is the upper triangular factor, and its columns give the vectors of Eq. (3). The polynomial coefficient matrix derived from fitting is recorded as $\mathbf{A}$, and the projected vector is $\mathbf{v}$. The coefficient matrix and the corresponding vector are related as

## (4)

$$\mathbf{v}=\mathbf{M}\mathbf{A},$$

where $\mathbf{A}={({A}_{1},{A}_{2},\dots ,{A}_{n})}^{T}$.

### 2.2. Principles of Aspheric Polynomial Surface Fitting

In aspheric polynomial surface fitting, a profile expression conforming to the ISO 10110 standard is adopted^{12}

## (5)

$$z=\frac{c{r}^{2}}{1+\sqrt{1-(1+k){c}^{2}{r}^{2}}}+\sum _{j=0}{A}_{j}{r}^{2j+4}.$$

Traditional fitting methods, based on surface features, first use the apex curvature and the edge rise of arch to calculate the quadric surface parameters $c$ and $k$ of the optimal spherical fitting surface.^{13,14} The residual $d(r)=z(r)-\frac{c{r}^{2}}{1+\sqrt{1-(1+k){c}^{2}{r}^{2}}}$ is the difference between the surface to be fitted $z(r)$ and the optimal quadric fitting surface.
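As a concrete reference, the profile of Eq. (5) can be evaluated directly. The following sketch (the function name and the spherical sanity check are ours, not from the paper) computes the sag of an even asphere in this form:

```python
import numpy as np

def asphere_sag(r, c, k, A):
    """Sag z(r) of Eq. (5): conic base term plus polynomial terms
    A[j] * r**(2j+4), j = 0, 1, ...  (illustrative helper)."""
    r = np.asarray(r, dtype=float)
    base = c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    poly = sum(Aj * r**(2 * j + 4) for j, Aj in enumerate(A))
    return base + poly

# Sanity check: with k = 0 and no polynomial terms the expression is the
# exact sag of a sphere of radius R = 1/c.
c, r = 0.1, 2.0
z = asphere_sag(r, c, 0.0, [])
R = 1.0 / c
assert abs(z - (R - np.sqrt(R**2 - r**2))) < 1e-12
```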

The least square method is then used to fit the residuals. The polynomials are represented as ${\varphi}_{0}(r),{\varphi}_{1}(r),\dots ,{\varphi}_{n-1}(r)$ in rectangular coordinates; $f(r)$ is the residual function to be fitted, and $W(r)$ is a weight function. The least square calculation derives the coefficients ${A}_{0},{A}_{1},\dots ,{A}_{n-1}$ that minimize $\sum {(d-\sum _{j}{A}_{j}{r}^{2j+4})}^{2}$. The solution is given by the normal equations^{9,10}

$$\left[\begin{array}{ccc}({\varphi}_{0},{\varphi}_{0})& \cdots & ({\varphi}_{0},{\varphi}_{n-1})\\ \vdots & \ddots & \vdots \\ ({\varphi}_{n-1},{\varphi}_{0})& \cdots & ({\varphi}_{n-1},{\varphi}_{n-1})\end{array}\right]\left[\begin{array}{c}{A}_{0}\\ \vdots \\ {A}_{n-1}\end{array}\right]=\left[\begin{array}{c}(d,{r}^{4})\\ \vdots \\ (d,{r}^{2n+2})\end{array}\right].$$
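A minimal sketch of this linear step (synthetic residual data, unit weight, and variable names of our own choosing) builds the Gram matrix of the basis $r^{2j+4}$ and solves the normal equations:

```python
import numpy as np

# Linear least-squares step of Sec. 2.2: fit a residual d(r) with the
# monomial basis r**(2j+4) by solving the normal (Gram) equations.
# The residual below is synthetic and the weight W(r) is taken as 1.
r = np.linspace(0.0, 1.0, 200)
d = 0.3 * r**4 - 0.1 * r**6           # residual to be fitted

n = 2                                  # number of polynomial terms
Phi = np.stack([r**(2 * j + 4) for j in range(n)], axis=1)
G = Phi.T @ Phi                        # Gram matrix (phi_i, phi_j)
b = Phi.T @ d                          # right-hand side (d, phi_j)
A = np.linalg.solve(G, b)              # coefficients A_0 ... A_{n-1}
print(A)                               # recovers ≈ [0.3, -0.1]
```

For larger $n$ this Gram matrix becomes badly conditioned, which is exactly the difficulty that motivates the orthogonalization and projection discussed above.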

### 2.3. Optimal Quadric Fitting Surface

As stated in Sec. 2.2, during fitting the optimal quadric fitting surface is determined first, and the least square method is then used to fit the residuals. Using polynomials with more terms corresponds to using more dimensions of the vector space; the data to be fitted correspond to an infinite dimensional vector. The optimal quadric fitting surface is likewise represented by an infinite dimensional vector, as shown in Eq. (8), where ${\mathbf{v}}_{f}$ is the vector corresponding to the surface to be fitted and ${\mathbf{v}}_{s}$ is the vector of the optimal quadric fitting surface.

## (8)

$${\mathbf{v}}_{f}={\mathrm{lim}}_{i\to \infty}{[{\alpha}_{0},{\alpha}_{1},\cdots ,{\alpha}_{n-1},{\alpha}_{n},\cdots ,{\alpha}_{i}]}^{T},\phantom{\rule{0ex}{0ex}}{\mathbf{v}}_{s}={\mathrm{lim}}_{i\to \infty}{[{\beta}_{0}(c,k),{\beta}_{1}(c,k),\cdots ,{\beta}_{n-1}(c,k),{\beta}_{n}(c,k),\cdots ,{\beta}_{i}(c,k)]}^{T}.$$

## (9)

$${\mathbf{v}}_{d}={\mathrm{lim}}_{i\to \infty}{({\alpha}_{0}-{\beta}_{0},{\alpha}_{1}-{\beta}_{1},\cdots ,{\alpha}_{i}-{\beta}_{i})}^{T}.$$

During the fitting process, each additional polynomial describes one additional vector component, and the fitting error ${\mathbf{v}}_{e}$ is proportional to the modulus of the components in the undescribed dimensions

## (10)

$${\mathbf{v}}_{e}={\mathrm{lim}}_{i\to \infty}{(0,\cdots ,{\alpha}_{n}-{\beta}_{n},{\alpha}_{n+1}-{\beta}_{n+1},\cdots ,{\alpha}_{i}-{\beta}_{i})}^{T}.$$

When the components of vector ${\mathbf{v}}_{e}$ beyond the $n$th dimension have minimal modulus, the modulus of ${\mathbf{v}}_{e}$ is minimal and the fitting precision is maximal. It follows that there is an optimal quadric fitting surface that minimizes the modulus of the undescribed components.

The determination of the optimal quadric fitting surface is a nonlinear least squares problem, commonly solved with the Gauss–Newton algorithm. Its principle is to linearize the basis function by first-order Taylor expansion, fit with the linear least square method, and take the result at iteration convergence. However, this method has several drawbacks: it converges quadratically only for zero-residual fitting problems, converges slowly for small-residual problems, and does not converge for large-residual problems. Here, the residual refers to the high-order terms discarded in the Taylor expansion. For this reason, the Newton–Raphson method was used in this study to perform a second-order Taylor expansion of the fitting residual. Because one more order of the Taylor expansion is retained than in the Gauss–Newton method, more information about the function is preserved, and the method shows good convergence. The procedure is as follows.

The square sum $s$ of the fitting residual $d$ is defined as $d=z-\frac{c{r}^{2}}{1+\sqrt{1-(1+k){c}^{2}{r}^{2}}}-\sum _{j=0}{A}_{j}{r}^{2j+4}$, $s=\sum {d}^{2}$. The target of the algorithm is to find $c$ and $k$ such that $\frac{\partial s}{\partial c}=\frac{\partial s}{\partial k}=0$. First, the initial values of $c$ and $k$ are obtained by the conventional method. Then Eq. (11) is iterated to obtain new values of $c$ and $k$.

## (11)

$$\left[\begin{array}{c}{c}_{n+1}\\ {k}_{n+1}\end{array}\right]=\left[\begin{array}{c}{c}_{n}\\ {k}_{n}\end{array}\right]-{\left[\begin{array}{cc}\frac{{\partial}^{2}s}{\partial {c}^{2}}& \frac{{\partial}^{2}s}{\partial c\partial k}\\ \frac{{\partial}^{2}s}{\partial c\partial k}& \frac{{\partial}^{2}s}{\partial {k}^{2}}\end{array}\right]}^{-1}\left[\begin{array}{c}\frac{\partial s}{\partial c}\\ \frac{\partial s}{\partial k}\end{array}\right].$$

Because ${A}_{j}$ is determined once $c$ and $k$ are determined in the fitting process, ${A}_{j}$ depends on $c$ and $k$.

## (12)

$$\frac{\partial s}{\partial c}=\sum 2d\left(\frac{\partial d}{\partial c}-\sum _{i}\frac{\partial {A}_{i}}{\partial c}{r}^{2i+4}\right),\phantom{\rule{0ex}{0ex}}\left[\begin{array}{c}\frac{\partial {A}_{0}}{\partial c}\\ \vdots \\ \frac{\partial {A}_{n-1}}{\partial c}\end{array}\right]={\left[\begin{array}{ccc}({\varphi}_{0},{\varphi}_{0})& \cdots & ({\varphi}_{0},{\varphi}_{n-1})\\ \vdots & \ddots & \vdots \\ ({\varphi}_{n-1},{\varphi}_{0})& \cdots & ({\varphi}_{n-1},{\varphi}_{n-1})\end{array}\right]}^{-1}\left[\begin{array}{c}\left(\frac{\partial d}{\partial c},{r}^{4}\right)\\ \vdots \\ \left(\frac{\partial d}{\partial c},{r}^{2n+2}\right)\end{array}\right].$$

The partial derivatives with respect to $k$ are obtained in the same way. The second-order partial derivative matrix is then obtained from the first-order partial derivatives, and new values of $c$ and $k$ are computed from the iteration formula until the termination criterion is met.
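The iteration of Eq. (11) can be sketched numerically. The example below is ours: it fits only the conic part to a synthetic profile and uses finite-difference derivatives in place of the analytic expressions of Eq. (12), with a simple step-halving safeguard; the paper's version additionally refits the polynomial coefficients $A_j$ inside $s$ at every evaluation.

```python
import numpy as np

def conic_sag(r, c, k):
    # Conic part of Eq. (5); np.maximum guards the square root in case
    # an iterate briefly oversteps the valid (c, k) domain.
    arg = np.maximum(1.0 - (1.0 + k) * c**2 * r**2, 1e-12)
    return c * r**2 / (1.0 + np.sqrt(arg))

r = np.linspace(0.05, 1.0, 200)
z = conic_sag(r, 0.5, -2.0)            # synthetic target: c = 0.5, k = -2.0

def s(p):
    # Squared fitting residual s(c, k) of Sec. 2.3 (polynomial part omitted)
    return np.sum((z - conic_sag(r, p[0], p[1])) ** 2)

def grad_hess(p, h=1e-5):
    # Central finite differences stand in for the analytic derivatives
    g = np.zeros(2)
    H = np.zeros((2, 2))
    for i in range(2):
        e = np.eye(2)[i] * h
        g[i] = (s(p + e) - s(p - e)) / (2 * h)
        for j in range(2):
            f = np.eye(2)[j] * h
            H[i, j] = (s(p + e + f) - s(p + e - f)
                       - s(p - e + f) + s(p - e - f)) / (4 * h * h)
    return g, H

p = np.array([0.48, -1.8])             # rough initial values for (c, k)
for _ in range(50):
    g, H = grad_hess(p)
    step = np.linalg.solve(H, g)       # the update of Eq. (11)
    t = 1.0                             # halve the step while s increases
    while t > 1e-6 and s(p - t * step) > s(p):
        t *= 0.5
    p = p - t * step
    if np.linalg.norm(t * step) < 1e-9:
        break

print(p)                                # converges toward c = 0.5, k = -2.0
```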

### 2.4. Methods to Decrease Polynomial Coefficients

Once the optimal quadric fitting surface is determined, the least square method is used to find the polynomial coefficients. If the polynomials have many terms or the surface spatial frequency is high, the fitted coefficients can become very large, several orders of magnitude higher than the rise of arch. This is unfavorable in practical applications, so the polynomial coefficients should be decreased without unduly affecting the fitting precision. The least square method by construction yields the minimal root-mean-square (RMS) error, so forcing smaller coefficients necessarily increases the RMS: the least square solution guarantees minimal RMS, but its coefficients can be large, and no operation on the coefficients preserves minimal RMS. High fitting precision and small coefficients therefore cannot be obtained simultaneously, but small coefficients can be obtained when a slightly lower fitting precision is acceptable. In principle, proper coefficients could be found by bounding both the RMS and the range of the polynomial coefficients, but solving the resulting multivariable quadratic inequalities is difficult and time consuming. Instead, the size of the coefficients can be reduced by controlling the relationship between the variation of the polynomial coefficients and the introduced error.

As stated above, an additional error is introduced when the polynomial coefficients are reduced. A change $\mathrm{\Delta}\mathbf{A}$ in the polynomial coefficient matrix introduces the error $E$:

## (13)

$$E=\sqrt{{\Vert \sum \mathrm{\Delta}{A}_{i}{\varphi}_{i}\Vert}_{2}}=\sqrt{{\Vert \sum \mathrm{\Delta}{A}_{i}{\mathbf{v}}_{i}\Vert}_{2}}=\sqrt{{\Vert \mathbf{M}\mathrm{\Delta}\mathbf{A}\Vert}_{2}}.$$

The unit error vector is written as $\mathbf{u}={(0,0,\dots ,1)}^{T}$, and the variation of the polynomial coefficients is $\mathbf{B}=k{\mathbf{M}}^{-1}\mathbf{u}$ when $k$ times the unit error is introduced, so the new polynomial coefficients are $\mathbf{A}+\mathbf{B}$. Taking the minimal square sum of the polynomial coefficients as the standard for coefficient reduction, $k$ is determined as

## (14)

$$k=-\frac{(\mathbf{A},\mathbf{B})}{(\mathbf{B},\mathbf{B})}=-\frac{(\mathbf{A},{\mathbf{M}}^{-1}\mathbf{u})}{({\mathbf{M}}^{-1}\mathbf{u},{\mathbf{M}}^{-1}\mathbf{u})}.$$

Finally, the coefficient matrix of the new polynomials is

## (15)

$${\mathbf{A}}^{\prime}=\mathbf{A}+k{\mathbf{M}}^{-1}\mathbf{u}=\mathbf{A}-\frac{(\mathbf{A},{\mathbf{M}}^{-1}\mathbf{u})}{({\mathbf{M}}^{-1}\mathbf{u},{\mathbf{M}}^{-1}\mathbf{u})}{\mathbf{M}}^{-1}\mathbf{u}.$$

The fitting error of the new polynomials can be calculated from the additive error and the fitting error of the original polynomials, without reference to the sample point coordinates. From the perspective of vector dimensions, the fitting error of the original polynomials arises because the number of terms is finite: the corresponding vector has finite dimension, leaving high-dimension features undescribed. The vector corresponding to the additive error and that of the original fitting error therefore lie in orthogonal dimensions, which allows the two errors to be combined by root-sum-square synthesis.
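The reduction of Eqs. (13) to (15) amounts to one projection once $\mathbf{M}$ is known. The sketch below uses a small synthetic basis and a coefficient set of our own choosing; `kk` plays the role of $k$ in Eq. (14):

```python
import numpy as np

# Coefficient reduction of Eqs. (13)-(15). G is the Gram matrix of a
# small synthetic basis; A is a deliberately large, sign-alternating
# coefficient set (both are stand-ins, not the paper's data).
r = np.linspace(0.0, 1.0, 400)
n = 4
Phi = np.stack([r**(2 * j + 4) for j in range(n)], axis=1)
G = Phi.T @ Phi
M = np.linalg.cholesky(G).T            # upper-triangular factor, G = M^T M
A = np.array([3.0, -40.0, 90.0, -60.0])

u = np.zeros(n)
u[-1] = 1.0                            # unit error vector (0, ..., 0, 1)^T
Minv_u = np.linalg.solve(M, u)
kk = -np.dot(A, Minv_u) / np.dot(Minv_u, Minv_u)   # k of Eq. (14)
A_new = A + kk * Minv_u                             # A' of Eq. (15)

# The introduced vector change satisfies M (A_new - A) = kk * u, so the
# error of Eq. (13) is controlled, while the square sum of the
# coefficients strictly decreases.
print(np.sum(A**2), np.sum(A_new**2), abs(kk))
```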

## 3. Examples

This section illustrates the method with examples. The aplanatic Descartes oval profile is used to validate feasibility, considering both the departure part and the optimal quadric fitting surface part. Two further examples, an extreme and a mild aspheric profile, provide comprehensive validation of the method. In each calculation, the optimal quadric fitting surface was determined with the proposed method, the polynomial coefficients were solved with the least square method, and Eq. (15) was then applied.

### 3.1. Aplanatic Descartes Oval Profile

The aplanatic Descartes oval profile used by Forbes was adopted here to validate the feasibility of the method.^{10} Standard aspheric polynomial fitting was first performed using the conventional method; results are shown in Table 1, and the Descartes oval curves and their fitting errors are shown in Fig. 1. As stated in Forbes's article, standard aspheric polynomials are nonorthogonal, and adding high-order terms in the fitting process changes the coefficients of the low-order terms, so more terms produce larger coefficients. The number of significant digits therefore decreases when fitting with more high-order polynomials, and the precision does not increase significantly. Further analysis in this paper shows that such results are optimal neither in precision nor in coefficient magnitude. The effectiveness of the proposed method is demonstrated for two conditions: (1) excessive coefficients and (2) suboptimal fitting precision from the optimal quadric fitting surface.

## Table 1

Fitting result using aspheric polynomials for Descartes oval.

| | Traditional, five terms | Traditional, seven terms | Traditional, eight terms | Improvement for departure, eight terms | Improvement for optimal spherical fitting surface, eight terms |
|---|---|---|---|---|---|
| $c$ (${\mathrm{mm}}^{-1}$) | $-0.5$ | $-0.5$ | $-0.5$ | $-0.5$ | $-0.5$ |
| $k$ | $-0.843716$ | $-0.843716$ | $-0.843716$ | $-0.843716$ | $-0.759214$ |
| ${A}_{1}$ (mm) | $2.57757\times {10}^{-1}$ | $2.69713\times {10}^{-1}$ | $2.73894\times {10}^{-1}$ | $2.71357\times {10}^{-1}$ | $4.70730\times {10}^{-1}$ |
| ${A}_{2}$ (mm) | $-5.3405\times {10}^{-2}$ | $-1.45314\times {10}^{-1}$ | $-2.19449\times {10}^{-1}$ | $-1.74068\times {10}^{-1}$ | $-7.2462\times {10}^{-2}$ |
| ${A}_{3}$ (mm) | $-4.33431\times {10}^{-1}$ | $-2.72898\times {10}^{-1}$ | $2.28076\times {10}^{-1}$ | $-8.1338\times {10}^{-2}$ | $8.9275\times {10}^{-2}$ |
| ${A}_{4}$ (mm) | $5.73476\times {10}^{-1}$ | $8.36713\times {10}^{-1}$ | $-8.81902\times {10}^{-1}$ | $1.89150\times {10}^{-1}$ | $-4.788\times {10}^{-3}$ |
| ${A}_{5}$ (mm) | $-3.44232\times {10}^{-1}$ | $-1.45812\times {10}^{0}$ | $1.834551\times {10}^{0}$ | $-2.36148\times {10}^{-1}$ | $-2.7192\times {10}^{-2}$ |
| ${A}_{6}$ (mm) | | $1.19517\times {10}^{-1}$ | $-2.374638\times {10}^{0}$ | $-1.09050\times {10}^{-1}$ | $7.1639\times {10}^{-2}$ |
| ${A}_{7}$ (mm) | | $-4.25257\times {10}^{-1}$ | $1.622536\times {10}^{0}$ | $3.10880\times {10}^{-1}$ | $-5.6685\times {10}^{-2}$ |
| ${A}_{8}$ (mm) | | | $-4.83067\times {10}^{-1}$ | $-1.70768\times {10}^{-1}$ | $2.2460\times {10}^{-2}$ |
| RMS (mm) | $4.36995\times {10}^{-5}$ | $3.5701\times {10}^{-6}$ | $9.8508\times {10}^{-7}$ | $2.6815\times {10}^{-6}$ | $1.2038\times {10}^{-8}$ |

The data in the table show that with eight polynomial terms the coefficients are large, two orders of magnitude greater than the maximal departure. Such large coefficients disturb the system optimization process and need to be decreased. Following the proposed method, taking the eight-term aspheric polynomials as an example, the polynomials were projected into an eight-dimensional vector space, and the eight vectors ${\mathbf{v}}_{1},{\mathbf{v}}_{2},\dots ,{\mathbf{v}}_{8}$ are represented as the matrix $\mathbf{M}$.
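For this eight-term case, one way to realize the projection (a sketch assuming unit weight and a uniform sample grid, which the paper does not specify) is:

```python
import numpy as np

# Projection of the eight basis polynomials r**4, r**6, ..., r**18 onto
# eight-dimensional vectors: Cholesky-factor the Gram matrix so that
# G = M^T M; the i-th column of M (zeros beyond position i) is v_i,
# matching projection principle (3).
r = np.linspace(0.0, 1.0, 1000)
n = 8
Phi = np.stack([r**(2 * j + 4) for j in range(n)], axis=1)
G = Phi.T @ Phi                        # Gram matrix (phi_i, phi_j)
M = np.linalg.cholesky(G).T            # upper-triangular projection matrix
V = M.T                                # row i of V is the vector v_i

# The projection preserves all pairwise inner products:
assert np.allclose(V @ V.T, G)
```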

The variables relating the variation of the polynomial coefficients to the introduced error can be defined and controlled during optimization, facilitating reduction of the coefficient size. The results show that the coefficients decreased to the same level as in the five-term fit, while the fitting precision remained at a level similar to that of the seven-term fit.

Similarly, based on the derivation in Sec. 2.3, the Newton–Raphson iteration was used to search for the optimal fitting spherical surface; results are shown in Table 1. The optimal fitting spherical surface is distinct from that produced by the conventional method. After the optimal quadric fitting surface was determined with the Newton–Raphson method and the polynomial coefficients were solved, the fitting error decreased dramatically.

### 3.2. Comprehensive Validation Using an Aspheric Surface

The examples above validate the new method separately for the departure part and for the optimal quadric fitting surface part; the following example validates both together. The surface shown in Fig. 2 is the measured contour of a fabricated cell phone lens. It was fitted with the proposed method, with the results shown in Table 2. Because the optimal quadric fitting surface changes, the vector corresponding to the departure has smaller high-dimensional components, so more of the surface information is described by the polynomials and high fitting precision is retained. The variables relating coefficient variation to introduced error were obtained, which changed the coefficient range significantly while keeping the error at a similar level. The results also show that the proposed method is effective for a measured profile as well as for the nominal starting values of a surface.

## Table 2

Fitting result using aspheric polynomials for an aspheric surface.

| | Traditional, five terms | Traditional, eight terms | Improved method, eight terms |
|---|---|---|---|
| $c$ (${\mathrm{mm}}^{-1}$) | $-6.64404$ | $-6.64404$ | $-4.83301$ |
| $k$ | $-2.42266\times {10}^{1}$ | $-2.42266\times {10}^{1}$ | $-1.50273\times {10}^{0}$ |
| ${A}_{1}$ (mm) | $-1.22806\times {10}^{1}$ | $-2.89211\times {10}^{1}$ | $1.66517\times {10}^{-1}$ |
| ${A}_{2}$ (mm) | $4.92324\times {10}^{1}$ | $2.37090\times {10}^{2}$ | $7.30280\times {10}^{0}$ |
| ${A}_{3}$ (mm) | $-7.79851\times {10}^{1}$ | $-9.09388\times {10}^{2}$ | $-2.00049\times {10}^{1}$ |
| ${A}_{4}$ (mm) | $5.59258\times {10}^{1}$ | $1.97371\times {10}^{3}$ | $2.01305\times {10}^{1}$ |
| ${A}_{5}$ (mm) | $-1.50155\times {10}^{1}$ | $-2.54711\times {10}^{3}$ | $-3.30800\times {10}^{-2}$ |
| ${A}_{6}$ (mm) | | $1.93555\times {10}^{3}$ | $-1.56877\times {10}^{1}$ |
| ${A}_{7}$ (mm) | | $-7.99407\times {10}^{2}$ | $1.16211\times {10}^{1}$ |
| ${A}_{8}$ (mm) | | $1.38363\times {10}^{2}$ | $-2.74648\times {10}^{0}$ |
| RMS (mm) | $2.39863\times {10}^{-2}$ | $7.51581\times {10}^{-3}$ | $6.21144\times {10}^{-4}$ |

### 3.3. Validation Using a Mild Aspheric Surface

The examples in Secs. 3.1 and 3.2 are extreme aspheric surfaces. In this section, the proposed method is applied to the fitting of a mild aspheric profile, the contour of an imaging lens. The fitting results, shown in Fig. 3 and Table 3, confirm the effectiveness of the method.

## Table 3

Fitting result using aspheric polynomials for an aspheric surface of mild profile.

| | Traditional, five terms | Traditional, eight terms | Improved method, eight terms |
|---|---|---|---|
| $c$ (${\mathrm{mm}}^{-1}$) | $-1.00093$ | $-1.00093$ | $-0.737635$ |
| $k$ | $-2.18644\times {10}^{0}$ | $-2.18644\times {10}^{0}$ | $-6.60064\times {10}^{1}$ |
| ${A}_{1}$ (mm) | $1.37466\times {10}^{0}$ | $3.11691\times {10}^{0}$ | $-1.09348\times {10}^{0}$ |
| ${A}_{2}$ (mm) | $-4.45997\times {10}^{0}$ | $-2.19149\times {10}^{1}$ | $3.05223\times {10}^{0}$ |
| ${A}_{3}$ (mm) | $5.95359\times {10}^{0}$ | $7.45048\times {10}^{1}$ | $-4.91347\times {10}^{0}$ |
| ${A}_{4}$ (mm) | $-3.66472\times {10}^{0}$ | $-1.43932\times {10}^{2}$ | $3.33001\times {10}^{0}$ |
| ${A}_{5}$ (mm) | $8.48571\times {10}^{-1}$ | $1.65021\times {10}^{2}$ | $1.09858\times {10}^{0}$ |
| ${A}_{6}$ (mm) | | $-1.11146\times {10}^{2}$ | $-3.27368\times {10}^{0}$ |
| ${A}_{7}$ (mm) | | $4.06120\times {10}^{1}$ | $1.94461\times {10}^{0}$ |
| ${A}_{8}$ (mm) | | $-6.21157\times {10}^{0}$ | $-3.93294\times {10}^{-1}$ |
| RMS (mm) | $3.22914\times {10}^{-3}$ | $1.20238\times {10}^{-3}$ | $1.874801\times {10}^{-4}$ |

## 4. Conclusions

When surface fitting is performed with the least square method, the polynomial coefficients often become far larger than the maximal rise of arch of the fitted data, which causes difficulties in optical system design and optimization. In this paper, a projection from polynomials to vectors was proposed, transforming the polynomial problem into a vector problem. Under the proposed projection principle, the mutual projection between polynomials and vectors determines the relationship between the vector and the polynomial coefficients, and further the relationship between the introduced error and the variation of the polynomial coefficients. The polynomial coefficients were reduced under the standard of minimal square sum. For the first time, the Newton–Raphson method was used to search for the optimal fitting surface, which not only improved the fitting precision but also converged better than the Gauss–Newton method. Fitting tests on various surface profiles proved the proposed method effective in terms of both fitting error and coefficient range, so it is of significance in practical applications. The proposed algorithm has been programmed in MATLAB and is available through a Dynamic Data Exchange connection to optical design software.

## Acknowledgments

This work was supported in part by grants from the National Natural Science Foundation of China (Nos. 61275003, 51327005, and 91420203).

## References

## Biography

**Xuemin Cheng** is an associate professor at the Graduate School at Shenzhen, Tsinghua University. She received her PhD in optical engineering from Beijing Institute of Technology, China, in 2004. She is the author of 56 peer-reviewed publications related to optical design, vision, and image quality assessment. Her current research interests include optical design and engineering, including imaging system design, illumination system design, optical modeling, and simulation. She is a member of SPIE.