8 February 2012 Practical design and evaluation methods of omnidirectional vision sensors
Optical Engineering, 51(1), 013005 (2012). doi:10.1117/1.OE.51.1.013005
Abstract
A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.
Ohte and Tsuzuki: Practical design and evaluation methods of omnidirectional vision sensors

1.

Introduction

In most computer-based imaging applications, such as automobile navigation systems, traffic safety monitoring systems, and remote accident surveillance systems, vision sensors must have a large field of view and high resolution. Besides, the captured images should be transmitted to remote locations for processing and for generating final images. Some applications require high-speed (even real-time) video-rate processing and sensors that are easy to manufacture. Omnidirectional vision sensors, consisting of rotationally symmetrical curved mirrors and digital imaging systems, are proving to be promising candidates for satisfying these requirements, because of recent advances in the development of image sensors and computers. Fig. 1 shows an example of an omnidirectional vision sensor (omnidirectional camera).

Fig. 1

Principles of the omnidirectional vision sensor (omnidirectional camera).


This sensor converts a three-dimensional viewing sphere in object space into a two-dimensional circle on the image plane. The sensor can view a field of 360 deg horizontally and 135 deg vertically, covering almost 80% of the entire viewing sphere.

It is important to understand image-forming characteristics, that is, how objects with the same solid angle in object space would appear in the image plane. For that, it is essential that all the objects located at various positions are well-focused at the image plane. Also, for computing pure perspective images, the imaging system must have a single viewpoint.

Omnidirectional vision sensors have been studied mainly for their application in visual navigation systems of mobile robots. Earlier workers proposed and tested many types of rotationally symmetrical curved mirrors, including conical mirrors,1 spherical mirrors,2,3 hyperboloidal mirrors,4 and paraboloidal mirrors,5,6 besides systems using two mirrors.7 Also, there are several reports surveying the field or describing the theory of omnidirectional vision sensors.8,9,10 None of the available mirror forms satisfies all the required characteristics. Therefore, it is important to select the most suitable one among them, depending on the application.

The characteristics of curved mirrors were theoretically analyzed and their merits and demerits evaluated from a practical point of view. The evaluation was focused particularly on image-forming characteristics, position of virtual images, and size of blur spot images. In addition, a practical design method, which satisfies some of the required characteristics, is proposed. Based on the evaluation, the authors selected a spherical mirror for practical applications and developed several prototype models.

As reports dealing with virtual images and size of blur spots of curved mirrors are few, this paper will be of great use in the development of omnidirectional vision sensors.

2.

Evaluation Method of Omnidirectional Vision Sensors

2.1.

Optical System and Ray-Tracing Calculation

A model of the optical system generally used in an omnidirectional vision sensor is shown in Fig. 2. The three-dimensional surface shape of a curved mirror is generally expressed by the form z=f(x,y), but one needs to consider only a cross section expressed by the form z=f(x), because the mirror is rotationally symmetric around its principal axis.

Fig. 2

Design of the omnidirectional vision sensor.


A light ray, incident on the curved mirror from an object T, is reflected at the reflecting point P at coordinates (xp, zp) and is focused on an image sensor through the principal point of the lens system at (0, −m). This results in the formation of a circular image on the image sensor. From the standpoint of the lens system, one is essentially taking a photograph of a virtual image formed at position Q.

Let the slope of the mirror surface at P be β, the angle of incidence of light from object T be Ω, the viewing angle of the lens system θ, and the angle of incidence at the mirror, which is equal to the angle of reflection from the mirror, γ. Then, the following relationships are satisfied:

(1)

zp=f(xp),

(2)

tanβ=dz/dx,

(3)

γ=β+θ,

(4)

Ω=2γ−θ=2β+θ,

(5)

tanθ=xp/(m+zp).
The crossing point (viewpoint) V of incident light with the z-axis is given by

(6)

zv=zp+xp/tanΩ.
Under some special conditions, for a hyperboloidal mirror and a paraboloidal mirror, V is constant for the entire range of Ω; that is, a single viewpoint is realized.11 Usually, however, viewpoints are distributed in some range. Care must be taken if the object is located near the mirror. The reason for this, and the effect of the distribution of V, will be discussed in Section 4.2.
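As a numerical check of Eqs. (1) to (6), the sketch below (a minimal Python ray trace, with the hyperboloidal-mirror parameters taken from Table 1) confirms the single-viewpoint property: with the lens principal point placed at one focus of the hyperboloid, the computed z-axis crossing zv stays essentially constant at the other focus, C − B, for every reflecting point.

```python
import math

# Hyperboloidal (SVP) mirror parameters from Table 1; the lens principal
# point sits a distance m below the mirror apex, at the lower focus.
A, B = 0.7407, 1.698
C = math.sqrt(A**2 + B**2)   # focal distance of the hyperboloid
m = B + C                    # lens placed at the lower focus

def trace_viewpoint(xp):
    """Return the z-axis crossing zv of the ray reflected at radius xp."""
    zp = -B + (B / A) * math.sqrt(A**2 + xp**2)               # Eq. (1), shape
    beta = math.atan((B / A) * xp / math.sqrt(A**2 + xp**2))  # Eq. (2), slope
    theta = math.atan(xp / (m + zp))                          # Eq. (5)
    omega = 2 * beta + theta                                  # Eq. (4)
    return zp + xp / math.tan(omega)                          # Eq. (6)

viewpoints = [trace_viewpoint(x) for x in (0.05, 0.3, 0.6, 1.0)]
```

Because the published parameters are rounded to four digits, zv agrees with C − B only to about three decimal places, which is ample to demonstrate the property.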

Recently, a unique method was proposed to realize a single viewpoint using a cone-shaped mirror.12 But this method can only be applied to limited vertical viewing angles and in low-resolution imaging systems. Thus this method cannot be used for the applications in the current study.

2.2.

Image-Forming Characteristics

In an omnidirectional vision sensor system, a three-dimensional viewing sphere in object space appears as a two-dimensional circle in the image plane. In this image, the vertical direction in the object space shows up in the radial direction, and the horizontal direction in the circumferential direction. The radial height on the image sensor, corresponding to object T, is h. In the lens system of a conventional camera, h is proportional to tanθ:

(7)

h=btanθ=b·x/(m+z).
The rate of change of h, with respect to Ω, can be calculated by

(8)

dh/dΩ=(dh/dx)/(dΩ/dx).
A graph of the relationship between h and Ω, and between dh/dΩ and Ω, shows the basic image characteristics of an omnidirectional vision sensor.

Fig. 3 shows the projection of 15×15-deg solid-angle objects located every 15 deg of incident angle onto the image plane. In the ideal case, an object with the same solid angle in the object space, for example, 15×15-deg would appear with the same shape in the image plane, regardless of its location in the object space. However, this is impossible in practice, because the images are distorted by projection.

Fig. 3

Three-dimensional viewing sphere in object space and its projection on the image plane, showing 15×15-deg solid-angle objects. 1×1-deg area is shown with width Δw and height Δh.


For the sake of simplicity in calculation, small solid-angle objects are considered here. A 1×1-deg solid-angle appears as an image of radial height Δh, circumferential width Δw, area Δs, and width-to-height ratio (aspect ratio) Δw/Δh, which are given by

(9)

Δh=dh/dΩ(deg),

(10)

Δw=2πh/(360sinΩ),

(11)

Δs=Δh×Δw=dh/dΩ×2πh/(360sinΩ),

(12)

Δw/Δh=2πh/[(360sinΩ)(dh/dΩ)].
In the figures showing the image-forming characteristics, 1×1-deg images are enlarged to show virtual 15×15-deg solid-angle images.
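Eqs. (7) and (8) can be evaluated without symbolic differentiation: sample the mirror profile, compute (Ω, h) at each point from Eqs. (1) to (7), and take finite differences. Below is a sketch for the spherical mirror of Table 1 (parameters A and m; the image-scale constant b is set to 1 for convenience).

```python
import math

# Spherical mirror (Table 1): A is the sphere radius, m the lens distance.
A, m, b = 1.138, 4.110, 1.0   # b sets the image scale in Eq. (7)

def image_point(xp):
    """Return (Omega, h) for a ray reflected at radial position xp."""
    zp = A - math.sqrt(A**2 - xp**2)                 # Eq. (1)
    beta = math.atan(xp / math.sqrt(A**2 - xp**2))   # Eq. (2)
    theta = math.atan(xp / (m + zp))                 # Eq. (5)
    return 2 * beta + theta, b * math.tan(theta)     # Eqs. (4) and (7)

# Finite-difference dh/dOmega, Eq. (8), sampled along the mirror profile.
slopes = []
for x in (0.2, 0.5, 0.8):
    (o1, h1), (o2, h2) = image_point(x - 1e-4), image_point(x + 1e-4)
    slopes.append((h2 - h1) / (o2 - o1))
```

The computed dh/dΩ decreases monotonically with Ω, the behavior reported for the spherical mirror in Sec. 3.2.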

2.3.

Virtual Image Points

Virtual image points, formed by the curved mirror, are distributed in a region close to the mirror. This distribution must fall within the depth of field of the imaging system; otherwise, the image points will be blurred. The distance between the virtual image points and the principal point of the lens is relatively short; so, a short-working-distance imaging system is necessary. It is not easy to ensure that all virtual image points are within the depth of field.

When a homocentric bundle of rays is incident on a curved mirror, the curved surface focuses the rays to form a virtual image. The virtual image point can be calculated from the curvature of the mirror and the angle of incident light at the reflecting point. In the curved mirror of an omnidirectional camera, the curvatures in the radial and circumferential directions are different. Also, the angle of incident light is different. Therefore, it is necessary to calculate image points for both directions.

The image focused by radial curvature of the mirror is called the tangential or meridional image, and the one focused by circumferential curvature the sagittal image. When the positions of these two images do not agree, astigmatism occurs, and when the image plane is not flat, field curvature occurs.

One can calculate the virtual image points using the following Eqs. (14) and (18) for tangential and sagittal image points, given in Brueggemann (1968).13 For a derivation of these equations, one must go back to Monk (1937).14

The radius of curvature Rt in the tangential direction is

(13)

Rt=[1+(dz/dx)^2]^(3/2)/(d^2z/dx^2).
For a spherical mirror, Rt is constant.

When p is the distance between points T and P, and qt is the distance between P and the virtual image point Qt, one has the following:

(14)

1/p+1/qt=2/(Rtcosγ).
The coordinates of the virtual image point Qt are

(15)

xqt=xp+qtsinθ,

(16)

zqt=zp+qtcosθ.
The sagittal image plane is perpendicular to the tangential plane, including the normal to the curved mirror. The radius of curvature Rs in the sagittal direction is given by the following:

(17)

Rs=x[1+(dz/dx)^2]^(1/2)/(dz/dx).
For a spherical mirror, Rs is also constant.

If the distance between P and the position of the virtual image Qs is qs, it follows that

(18)

1/p+1/qs=2cosγ/Rs.
The coordinates of the virtual image point Qs are

(19)

xqs=xp+qssinθ,

(20)

zqs=zp+qscosθ.
Calculation methods for the depth of field and the blur-spot size are given in the Appendix.
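The relations above can be checked numerically. The sketch below (Python, with the spherical-mirror parameters of Table 1) evaluates Eqs. (13) to (18) at one reflecting point: for a sphere, both radii of curvature reduce to the sphere radius A, and for a distant object the tangential virtual image lies closer to the mirror than the sagittal one, which is the astigmatism discussed above. The sign convention follows the equations as printed.

```python
import math

# Spherical mirror (Table 1, SPH); lens principal point a distance m below apex.
A, m = 1.138, 4.110

def virtual_images(x, p):
    """Distances qt, qs to tangential/sagittal virtual images, Eqs. (13)-(18)."""
    z = A - math.sqrt(A**2 - x**2)
    dz = x / math.sqrt(A**2 - x**2)
    d2z = A**2 / (A**2 - x**2) ** 1.5
    Rt = (1 + dz**2) ** 1.5 / d2z             # Eq. (13); equals A on a sphere
    Rs = x * (1 + dz**2) ** 0.5 / dz          # Eq. (17); also equals A
    gamma = math.atan(dz) + math.atan(x / (m + z))   # Eq. (3)
    qt = 1 / (2 / (Rt * math.cos(gamma)) - 1 / p)    # Eq. (14)
    qs = 1 / (2 * math.cos(gamma) / Rs - 1 / p)      # Eq. (18)
    return Rt, Rs, qt, qs

Rt, Rs, qt, qs = virtual_images(0.5, 1e9)  # object effectively at infinity
```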

3.

Design and Evaluation Results

3.1.

Design Procedure

Two kinds of lens systems are usually used in omnidirectional vision sensors. If one uses an orthographic lens system, such as a telecentric camera lens, Θ is equivalent to 0 deg, and it is rather easy to design curved mirrors by mathematical analysis. However, in this case, the mirror size is limited by the lens size.

Perspective lenses are widely used for most digital imaging equipment, such as digital cameras. Omnidirectional vision sensors using perspective lenses are desirable, because such lenses are widely available and do not limit the mirror size. In this case, however, it is not easy to design mirror-shape functions that satisfy the desired characteristics.

Here, the characteristics of mirrors with the most common shape-defining functions are simulated, namely, spherical and hyperboloidal mirrors. Also, a practical method is proposed here to define mirror shapes that satisfy the requirements. The equidistant mirror will be shown here as a practical example.

The mirror-shape functions used in the calculations, along with the functions derived from them, are shown in Table 1. The parameters were obtained by trial and error so that Ω = 135 deg and Θ = 12 deg at x = 1. The values obtained for each mirror are also shown in Table 1.

Table 1

Functions Defining Mirror Shapes*

Spherical mirror (SPH):
x^2 + (z − A)^2 = A^2
z = A − (A^2 − x^2)^(1/2)
dz/dx = x/(A^2 − x^2)^(1/2)
d^2z/dx^2 = A^2/(A^2 − x^2)^(3/2)
A = 1.138; m = 4.110

Equidistant mirror (E-DI), polynomial approximation:
z = Ax^2 + Bx^4 + Cx^6
dz/dx = 2Ax + 4Bx^3 + 6Cx^5
d^2z/dx^2 = 2A + 12Bx^2 + 30Cx^4
A = 0.6409; B = 0.1550; C = −0.01; m = 3.919

Single viewpoint mirror (SVP) (hyperboloidal):
x^2/A^2 − [(z + B)/B]^2 = −1
z = −B + (B/A)(A^2 + x^2)^(1/2)
dz/dx = (B/A)x/(A^2 + x^2)^(1/2)
d^2z/dx^2 = AB/(A^2 + x^2)^(3/2)
A = 0.7407; B = 1.698; C = (A^2 + B^2)^(1/2) = 1.852; m = B + C = 3.550

*

The parameters are for Ω = 135 deg and Θ = 12 deg at x = 1.
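The table entries can be verified directly. The sketch below evaluates each shape function at x = 1 and checks, via Eqs. (2) to (5), that the trial-and-error parameters reproduce Ω = 135 deg and Θ = 12 deg.

```python
import math

# Shape functions and fitted parameters from Table 1; each mirror should give
# an incident angle of 135 deg and a lens viewing angle of 12 deg at x = 1.
def angles(z, dz, m, x=1.0):
    theta = math.atan(x / (m + z(x)))          # Eq. (5)
    omega = 2 * math.atan(dz(x)) + theta       # Eqs. (2)-(4)
    return math.degrees(omega), math.degrees(theta)

A = 1.138                                      # spherical (SPH)
sph = angles(lambda x: A - math.sqrt(A**2 - x**2),
             lambda x: x / math.sqrt(A**2 - x**2), m=4.110)

a, b, c = 0.6409, 0.1550, -0.01                # equidistant (E-DI) polynomial
edi = angles(lambda x: a*x**2 + b*x**4 + c*x**6,
             lambda x: 2*a*x + 4*b*x**3 + 6*c*x**5, m=3.919)

Ah, Bh = 0.7407, 1.698                         # single viewpoint (SVP)
svp = angles(lambda x: -Bh + (Bh/Ah)*math.sqrt(Ah**2 + x**2),
             lambda x: (Bh/Ah)*x / math.sqrt(Ah**2 + x**2),
             m=Bh + math.sqrt(Ah**2 + Bh**2))
```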

3.2.

Calculation of Ray Traces and Image-Forming Characteristics

Calculated characteristics of the three mirrors are shown in Figs. 4 to 8. Fig. 4 shows ray traces, virtual image points, and depth of field. For both tangential and sagittal images, the distance of the object from the mirror surface was assumed to be infinity and 5 times the mirror radius. The depth of field will be explained later.

Fig. 4

Ray traces, virtual image points, and depth of field of curved mirrors. T and S: tangential and sagittal images of an object at infinity; T5M and S5M: the same for an object at 5 times the mirror radius. M: mirror shape. DF4: depth of field when the F-number of the lens is 4.


Fig. 5

Radial image height h and rate of change dh/dΩ. SPH: Spherical mirror, E-DI: Equidistant mirror, SVP: Single viewpoint mirror (hyperboloidal mirror).


Fig. 6

Distribution of viewpoints.


Fig. 7

Image area Δs=Δw×Δh and width-to-height ratio Δw/Δh.


Fig. 8

Image-forming characteristics: (a) spherical mirror, (b) equidistant mirror, and (c) single viewpoint mirror (hyperboloidal mirror).


Fig. 5 shows the basic image characteristics of an omnidirectional vision sensor, namely the radial image height h versus Ω and the rate of change dh/dΩ versus Ω. The equidistant (E-DI) mirror shows a linear relationship between h and Ω, and dh/dΩ is constant over the entire Ω range. For the spherical (SPH) mirror, dh/dΩ decreases with increasing Ω, and for the single viewpoint (SVP) mirror, dh/dΩ increases with increasing Ω. One can see that the image characteristics of the SPH and SVP mirrors differ substantially.

Fig. 6 shows the distribution of viewpoints V. The SVP mirror is a hyperboloidal mirror arranged in a special optical configuration, and for this mirror, V is constant over the entire range of Ω. Viewpoints for the other mirrors are distributed over some range, and for the SPH mirror, the distribution is the largest. Therefore, using data for the SPH mirror, one can estimate the maximum influence of the viewpoint distribution.

Fig. 7 shows the image area Δs = Δw × Δh and the width-to-height ratio Δw/Δh of the image. For the SVP mirror, Δw/Δh is constant, that is, the image shape of a small square object always remains square, but the change in area Δs is very large over the entire range of Ω. For the SPH mirror, Δs decreases slightly and Δw/Δh increases as Ω increases. For the E-DI mirror, both Δs and Δw/Δh increase as Ω increases.

In Fig. 8, the solid line shows the image-forming characteristics, in which each rectangle represents the virtual image of a 15×15-deg object. From this figure, which is a geometrical expression of Fig. 7, one can visually understand the image-forming characteristics of each mirror.

3.3.

Estimation of Depth of Field and Blur-Spot Size

The depth of field shown in Fig. 4 was calculated using Eq. (33) given in the Appendix and the parameters of a prototype omnidirectional vision sensor. For the image sensor, 2N = 1000 pixels, the total number of pixels πN^2 = 0.785×10^6 pixels, δ = 5.2 μm, and 2H = 5.2 mm. The maximum radius of the mirror M is 43.9 mm at Ω = 135 deg and Θ = 12 deg. The corresponding focal length of the lens is f = 11.6 mm. When F = 4, L = f^2/(Fδ) is 6412 mm. The distances between the center of the virtual images and the principal point of the lens are somewhat different for each mirror. With the spherical mirror, L0 = 212 mm and the depth of field is 2L0^2/L = 13.7 mm, which corresponds to 0.31M. For the equidistant mirror, L0 = 205 mm; for the hyperboloidal mirror, L0 = 190 mm; their depths of field correspond to 0.3M and 0.25M, respectively. These bands are shown in Fig. 4.
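To make this arithmetic reproducible, the sketch below recomputes L = f^2/(Fδ) and the depth of field 2L0^2/L from the stated prototype parameters; exact results differ by about 1% from the paper's rounded figures, presumably because the published f is itself rounded.

```python
# Depth-of-field arithmetic for the prototype (Sec. 3.3); all lengths in mm.
f, F, delta = 11.6, 4.0, 5.2e-3   # focal length, f-number, pixel size
M = 43.9                          # maximum mirror radius

L = f**2 / (F * delta)            # hyperfocal-type distance f^2/(F*delta)
L0 = 212.0                        # in-focus distance, spherical mirror
dof = 2 * L0**2 / L               # depth of field 2*L0^2/L
dof_in_M = dof / M                # expressed in units of the mirror radius
```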

The spread of tangential image points for the spherical mirror is very small, and all image points remain within the depth of field. For the equidistant and hyperboloidal mirrors, the spread is large. As regards the spread of the sagittal image points, it is large for all three mirrors, and the image points are not within the depth of field. To evaluate the effect of this, further detailed discussion is necessary. One must calculate the blur-spot size expressed in pixel-size units in the image plane as well as in the solid view angle in the object sphere, because each pixel subtends a different solid angle.

The blur-spot sizes, calculated using Eqs. (29) and (34) to (37) in the Appendix, are shown in various forms of expression in Figs. 9 to 11. To evaluate the blur-spot size, it is important to select L0, the distance between the lens and the image point that is just in focus. The best L0 for each mirror was found by trial and error so as to minimize the overall blur size in the solid view angle.

Fig. 9

Tangential and sagittal blur-spot sizes. SPH: spherical mirror, L0 = 212 mm. E-DI: equidistant mirror, L0 = 205 mm. SVP: single viewpoint mirror, L0 = 190 mm. L0: distance between the lens and the image point that is just in focus. (a) Blur-spot size Δ in pixel-size units, (b) blur-spot size Δ in angle (deg), and (c) blur-spot area in solid angle (deg)^2.


Fig. 10

Blur-spot size in the image-forming characteristics. The blur-spot size is enlarged ×10 in both the radial and sagittal directions. Image-forming characteristics showing 15×15-deg squares. (a) Spherical mirror, (b) equidistant mirror, and (c) single-viewpoint mirror.


Fig. 11

Blur-spot size in the object space. Object spaces showing 15×15-deg squares. The blur-spot size is enlarged ×10 in both the radial and sagittal directions. (a) Spherical mirror, (b) equidistant mirror, and (c) single-viewpoint mirror (hyperboloidal mirror).


Fig. 9(a) shows the calculated tangential and sagittal blur-spot sizes Δ in pixel-size units. In practice, the sign of Δ cannot be observed, but the signed results are retained for convenience. Here, it was assumed that the diameter of the image-plane circle was 1000 pixels; a blur spot smaller than 1 pixel cannot be recognized. In Fig. 10, the blur-spot size for each mirror is shown geometrically with the image-forming characteristics, enlarged ×10 in both directions. The image-forming characteristics show 15×15-deg square objects, but the images differ greatly from mirror to mirror, so the effect of the blur spots also differs.

To evaluate this effect, one must calculate the blur-spot size expressed in the solid view angle. Fig. 9(b) shows the calculated blur-spot size in the angle of object space. In Fig. 11, the object spaces show 15×15-deg squares, with blur-spot sizes enlarged ×10 in both directions. Fig. 9(c) shows the blur-spot size in solid angle (square degrees).

The system of this study covers Ω = 0 to 135 deg, which means that the coverage of the viewing sphere is 2π(1 − cos Ω)/4π = 0.8536, the entire sphere being 4π sr (steradian) = 41,253 square degrees, and the total pixel number in the projected plane π×500^2 = 0.785×10^6 pixels. So the average solid angle for 1 pixel is 41,253 × 0.8536/0.785×10^6 = 0.045 sq deg = (0.21 deg)^2. This value is shown schematically in Fig. 11. From Figs. 4 to 11, one can understand the characteristics of each mirror. Detailed discussion of each mirror follows.
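This bookkeeping is easy to verify; a short sketch of the coverage fraction, the size of the full sphere in square degrees, and the resulting average solid angle per pixel:

```python
import math

# Average solid angle per pixel (Sec. 3.3).
omega_max = math.radians(135)
coverage = (1 - math.cos(omega_max)) / 2            # fraction of full sphere
sphere_sqdeg = 4 * math.pi * (180 / math.pi) ** 2   # 4*pi sr in square degrees
pixels = math.pi * 500**2                           # pixels inside image circle
per_pixel = sphere_sqdeg * coverage / pixels        # average deg^2 per pixel
```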

3.4.

Spherical Mirror

Spherical mirrors are rather easy to manufacture. An extremely precise spherical mirror can be made using grinding and polishing techniques similar to those employed in making lenses. Examples of omnidirectional vision sensors using spherical mirrors will be shown later in Section 5.

In Fig. 5, dh/dΩ of the spherical mirror decreases as the angle of incidence Ω increases, and that explains why this mirror can be used at large Ω. The image-forming characteristics of a virtual 15×15-deg object are shown in Fig. 10(a). At angles of depression (Ω<90deg), square objects appear almost as such with approximately equal area, whereas at angles of elevation (Ω>90deg), they appear with radially compressed shapes and decreased areas.

To cover a large field of view, image information must be uniformly distributed; then, at any part of the image plane, one can reconstruct the desired image by computer processing. A spherical mirror with an orthographic imaging system shows ideal equiareal image characteristics.15 With the spherical mirror in a perspective imaging system, the image area Δs is much smaller at Ω = 135 deg than at Ω = 0 deg. This is due to the lens system. Fig. 12 shows the calculated change in image area Δs = Δw × Δh as Θ is changed. If one uses a long-focal-length lens, Θ becomes small, and almost equiareal image characteristics can be obtained. With the Θ = 12 deg system, Δs of the spherical mirror decreases to 70% at Ω = 135 deg; so, image quality in this area decreases, even if this area is just in focus.

Fig. 12

Influence of imaging lens system (spherical mirror).


Fig. 4(a) shows the virtual image points and depth of field for the spherical mirror. For tangential and sagittal images, the distance of the object from the mirror surface was assumed to be infinity and 5 times the mirror radius, respectively. The spread of tangential image points for the spherical mirror is very small, and all the image points were within the depth of field. On the other hand, the sagittal image points at the peripheral area were not within the depth of field. To estimate the influence of this situation, one needs to go into the details. As shown in Figs. 9 to 11, by selecting the best L0, one can keep the blur size below 1 pixel in the tangential direction almost everywhere. This makes the blur-spot area very small.

3.5.

Equidistant Mirror

In the circular image plane, the radial image height h is proportional to the incident light angle Ω. This is the same as the constant-angular-resolution mirror.15

In the case of an orthographic system, in Eqs. (4) and (7), Ω=2β and h=x. If β=Ax, then h is proportional to Ω, and an equidistant mirror can be realized. The mirror shape is given by

(21)

tanβ=tanAx=dz/dx,

(22)

z=−(1/A)ln(cosAx).
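The shape follows from integrating the slope condition: since the antiderivative of tan(Ax) is −(1/A) ln cos(Ax), the profile is z = −(1/A) ln(cos Ax). A quick numerical check (the value A = 0.8 is an arbitrary illustrative choice, not from the paper):

```python
import math

# Orthographic equidistant mirror, Eqs. (21)-(22): the profile whose slope
# satisfies dz/dx = tan(A*x), checked here by central finite difference.
A = 0.8  # illustrative constant, not a fitted parameter

def z(x):
    return -(1.0 / A) * math.log(math.cos(A * x))

x0, h = 0.5, 1e-6
dz = (z(x0 + h) - z(x0 - h)) / (2 * h)   # numerical dz/dx at x0
```

The recovered slope angle β = atan(dz) equals A·x0, so with an orthographic lens (Ω = 2β) the incident angle is indeed proportional to x, i.e., to h.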

In the case of a perspective system, if β is proportional to x/(m+z), then h is approximately proportional to Ω, and an equidistant mirror may be realized. The mirror shape may be obtained by solving tan[Ax/(m+z)]=dz/dx, where A is a constant of proportionality.

However, it is difficult to obtain the solution of this equation by mathematical analysis. A relatively simple way to realize a curved mirror with the desired characteristics is to use a polynomial expression; an almost equidistant mirror can be realized using a three-term polynomial. At the peripheral area in Fig. 10(b), the images are enlarged in the circumferential direction, so the image area becomes larger, but the circular image of this mirror appears natural.

In this mirror, the distributions of tangential and sagittal image points were relatively large, so the blur size over the entire range of incident light angles could not be tolerated. To reduce this effect, one must use a larger lens F-number. If this problem is solved, this mirror may be used in a large-incident-angle system.

3.6.

Single-Viewpoint Mirror (Hyperboloidal Mirror)

In Eq. (6), if zv=zp+xp/tanΩ is constant, a single viewpoint can be realized. In the case of an orthographic system, a paraboloidal mirror satisfies this condition.5,11 This mirror also satisfies the constant-aspect-ratio condition, that is, in Eq. (12), Δw/Δh=2πh/[(360sinΩ)(dh/dΩ)] is 1.

In the case of the perspective system, with a hyperboloidal mirror, if the principal point of the lens is set at one of the focal points, incident light from various angles Ω would be projected to the other focal point. This single viewpoint is the main advantage of a hyperboloidal mirror.

In the hyperboloidal mirror case, dh/dΩ becomes larger as the incident angle Ω increases. This restricts the use of the hyperboloidal mirror to small values of Ω. With this mirror, the image shape of a small square object remains almost square as the incident angle Ω increases, but the area changes rapidly, as shown schematically in Fig. 10(c). In the peripheral part of the image plane, the image area is very large, but it becomes very small at the central part. Therefore, at large values of Ω, images of large rectangular objects appear as trapezoids.

With the hyperboloidal mirror, the tangential and sagittal image points appeared at the same positions, and thus no astigmatism resulted. But their distribution was very large, and it was impossible to focus the entire image within the depth of field. In Figs. 9 to 11, L0 was set to focus the peripheral area so as to minimize the blur-spot size there; in the central area, however, the blur size was very large. Therefore, a hyperboloidal mirror is not suitable for use over a large field of view.

4.

Image-Processing Software

4.1.

Principles of Image Processing

The circular image produced by a spherical mirror is distorted, and therefore it is difficult for a human observer to interpret. To correct the image, the authors developed image-processing software that can transform a circular image into any desired form, such as a wide rectangular panoramic image, a conventional rectangular image, or a compensated circular image.

Fig. 13 illustrates the principles of this image processing. The processing assumes a virtual screen on which images are projected by a virtual projector. The virtual projector is equivalent to the imaging system, including the mirror described above; its light rays are formed by reverse ray tracing of the rays focused on the image sensor. The virtual screen is equivalent to the processed image itself. One can freely change its size and its spatial relationship to the projector. Various image effects, such as distant view, wide-angle view, and super-wide-angle view, are possible. In addition, one can produce panning and tilting effects by moving the screen left or right, or up and down. The correction applied is determined by the shape of the virtual screen. Two examples of virtual screens are described below.

Fig. 13

Principles of image processing.


One is a “cylindrical-surface virtual screen.” When this cylindrical screen is so placed as to surround a light source, it can display an image with a horizontal viewing angle of up to 360 deg. Because the actual display is on a flat plane, the cylindrical surface must be transformed to form a flat surface.

Another example is the “spherical-surface virtual screen.” As this screen covers the entire field of view, the horizontal viewing angle is 360 deg, and the vertical viewing angle 0 to 135 deg. A flat image is obtained by transforming the spherical surface. The radial height of the displayed image is proportional to the incident light angle.
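As an illustration of the cylindrical-screen transform, here is a minimal sketch (not the authors' software) that unwraps a circular image into a panorama by sampling along radii; the nearest-neighbor interpolation, the image center, and the radius range are assumptions made for the example.

```python
import numpy as np

def unwrap(circular, cx, cy, r_min, r_max, width, height):
    """Map each panorama column to an azimuth and each row to a radius."""
    phi = 2 * np.pi * np.arange(width) / width                  # azimuths
    r = r_min + (r_max - r_min) * np.arange(height) / max(height - 1, 1)
    xs = (cx + np.outer(r, np.cos(phi))).round().astype(int)    # sample grid
    ys = (cy + np.outer(r, np.sin(phi))).round().astype(int)
    return circular[ys.clip(0, circular.shape[0] - 1),
                    xs.clip(0, circular.shape[1] - 1)]

# Tiny synthetic example: a 201x201 image whose value encodes the radius,
# so each unwrapped row should be nearly constant.
yy, xx = np.mgrid[0:201, 0:201]
radius_img = np.hypot(xx - 100, yy - 100)
pano = unwrap(radius_img, cx=100, cy=100, r_min=10, r_max=90,
              width=360, height=81)
```

In a real system, the row-to-radius mapping would come from the mirror's h(Ω) relation rather than the linear ramp used here.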

All image-processing operations were executed in software on a personal computer. With a commercially available computer with a high-speed processor (3.3 GHz) and a USB 2.0 interface, one could acquire 1000×1000-pixel images at 15 fps and execute the processing in real time.

4.2.

Influence of Viewpoint Distribution of Incident Light

In the virtual projector system, it was assumed that the light was projected from a single viewpoint V0. With curved mirrors, except for the single viewpoint mirror, the viewpoint V of incident light varied with the incident light angle Ω. When an object was located sufficiently far away, there was no error caused by the viewpoint distribution in the image plane. On the other hand, when the object was close by, some problems, such as parallax or angle error, occurred as discussed below. When the object T was located at a distance of n times the mirror radius M, its influence on the image was examined. If it is assumed that V is located at V0, there will be an angle error ε. In Fig. 14:

(23)

tanε×[n+x/sinΩ+vcosΩ]=vsinΩ,
where v is the distance between V and V0.
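Rearranging Eq. (23) gives ε explicitly. The sketch below assumes, as in the text, that all lengths (the object distance n, the mirror coordinate x, and the viewpoint offset v) are expressed in units of the mirror radius M; the particular numerical values are illustrative, not taken from the paper.

```python
import math

# Angle error from Eq. (23) when the true viewpoint V sits a distance v from
# the assumed viewpoint V0 and the object is n mirror radii away.
def angle_error(n, v, x, omega_deg):
    o = math.radians(omega_deg)
    return math.atan(v * math.sin(o) / (n + x / math.sin(o) + v * math.cos(o)))

# The error vanishes for v = 0 and shrinks roughly as 1/n with distance.
errs = [angle_error(n, v=0.2, x=0.8, omega_deg=60) for n in (7, 40, 400)]
```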

Fig. 14

Influence of viewpoint distribution of incident light.


The number of pixels equivalent to the angle error ε can be calculated from dh/dΩ and N. Assuming 2N = 1000 pixels, the calculated error for the spherical mirror, which showed the largest viewpoint distribution, is shown in Fig. 15. By selecting a proper V0 position, one can minimize the error. If the object is located at a distance of 40M (i.e., n = 40), which corresponds to about 180 cm for a mirror with M = 43.9 mm, the error is equivalent to ±1 pixel. If n = 7, that is, at a distance of about 30 cm, the error becomes ±5 pixels.

Fig. 15

Effect of viewpoint distribution (spherical mirror). Error is converted into pixel number (2N=1000pixels).


5.

Prototype Development

5.1.

Development of Spherical Mirror

Experimental models of hyperboloidal and spherical mirrors were constructed and tested. The hyperboloidal mirror was precisely machined from an aluminum block. The maximum viewing angle of the mirror was 105 deg at a diameter of 60 mm. Focusing the entire area of the 1-Mpixel image sensor was found to be very difficult.

The spherical mirror was made as described below. The maximum viewing angle of the mirror was 135 deg at a diameter of 88 mm. With a 1-Mpixel image sensor, good focusing performance was confirmed, as predicted theoretically. The main advantages of the spherical mirror are its good focusing performance and ease of manufacture. Therefore, accepting its disadvantages, the spherical mirror was adopted for practical use in the omnidirectional vision sensor of this study, which was required to image a large field of view with high resolution.

A plastic hemispherical mirror was developed using a vacuum-forming method. A plastic substrate was heated and softened and then clamped between a base plate and another plate having a circular hole. A hemisphere was then formed by vacuum suction. The surface of the plastic hemisphere was metal-plated using a method similar to that used in making automobile parts. The mirror thus produced had high reflectance, sufficient strength, and low weight. The diameter of the hemisphere was 100 mm.

5.2.

Spherical Mirror Omnidirectional Vision Sensor

Three prototype models were developed. The first model, shown in Fig. 16(a), had a structure in which the spherical mirror was supported by a transparent plastic rod. The mirror was not covered by any material, so that good images could be obtained. The rod was cut to a trapezoidal cross section so that its side does not show up in the images. Photographs taken by this model and processed with the authors' software are shown in Fig. 17.

Fig. 16

Prototypes of spherical mirror omnidirectional cameras.


Fig. 17

Processed images: (a) circular image, (b) panoramic image (360 deg), and (c) zooming effect. (Isuien Garden, Nara, Japan.)


The second model, shown in Fig. 16(b), was developed for outdoor use. The mirror was supported by a waterproof, dustproof, transparent cover. The mirror was placed in a spherical globe to prevent internal reflections around the cover.

The third model, shown in Fig. 16(c), was developed for use with a digital still camera or video camera for outdoor photographs. This model was compact and portable. In some applications, this camera can be used upside down to obtain superior image characteristics at angles of elevation (Ω>90deg).

6.

Conclusions

To assess the performance of omnidirectional vision sensors, the authors evaluated the image-forming characteristics and the virtual image points, besides estimating the blur-spot size. Also, a practical design method was proposed to satisfy the required characteristics.

The evaluation method presented in this paper is also useful for designing many other omnidirectional mirrors, such as equiareal, paraboloidal, or ellipsoidal mirrors.

A conventional perspective lens system was adopted for the camera. Many drawbacks of this system could be overcome by using a specially designed lens system, for example, a telecentric camera lens for a spherical mirror. It is therefore important to consider the overall system design, including both the lens and the mirror, when developing better omnidirectional cameras.

Appendices

Appendix:

Calculation of the Blur-Spot Size and the Depth of Field

The size of the blur spot of a virtual image in the object plane is calculated simply by using the similar triangles shown in Fig. 18 together with the lens equation. The lens has focal length f, aperture diameter D, and f-stop number F; their relationship is given by

(24)

F=f/D.
The image sensor plane is located at a distance b0 from the lens, and the object at a distance L0 on the opposite side of the lens. If the object image is precisely in focus, the thin-lens equation is as follows:

(25)

1/L0+1/b0=1/f.
If the object is moved to a distance L from the lens, its image forms at a distance b from the lens. This gives

(26)

1/L+1/b=1/f.
At the image sensor plane, the object’s image is out of focus and blurred. The size of the blur Δ is calculated using triangles as follows:

(27)

Δ/D = (b0 - b)/b.

Fig. 18

Calculation of blur-spot size.

OE_51_1_013005_f018.png

From Eqs. (24)–(27), Δ is expressed as follows:

(28)

Δ = fD(L - L0)/[(L0 - f)L] = (f²/F)(L - L0)/[(L0 - f)L],

(29)

if L0 ≫ f, Δ = (f²/F)(L - L0)/(L0L).

Using Eq. (29), one can calculate the distribution of Δ. If the blurred-image size |Δ| is smaller than |δ|, where δ is the so-called "permissible circle of confusion," humans cannot recognize the blur, and hence it can be tolerated. In this paper, δ is assumed to be the size of one pixel of the image sensor.
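As a numeric sanity check of Eq. (29), the short Python sketch below evaluates Δ over a few object distances. The focal length, f-number, and in-focus distance are illustrative values chosen for this example, not parameters taken from the authors' design.

```python
def blur_spot(L, L0, f, F):
    """Blur-spot size per Eq. (29), valid when L0 >> f.

    L  : object distance (mm)
    L0 : distance at which the image is exactly in focus (mm)
    f  : lens focal length (mm)
    F  : f-stop number (dimensionless)
    """
    return (f**2 / F) * (L - L0) / (L0 * L)

# Illustrative parameters (not from the paper): an f = 8 mm lens at F = 4,
# focused at L0 = 500 mm.
f, F, L0 = 8.0, 4.0, 500.0
for L in (250.0, 500.0, 1000.0, 1e6):
    print(f"L = {L:9.0f} mm -> blur = {blur_spot(L, L0, f, F):+.4f} mm")
```

Note that as L grows without bound the blur approaches f²/(F·L0), so the far-field defocus is bounded; the near side (L < L0) is where |Δ| grows quickly.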

When Δ is +δ or -δ, L is Lf or Lr, respectively, and they are given as follows:

(30)

Lf = f²L0/(f² - δFL0)

(31)

and Lr = f²L0/(f² + δFL0).

The tolerable region along the optical axis is the depth of field, given as follows:

(32)

Lf - Lr = f²L0/(f² - δFL0) - f²L0/(f² + δFL0) = 2δFL0²f²/[(f² - δFL0)(f² + δFL0)]. Assuming that Lh = f²/(δF)

(33)

and Lh ≫ L0, then Lf - Lr ≈ 2δFL0²/f² = 2L0²/Lh.
Here, Lh is the so-called "hyperfocal distance."
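Equations (30)–(33) are straightforward to verify numerically. The sketch below computes the depth-of-field limits and the hyperfocal distance for illustrative parameter values (not the authors' design); the permissible circle of confusion δ is taken as a hypothetical 5 μm pixel.

```python
def depth_of_field(L0, f, F, delta):
    """Far/near focus limits per Eqs. (30)-(31) and the hyperfocal
    distance Lh = f**2 / (delta * F); all lengths in mm."""
    Lf = f**2 * L0 / (f**2 - delta * F * L0)  # far limit  (blur = +delta)
    Lr = f**2 * L0 / (f**2 + delta * F * L0)  # near limit (blur = -delta)
    Lh = f**2 / (delta * F)
    return Lf, Lr, Lh

# Illustrative values: f = 8 mm, F = 4, delta = 0.005 mm (5 um pixel),
# focused at L0 = 500 mm.
Lf, Lr, Lh = depth_of_field(500.0, 8.0, 4.0, 0.005)
print(f"Lf = {Lf:.1f} mm, Lr = {Lr:.1f} mm, hyperfocal Lh = {Lh:.0f} mm")
# Eq. (33): when Lh >> L0, the depth of field Lf - Lr ~ 2*L0**2/Lh.
print(f"exact DOF = {Lf - Lr:.1f} mm, approximation = {2*500.0**2/Lh:.1f} mm")
```

The exact and approximate depths of field agree to within a few percent here because Lh is several times L0; the approximation degrades as L0 approaches the hyperfocal distance.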

In omnidirectional mirrors, the virtual image points of the tangential and sagittal images are different. In both cases the blurred images are oval, but their sizes are usually different. For

(34)

a tangential virtual image, L = Lt = m + zqt,

(35)

and for a sagittal image, L = Ls = m + zqs.
At L = L0, Δ = 0; therefore, the selection of L0 is important to minimize the distribution of Δ. Using Eq. (29), one can calculate the size of the blurred image in pixel units in the circular image plane. However, pixels subtend different view angles. The tangential and sagittal blur sizes, expressed in view angle, can be calculated using Eqs. (36) and (37), respectively.

(36)

Δt/(dh/dΩ) = (f²/F)(Lt - L0)/[(LtL0)(dh/dΩ)],

(37)

Δs/[(2πh/360)sin Ω] = (f²/F)(Ls - L0)/[(LsL0)(2πh/360)sin Ω].
Solid angles are usually expressed in steradians (sr), but in this paper they are expressed in square degrees.
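Equations (36) and (37) divide the linear blur of Eq. (29) by the image scale to express it as a view angle. Since dh/dΩ depends on the particular mirror profile, the sketch below takes it as a caller-supplied number; every parameter value here is made up for illustration and is not taken from the paper.

```python
import math

def angular_blur_t(Lt, L0, f, F, dh_dOmega):
    """Tangential blur in view angle per Eq. (36); dh_dOmega is the
    image scale dh/dOmega (mm per degree) of the mirror being modeled."""
    return (f**2 / F) * (Lt - L0) / (Lt * L0 * dh_dOmega)

def angular_blur_s(Ls, L0, f, F, h, Omega_deg):
    """Sagittal blur in view angle per Eq. (37); h is the image height
    (mm) and Omega_deg the elevation angle in degrees."""
    scale = (2 * math.pi * h / 360) * math.sin(math.radians(Omega_deg))
    return (f**2 / F) * (Ls - L0) / (Ls * L0 * scale)

# Hypothetical numbers for illustration only:
print(angular_blur_t(1000.0, 500.0, 8.0, 4.0, dh_dOmega=0.02))
print(angular_blur_s(1000.0, 500.0, 8.0, 4.0, h=2.0, Omega_deg=90.0))
```

In a real design, Lt and Ls would come from Eqs. (34) and (35), and dh/dΩ from the mirror's projection curve; this sketch only shows the unit conversion.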

Acknowledgments

The authors express their gratitude to Takashi Hirose and Koji Mori, former president and manager, respectively, of Rosel Electronics Corporation, for their support of this research. They also thank K. P. Thompson, ORA vice president of optical engineering services, who suggested Brueggemann's book Conic Mirrors as a reference.

References

1. 

Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proc. IEEE Intl. Workshop on Intelligent Robots Syst. IROS '90, pp. 181–187 (1990).

2. 

J. Hong, X. Tan, B. Pinette, R. Weiss, and E. M. Riseman, "Image-based homing," in Proc. 1991 IEEE Intl. Conf. on Robotics and Automation, pp. 620–625 (1991).

3. 

A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proc. ROSE 2005 IEEE Intl. Workshop Robotic Sensors Environ., pp. 8–13 (2005). http://www1.parkcity.ne.jp/ohtephoto/QA/05ROSE2005FinalManuscript2.pdf

4. 

K. Yamazawa, Y. Yagi, and M. Yachida, "Omnidirectional imaging with hyperboloidal projection," in Proc. IEEE/RSJ Conf. Intelligent Robots Syst., pp. 1029–1034 (1993).

5. 

S. K. Nayar, "Catadioptric omnidirectional camera," in Proc. IEEE Conf. Comp. Vision Pattern Recog., pp. 482–488 (1997).

6. 

H. Ishiguro, "Development of low-cost omnidirectional vision sensors and their applications," in Proc. Intl. Conf. on Information Systems, Analysis and Synthesis, pp. 433–439 (1998).

7. 

S. K. Nayar and V. Peri, "Folded catadioptric cameras," in Proc. IEEE Conf. Comp. Vision Pattern Recog., Vol. 2, pp. 217–223 (1999).

8. 

A. M. Bruckstein and T. J. Richardson, "Omniview cameras with curved surface mirrors," Bell Lab Tech Memo, Bell Laboratories, pp. 1–6 (1996) (0-7695-0704-2/00, 2000 IEEE).

9. 

S. Baker and S. K. Nayar, "A theory of catadioptric image formation," in Proc. IEEE 6th Intl. Conf. Comp. Vision, pp. 35–42 (1998).

10. 

K. Daniilidis and C. Geyer, "Omnidirectional vision: theory and algorithms," in Proc. IEEE 15th Intl. Conf. Pattern Recog., pp. 89–96 (2000).

11. 

S. Derrien and K. Konolige, "Approximating a single viewpoint in panoramic imaging devices," in Proc. 2000 IEEE Intl. Conf. Robotics Automation, pp. 3931–3938 (2000).

12. 

S.-S. Lin and R. Bajcsy, "Single-viewpoint, catadioptric cone mirror omnidirectional imaging theory and analysis," J. Opt. Soc. Am. A 23, 2997–3015 (2006). http://dx.doi.org/10.1364/JOSAA.23.002997

13. 

H. P. Brueggemann, Conic Mirrors, The Focal Press, London and New York (1968).

14. 

G. S. Monk, Light, Principles and Experiments, McGraw-Hill, New York (1937).

15. 

R. A. Hicks and R. K. Perline, "Equi-areal catadioptric sensors," in Proc. Third Workshop on Omnidirectional Vision, pp. 13–18 (2002).

Biography

OE_51_1_013005_d001.png

Akira Ohte received his BS and Dr. Eng. degrees in applied physics from Tokyo University in 1961 and 1980, respectively. In 1961, he joined Yokogawa Electric Works Ltd., Tokyo, Japan, where, as a research engineer, he developed transducers, sensors, analog control systems, and precision NQR thermometers. He also promoted R&D of medical instruments and coherent optical measuring instruments. In 1990, he was appointed division manager of the Electronic Devices Division, and in 1996, vice president and director of corporate R&D for the Yokogawa Electric Corporation. In 1998, he was appointed vice president of Yokogawa Research Institute Corporation. Since retiring from corporate management, he has worked as a research consultant. He is a Life Fellow of the IEEE, a Fellow of SICE Japan, and a member of SPIE.

OE_51_1_013005_d002.png

Osamu Tsuzuki received his BE degree in precision mechanics from Chuo University in 1972 and the Diplom-Ingenieur degree in construction and manufacturing from Technische Universität Berlin, Germany, in 1978. His research interests include numerical analysis, computational mechanics, and artificial intelligence. He has served as director of the Development Department at Rosel Electronics Corporation, Tokyo, Japan, since 1988, where he works on applications of software computing techniques.
