Simple wave-field rendering for photorealistic reconstruction in polygon-based high-definition computer holography
Kyoji Matsushima, Hirohito Nishi, and Sumio Nakahara
J. of Electronic Imaging 21(2), 023002 (26 April 2012). doi:10.1117/1.JEI.21.2.023002
Abstract
A simple and practical technique is presented for creating fine three-dimensional (3D) images with polygon-based computer-generated holograms. The polygon-based method is a technique for computing the optical wave-field of a virtual 3D scene given by a numerical model. The presented method takes less computation time than common point-source methods and produces fine spatial 3D images of deep 3D scenes that convey a strong sensation of depth, unlike conventional 3D systems providing only binocular disparity. However, smooth curved surfaces cannot be faithfully reconstructed by the basic polygon-based method, because the surfaces are approximated by planar polygons. This problem is resolved by introducing a simple rendering technique that is almost the same as that in common computer graphics, exploiting the similarity of the polygon-based method to CG rendering. Two actual computer holograms are presented to verify and demonstrate the proposed technique. One is a hologram of a live face whose shape is measured using a 3D laser scanner that outputs polygon-mesh data. The other is of a scene including the moon. Both are created employing the proposed rendering techniques of texture mapping of real photographs and smooth shading.

1. Introduction

In classical holography, the object wave of a real object is recorded on light-sensitive films employing optical interference with a reference wave. The object wave is optically reconstructed through diffraction by the fringe pattern. Therefore, real existing objects are required to create a three-dimensional (3D) image in classical holography. For a long time, it was not possible to create fine synthetic holograms for virtual 3D scenes such as those in modern computer graphics (CG).

Recently, the development of computer technologies and new algorithms have made it possible to create brilliant synthetic holograms.1–6 These holograms, whose dimensions can exceed 4 billion pixels, optically reconstruct true spatial images that give continuous motion parallax in both the horizontal and vertical directions without any additional equipment such as polarizing eyeglasses. The reconstructed spatial images provide almost all depth cues, such as binocular disparity, accommodation, occlusion, and convergence. Thus, the computer holograms give viewers a strong sensation of depth that has not been possible with conventional 3D systems providing only binocular disparity. Unfortunately, these computer holograms cannot be reconstructed by current video devices such as liquid crystal displays because of their extremely high definition. However, the high-definition holograms presage the great future of holographic 3D displays beyond Super Hi-Vision.

The synthetic fringe of a high-definition computer hologram is computed using a new algorithm referred to as the polygon-based method7 instead of conventional point-based methods.8,9 In point-based methods, object surfaces are regarded as being covered with many point sources of light, whereas in the polygon-based method, object surfaces are approximated by planar polygons regarded as surface sources of light. The point-based methods are simple but commonly time-consuming. It is almost impossible to create full-parallax high-definition computer holograms of occluded 3D scenes using point-based methods, even though many techniques have been proposed to accelerate the computation.10–15 The polygon-based method remarkably speeds up the computation of the synthetic fringe because far fewer polygons than point sources are needed to form a surface. Thus, some variations of the polygon-based method have been proposed to compute the fringe pattern even more quickly.16,17 In polygon-based computer holography, the silhouette method is also used for light-shielding behind an object.1,18,19

A disadvantage of the polygon-based method is that techniques have not been established for photorealistic reconstruction in high-definition holography. Some early high-definition holograms were created employing a basic diffuser model that simply corresponds to the flat shading of CG. As a result, the reconstructed surface is not a smooth curved surface but an angular faceted one, and the borders of polygons are clearly perceived in the reconstruction. This problem is peculiar to polygon-based methods. In the early development of CG, the rendering of polygon-mesh objects suffered from the same problem, but it was resolved with simple techniques.

Photorealistic reconstruction of computer holograms has been discussed using a generic theoretical model in the literature.20 However, although the discussion is based on a generic model, the fringe patterns are eventually computed with the point-based method. In addition, the surface patches used in that study are assumed to be parallel to the hologram. It is thus difficult to apply this model to our polygon-based holograms, in which the objects are composed of slanted patches. In this paper, we present a simple technique for the photorealistic reconstruction of diffuse surfaces in high-definition computer holography. The technique makes use of the similarity of the polygon-based method to conventional CG, which makes the proposed technique simple, fast, and practical. Here, we use the term “rendering” to express the computation of the fringe pattern and the creation of 3D images by computer holograms because of this similarity to CG. Since the polygon-based method is a wave-oriented method, unlike the point-based methods, the rendering technique is referred to as “wave-field rendering” in this paper.

Two actual high-definition holograms are created to verify the techniques proposed in this paper. One is a 3D portrait; i.e., a hologram that reconstructs the live face of a girl. The polygon-mesh of the live face is measured using a 3D laser scanner. A photograph of the face is texture-mapped on the polygon-mesh with smooth shading. The other is for a scene of the moon floating in a starry sky. A real astrophotograph of the moon is mapped onto a polygon-mesh sphere. Both holograms comprise more than 8 billion pixels and give a viewing angle of more than 45° in the horizontal and 36° in the vertical, and thus reconstruct almost all depth cues. As a result, these holograms can reconstruct fine true spatial 3D images that give a strong sensation of depth to viewers.

2. Summary of the Principle of the Polygon-Based Method

The polygon-based method is briefly summarized in this section for convenience of explanation. In the point-based method, spherical waves emitted from point sources are computed and superposed in the hologram plane, as illustrated in Fig. 1(a). A considerably high surface density of point sources, such as 10³ to 10⁴ points/mm², is required to create a smooth surface with this method. The computation time is proportional to the product of the number of point sources and the number of pixels in the hologram. Since this product is a gigantic number for high-definition holograms, the computation commonly takes a ridiculously long time. The polygon-based method is illustrated in Fig. 1(b). Object surfaces are composed of many polygons in this method. Each polygon is regarded as a surface source of light whose shape is polygonal. Wave-fields emitted from the slanted polygons are computed by numerical methods based on wave optics. Even though the computation time for an individual polygon is longer than that for a point source, the total computation time of the polygon-based method is remarkably shorter than that of point-based methods, because far fewer polygons than point sources are required to form a surface.

Fig. 1 Schematic comparison of the point-based method (a) and the polygon-based method (b).

2.1. Theoretical Model of Surface Sources of Light

We can see real objects illuminated by a light source because the object surfaces scatter the light, as shown in Fig. 2(a). If we suppose that the surface is composed of polygons and focus on one of them, the polygon can be regarded as a planar distribution of optical intensity in 3D space. This distribution of light is similar to that in a slanted aperture irradiated by a plane wave, as in (b). The aperture has the same shape and slant as the polygon. However, a simple polygonal aperture may not behave as a surface source of light, because the aperture size is usually too large to diffract the incident light appreciably. As a result, the light passing through the aperture does not diffuse sufficiently to spread over the whole viewing zone of the hologram. The polygonal surface source should therefore be imitated by a diffuser mounted in the aperture, with a shape and tilt angle corresponding to those of the polygon. Figure 2(b) shows the theoretical model of a surface source of light for wave-field rendering.

Fig. 2 Theoretical model of polygonal surface sources of light (b) that imitate the surface of an object (a).

2.2. Surface Function

To compute the wave-field diffracted by the diffuser with polygonal shape, a surface function is defined for each individual polygon in a local coordinate system that is also specific to the individual polygon. An example of the surface function is shown in Fig. 3, where the surface function of polygon 2 of a cubic object in (a) is shown in (b). The surface function hn(xn,yn) for polygon n is generally given in the form

(1) hn(xn,yn) = an(xn,yn) exp[iϕ(xn,yn)],
where an(xn,yn) and ϕ(xn,yn) are the real-valued amplitude and phase distributions defined in the local coordinates (xn,yn,0). The phase pattern ϕ(xn,yn) is not visible in principle, because all image sensors, including the human retina, detect only the intensity of light, whereas the amplitude pattern an(xn,yn) directly determines the appearance of the polygon. Therefore, the diffuser should be implemented using the phase pattern, while the shape of the polygon should be provided by the amplitude pattern.
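As a concrete illustration (a toy sketch, not the authors' code; numpy is assumed and all names and sizes here are hypothetical), a surface function of Eq. (1) can be assembled from a polygonal amplitude mask and a wideband pseudorandom phase acting as the diffuser:

```python
import numpy as np

def surface_function(amplitude, rng=None):
    """Sketch of Eq. (1): h_n = a_n * exp(i * phi).

    `amplitude` is the real-valued a_n(xn, yn), zero outside the polygon.
    phi is a wideband pseudorandom phase that plays the role of the
    diffuser, so the polygon radiates over a wide angle; for a diffuse
    object the same phase pattern may be reused for every polygon (Sec. 2.2).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    phi = rng.uniform(0.0, 2.0 * np.pi, size=amplitude.shape)
    return amplitude * np.exp(1j * phi)

# Hypothetical example: a 256 x 256 local frame containing a triangular polygon.
N = 256
y, x = np.mgrid[0:N, 0:N]
inside = (x > 64) & (y > 64) & (x + y < 320)  # crude triangle mask
h = surface_function(inside.astype(float))    # |h| = 1 inside, 0 outside
```

The amplitude mask fixes what the viewer sees, while the random phase only controls how the light spreads, in line with the text above.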

Fig. 3 An example of surface functions (b) for polygon 2 of a cubic object (a). The amplitude image of the polygon field (c) after the rotational transform agrees with the shape of the original polygon.

Since this paper discusses the rendering of diffuse surfaces, the phase pattern used for rendering should have a wideband spectrum. In this case, a given single phase pattern can be used for all polygons. Note that if a specular surface is rendered, the spectral bandwidth should be restricted to limit the direction of reflection. Furthermore, the center of the spectrum should be shifted depending on the direction of the polygon.5,21,22

2.3. Computation of Polygon Fields

The numerical procedure for computing polygon fields is shown in Fig. 4. The surface function of a polygon, generated from the vertex data of the polygon, can be regarded as a distribution of complex amplitudes; i.e., the wave-field of the surface source of light. However, the surface function is usually given in a plane that is not parallel to the hologram. This means that the polygon field cannot be computed in the hologram plane using conventional techniques for field diffraction. Therefore, the rotational transform of light23,24 is employed to calculate the polygon field in a plane parallel to the hologram. The resultant wave-field is shown in Fig. 3(c). The polygon field after the rotational transform is then propagated over a short distance employing the angular spectrum method25 (AS) or the band-limited angular spectrum method26 (BL-AS) to gather and integrate all the polygon fields composing the object in a given single plane, called the object plane. Note that the object plane is not the hologram plane. The object plane should not be placed far from the object, because polygon fields spread more in a farther plane and the computation thus takes longer. The best solution is therefore most likely to place the object plane so that it crosses the object.1
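The rotational transform itself involves a remapping in the frequency domain and is beyond a short sketch, but the AS/BL-AS propagation step that follows it can be illustrated as below. This is a sketch under assumptions (numpy, equal sampling pitch in x and y, monochromatic field); the band limit mimics the aliasing-free bound of Ref. 26 in simplified form:

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, dz, band_limit=True):
    """Propagate a sampled field u0 over a distance dz with the angular
    spectrum method (AS, Ref. 25); a sketch, not the authors' code.

    u0 : complex 2-D field sampled at pitch dx (same in x and y).
    The transfer function is exp(i*2*pi*w*dz), with
    w = sqrt(1/lambda^2 - fx^2 - fy^2); evanescent components are dropped.
    band_limit crudely mimics the band-limited AS method (BL-AS, Ref. 26)
    by zeroing spatial frequencies beyond an aliasing-free bound.
    """
    ny, nx = u0.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    w2 = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(w2 > 0,
                 np.exp(2j * np.pi * np.sqrt(np.maximum(w2, 0.0)) * dz),
                 0)
    if band_limit:
        # simplified aliasing-free bandwidth of the transfer function
        f_lim_x = 1.0 / (wavelength * np.sqrt((2 * dz / (nx * dx))**2 + 1))
        f_lim_y = 1.0 / (wavelength * np.sqrt((2 * dz / (ny * dx))**2 + 1))
        H = H * ((np.abs(FX) <= f_lim_x) & (np.abs(FY) <= f_lim_y))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Toy usage: a unit plane wave stays a unit-magnitude plane wave.
u1 = angular_spectrum(np.ones((64, 64), dtype=complex), 632.8e-9, 1e-6, 1e-3)
```

Each gathered polygon field would be propagated this way to the common object plane.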

Fig. 4 Numerical procedure for computing entire object fields.

The polygon fields gathered in the object plane propagate to the hologram or the next object plane closer to the hologram to shield the light behind the object using the silhouette method.18,19 However, the frame buffer for the whole field in the object plane is commonly too large to be simultaneously stored in the memory of a computer. Therefore, the field is segmented and propagated segment by segment, using off-axis numerical propagation such as the shifted Fresnel method1,27 (Shift-FR) or shifted angular spectrum method2,28 (Shift-AS).
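The lateral-shift property that underlies such off-axis propagation can be sketched as follows (a simplified illustration in the spirit of the shifted angular spectrum method of Ref. 28, not the authors' implementation; the proper band limitation of Ref. 28 is omitted, so the shift and distance must stay small):

```python
import numpy as np

def shifted_angular_spectrum(u0, wavelength, dx, dz, x0, y0):
    """Off-axis propagation sketch: the destination sampling window is
    displaced by (x0, y0) from the source window. The displacement is
    realized by a linear phase factor exp(i*2*pi*(fx*x0 + fy*y0))
    multiplied onto the angular spectrum before the inverse FFT."""
    ny, nx = u0.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    w2 = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(w2 > 0,
                 np.exp(2j * np.pi * (np.sqrt(np.maximum(w2, 0.0)) * dz
                                      + FX * x0 + FY * y0)),
                 0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

Propagating each segment of the object-plane frame buffer with its own window offset is what allows the huge field to be processed segment by segment.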

3. Wave-Field Rendering for the Smooth Shading and Texture-Mapping of Diffuse Surfaces

3.1. Basic Formulation

In the polygon-based method, polygon fields are emitted toward the hologram along the optical axis. The brightness of the surface observed by viewers is estimated by radiometric analysis7:

(2) Ln(xn,yn) ∝ σ an²(xn,yn) / (π tan²ψd cosθn),
where an(xn,yn) is again the amplitude distribution of the surface function, whose sampling density is given by σ. The normal vector of polygon n forms the angle θn with the optical axis, as shown in Fig. 5. Here, we assume that the polygon field is approximately spread over the solid angle π tan²ψd by the wideband phase distribution ϕ(xn,yn).

Fig. 5 Radiometric model of reconstructed polygon surfaces.

The surface brightness given by relation (2) becomes infinite in the limit θn → π/2; i.e., the brightness diverges because the analysis assumes that the hologram can reconstruct light with an unlimited dynamic range. However, the dynamic range of the reconstructed light is actually limited in real holograms because the fringe patterns are never printed with full contrast of transmittance or reflectance and are commonly quantized; e.g., the fringe pattern is binarized in our case. Therefore, we adopt the following simplified, non-divergent formulas to estimate the brightness.

(3) In(xn,yn) = L0 [(1 + γ)/(cosθn + γ)] an²(xn,yn),

(4) L0 = σ/(π tan²ψd),
where both L0 and γ are constants. The constant γ is introduced a priori to avoid the divergence of brightness. This constant should be determined depending on the printing process of the fringe pattern; it is fitted to the optical reconstruction of the fabricated hologram.

As a result, the wave-field rendering of diffused surfaces is given by

(5) an(xn,yn) = {[(cosθn + γ)/(1 + γ)] Ishape,n(xn,yn) Ishade,n(xn,yn) Itex,n(xn,yn)}^(1/2),
where L0 ≡ 1 and the brightness In(xn,yn) of Eq. (3) is replaced by the product of three distributions, Ishape,n(xn,yn), Ishade,n(xn,yn), and Itex,n(xn,yn), which are given in local coordinates for each polygon and play the roles of shaping the polygon, shading, and texture-mapping the surface, respectively.
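As a minimal numerical sketch of Eqs. (3) to (5) (assuming numpy; the function name and default are ours), the amplitude pattern can be evaluated sample by sample from the three rendering distributions and the non-divergent obliquity factor:

```python
import numpy as np

def polygon_amplitude(cos_theta, I_shape, I_shade, I_tex, gamma=0.3):
    """Amplitude pattern of Eq. (5) with L0 = 1: the product of the
    shape, shade, and texture distributions, weighted by the
    non-divergent obliquity factor (cos(theta_n) + gamma)/(1 + gamma)
    from Eq. (3). gamma = 0.3 is the value used for 'The Moon'
    (Table 1); the factor stays finite even at grazing angles."""
    factor = (cos_theta + gamma) / (1.0 + gamma)
    return np.sqrt(factor * I_shape * I_shade * I_tex)
```

Note that for a face-on polygon (cosθn = 1) with unit distributions the amplitude is 1, while for θn → π/2 it remains finite, unlike relation (2).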

3.2. Shaping and Shading

The shape of a polygon is given by the amplitude pattern of the surface function of Eq. (1). Thus, the distribution that provides shape to the polygon is given by

(6) Ishape,n(xn,yn) = 1 inside the polygon and 0 outside the polygon.
In the polygon-based method, the shading technique for the object surface is essentially the same as that of CG. The distribution for shading is given by

(7) Ishade,n(xn,yn) = Is,n(xn,yn) + Ienv,
where Ienv gives the degree of ambient light. When flat shading is used, the distribution Is,n(xn,yn) is a constant given by Lambert’s cosine law:

(8) Is,n(xn,yn) ∝ Nn·Li,
where Nn and Li are the normal vector of the polygon n and a unit vector pointing from the surface to the virtual light source of the 3D scene.

Our previous computer holograms such as “The Venus”1 or “Moai I/II”2 were created by flat shading; i.e., using the amplitude pattern given by Eqs. (5) to (8). As a result, the borders of the polygons are visible in the optical reconstruction as shown in Fig. 6. This problem is attributed to the shading technique used.

Fig. 6 Photographs of the optical reconstruction of a polygon-based high-definition computer hologram named “Moai II”.2 The camera is focused on the near moai (a) and the far moai (b).

Well-known smooth shading techniques of CG, such as Gouraud and Phong shading, are also applicable to wave-field rendering. In these cases, the distribution Is,n(xn,yn) is not a constant but a function of the local coordinates (xn,yn). For example, Gouraud shading determines the local brightness within a polygon through linear interpolation of the brightness values at the vertices of the polygon. Thus, the local brightness is the same on either side of the border between two polygons.
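The interpolation step of Gouraud shading can be sketched as follows (a hypothetical helper, assuming numpy; in practice the per-vertex intensities would come from Lambert's cosine law of Eq. (8) evaluated with averaged vertex normals):

```python
import numpy as np

def gouraud_shade(tri, vertex_intensity, xs, ys):
    """Barycentric interpolation of vertex intensities over a triangle,
    as in Gouraud shading; yields Is,n(xn, yn) inside the polygon.

    tri : (3, 2) array of vertex coordinates in the local plane.
    vertex_intensity : intensities at the three vertices.
    xs, ys : arrays of sample coordinates; points outside get 0."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    det = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    l0 = ((y1 - y2) * (xs - x2) + (x2 - x1) * (ys - y2)) / det
    l1 = ((y2 - y0) * (xs - x2) + (x0 - x2) * (ys - y2)) / det
    l2 = 1.0 - l0 - l1
    inside = (l0 >= 0) & (l1 >= 0) & (l2 >= 0)
    I = (l0 * vertex_intensity[0] + l1 * vertex_intensity[1]
         + l2 * vertex_intensity[2])
    return np.where(inside, I, 0.0)

# At the centroid, the interpolated value is the mean of the vertex values.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
I_c = gouraud_shade(tri, np.array([0.0, 1.0, 2.0]),
                    np.array([1.0 / 3.0]), np.array([1.0 / 3.0]))
```

Because the interpolated value at a shared vertex is the vertex intensity itself, two adjacent polygons agree along their common border, which is exactly what hides the seams.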

Figure 7 compares surface functions for flat and smooth shading. The phase distribution for smooth shading in (b) is the same as that for flat shading in (a), but the amplitude distribution in (b) is given by the same technique as used in Gouraud shading in CG. Figure 8 shows the simulated reconstruction of a computer hologram of twin semi-spheres created with flat and Gouraud shading. Here, each semi-sphere is composed of 200 polygons and is 23 mm in diameter. The hologram dimensions are 65,536×32,768 pixels. The technique of numerical image formation29 based on wave optics was used for the simulated reconstruction. It is verified that the seams of polygons are no longer visible for the left sphere (a), created using the same technique as in Gouraud shading.

Fig. 7 Examples of the surface function for flat shading (a) and Gouraud shading (b).

Fig. 8 Simulated reconstruction of spheres rendered by Gouraud shading (a) and flat shading (b).

3.3. Texture Mapping

Texture mapping is carried out in the polygon-based method simply by providing the distribution Itex,n(xn,yn) through some projection of the texture image onto the polygon. An example of the surface function for texture mapping is shown in Fig. 9(a). Here, the object is a sphere and the mapping image is an astrophotograph of the real moon shown in (b). The distribution Itex,n(xn,yn) of polygon n is provided by simple orthogonal projection and interpolation of the astrophotographic image, as in (c).
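A minimal sketch of such an orthogonal projection (hypothetical names and parameterization, assuming numpy; nearest-neighbour lookup stands in for the interpolation mentioned above):

```python
import numpy as np

def texture_orthographic(texture, xs, ys, scale, offset=(0.0, 0.0)):
    """Itex,n by simple orthogonal projection: each local sample
    (xn, yn) is mapped straight onto the texture image and read out
    with nearest-neighbour lookup. `scale` is texture pixels per unit
    of local coordinate; coordinates falling outside are clamped."""
    h, w = texture.shape[:2]
    u = np.clip(np.rint((xs - offset[0]) * scale).astype(int), 0, w - 1)
    v = np.clip(np.rint((ys - offset[1]) * scale).astype(int), 0, h - 1)
    return texture[v, u]
```

The returned intensities feed directly into the product of Eq. (5) alongside the shape and shade distributions.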

Fig. 9 (a) Example of the surface function for texture mapping. An astrophotograph (b) is mapped onto the polygon-mesh sphere by orthogonal projection of the mapping image (c).

4. Creation of Computer Holograms

Two high-definition computer holograms named “The Moon” and “Shion” were created using the proposed techniques. The two holograms have the same number of pixels; that is, approximately nine billion pixels. The parameters used in creating these holograms are summarized in Table 1.

Table 1 Summary of parameters used in creating the computer holograms “The Moon” and “Shion”.

Total number of pixels: 8.6×10⁹ (131,072×65,536) for both holograms
Number of segments: 4×2 for both
Reconstruction wavelength: 632.8 nm for both
Pixel pitches: 0.8 μm × 1.0 μm (The Moon); 0.8 μm × 0.8 μm (Shion)
Size of hologram: 104.8×65.5 mm² (The Moon); 104.8×52.4 mm² (Shion)
Size of main object (W×H×D): 55 mm in diameter (The Moon); 60×49×28 mm³ (Shion)
Number of polygons (front face only): 776 (The Moon); 2702 (Shion)
Background object: 300 point sources of light (The Moon); 2356×1571-pixel image (Shion)
Size of background object (W×H×D): 300×180×150 mm³ (The Moon); 84×56 mm² (Shion)
Rendering parameters (γ, Ienv): 0.3, 0.2 (The Moon); 0.01, 0.3 (Shion)

The fringe patterns of the holograms were printed on photoresist coated on ordinary photomask blanks using a DWL-66 laser lithography system made by Heidelberg Instruments. After the photoresist was developed, the chromium thin film was etched using the ordinary photomask fabrication process to form the binary transmittance pattern. As a result, the fabricated holograms have fringes of binary amplitude.

4.1. The Moon

The Moon is a computer hologram created using the techniques of texture mapping and flat shading. The 3D scene is shown in Fig. 10. The main object is a sphere composed of 1600 polygons. The diameter of the sphere is 55 mm. The mapping image is again the astrophotograph of the real moon shown in Fig. 9(b). The background of this hologram is not a 2D image but 300 point sources of light. Since this background is intended to appear as stars in space, the positions and amplitudes of the point sources are given by a random-number generator.
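Such a point-source background can be sketched as below (a toy example with made-up sizes and depth ranges, assuming numpy; the real hologram has billions of pixels and its own source distribution):

```python
import numpy as np

def star_field(n_stars, holo_shape, pitch, wavelength, seed=1):
    """Superpose spherical waves from randomly placed point sources in
    the hologram plane, like the random 'stars' behind the moon object.
    All sizes here are toy values chosen for illustration."""
    rng = np.random.default_rng(seed)
    ny, nx = holo_shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = (x - nx / 2) * pitch
    y = (y - ny / 2) * pitch
    field = np.zeros(holo_shape, dtype=complex)
    k = 2 * np.pi / wavelength
    for _ in range(n_stars):
        xs, ys = rng.uniform(-nx * pitch / 2, nx * pitch / 2, 2)
        zs = rng.uniform(0.1, 0.25)            # assumed depth behind hologram [m]
        amp = rng.uniform(0.2, 1.0)            # random star brightness
        r = np.sqrt((x - xs)**2 + (y - ys)**2 + zs**2)
        field += amp * np.exp(1j * k * r) / r  # spherical wave of one star
    return field
```

In the actual hologram this background field is masked by the silhouette of the moon object before being superposed, which produces the occlusion described below.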

Fig. 10 3D scene of “The Moon”.

The optical reconstruction of The Moon is shown in Fig. 11 and Video 1. Since the light of the background stars is shielded by the silhouette of the moon object, the hologram produces a strong sensation of perspective. Seams of polygons are slightly noticeable because of the flat shading of the sphere.

Fig. 11 Optical reconstruction of “The Moon” using transmitted illumination from a He-Ne laser (a) and reflected illumination from an ordinary red LED (b) (Video 1, WMV, 7.8 MB) [URL: http://dx.doi.org/10.1117/1.JEI.21.2.023002.1]. The sphere object is rendered using texture mapping and flat shading.

4.2. Shion

Shion is a hologram that reconstructs the live face of a girl. However, the recording of the light emitted from the live face does not have the same meaning as in classical holography. Instead of recording the wave field of the face, the 3D shape of the face was measured using a 3D laser scanner. The polygon mesh measured using a Konica Minolta Vivid 910 is shown in Fig. 12(a). The photograph simultaneously taken by the 3D scanner was texture-mapped onto the polygon-mesh surface, as shown in (b). The object is also shaded using Gouraud shading and placed 10 cm behind the hologram. In addition, a digital illustration is arranged 12 cm behind the face object to form the background.

Fig. 12 Polygon mesh of a live face whose shape is measured using a 3D laser scanner (a), and its rendering with CG using texture mapping (b).

The optical reconstruction of Shion is shown in Fig. 13 (Video 2) and Fig. 14 (Video 3). The seams of polygons are no longer perceived because of the implementation of smooth shading. However, there is an occlusion error at the edge of the face object. This is most likely attributable to the use of a silhouette to mask the field behind the object. Since the object shape is complicated, the simple silhouette does not work well for light-shielding.

Fig. 13 Optical reconstruction of the polygon-based high-definition computer hologram named “Shion” using transmitted illumination from a He-Ne laser (Video 2, WMV, 8.8 MB) [URL: http://dx.doi.org/10.1117/1.JEI.21.2.023002.2]. The polygon-modeled object is rendered by texture mapping and Gouraud shading. Photographs (a) and (b) are taken from different viewpoints.

Fig. 14 Optical reconstruction of “Shion” using an ordinary red LED (Video 3, WMV, 10.8 MB) [URL: http://dx.doi.org/10.1117/1.JEI.21.2.023002.3].

5. Conclusion

Simple rendering techniques were proposed for photorealistic reconstruction in polygon-based high-definition computer holography. The polygon-based method has similarities with common techniques used in CG. Exploiting this similarity, smooth shading and texture mapping are applicable to rendering surface objects in almost the same manner as in CG. The created high-definition holograms are composed of billions of pixels and reconstruct fine true spatial 3D images that convey a strong sensation of depth. These 3D images are produced only as still images at this stage, because current video display devices do not have sufficient resolution for optical reconstruction. However, the presented results indicate what 3D images may be realized beyond Super Hi-Vision.

Acknowledgments

The authors thank Prof. Kanaya of Osaka University for his assistance with the 3D scan of live faces. The mesh data for the moai objects were provided courtesy of Yutaka Ohtake via the AIM@SHAPE Shape Repository. This work was supported in part by research grants from the JSPS (KAKENHI, 21500114) and Kansai University (Grant-in-Aid for Joint Research 2011–2012).

References

1. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009). doi:10.1364/AO.48.000H54

2. K. Matsushima and S. Nakahara, “High-definition full-parallax CGHs created by using the polygon-based method and the shifted angular spectrum method,” Proc. SPIE 7619, 761913 (2010). doi:10.1117/12.844606

3. K. Matsushima, M. Nakamura, and S. Nakahara, “Novel techniques introduced into polygon-based high-definition CGHs,” in OSA Topical Meeting on Digital Holography and Three-Dimensional Imaging, Optical Society of America, JMA10 (2010).

4. K. Matsushima et al., “Computational holography: real 3D by fast wave-field rendering in ultra-high resolution,” in Proc. SIGGRAPH Posters ’10, ACM (2010).

5. H. Nishi et al., “New techniques for wave-field rendering of polygon-based high-definition CGHs,” Proc. SPIE 7957, 79571A (2011). doi:10.1117/12.876362

6. DigInfo, “Computer-synthesized holograms—the ultimate in 3D images,” http://www.diginfo.tv/2010/07/22/10-0130-r-en.php.

7. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44(22), 4607–4614 (2005). doi:10.1364/AO.44.004607

8. J. P. Waters, “Holographic image synthesis utilizing theoretical methods,” Appl. Phys. Lett. 9(11), 405–407 (1966). doi:10.1063/1.1754630

9. A. D. Stein, Z. Wang, and J. J. S. Leigh, “Computer-generated holograms: a simplified ray-tracing approach,” Comput. Phys. 6(4), 389–392 (1992). doi:10.1063/1.168429

10. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). doi:10.1117/12.133376

11. A. Ritter et al., “Hardware-based rendering of full-parallax synthetic holograms,” Appl. Opt. 38(8), 1364–1369 (1999). doi:10.1364/AO.38.001364

12. K. Matsushima and M. Takai, “Recurrence formulas for fast creation of synthetic three-dimensional holograms,” Appl. Opt. 39(35), 6587–6594 (2000). doi:10.1364/AO.39.006587

13. H. Yoshikawa, S. Iwase, and T. Oneda, “Fast computation of Fresnel holograms employing difference,” Proc. SPIE 3956, 48–55 (2000). doi:10.1117/12.380022

14. N. Masuda et al., “Computer generated holography using a graphics processing unit,” Opt. Express 14(2), 603–608 (2006). doi:10.1364/OPEX.14.000603

15. Y. Ichihashi et al., “HORN-6 special-purpose clustered computing system for electroholography,” Opt. Express 17(16), 13895–13903 (2009). doi:10.1364/OE.17.013895

16. L. Ahrenberg et al., “Computer generated holograms from three dimensional meshes using an analytic light transport model,” Appl. Opt. 47(10), 1567–1574 (2008). doi:10.1364/AO.47.001567

17. H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. 47(19), D117–D127 (2008). doi:10.1364/AO.47.00D117

18. K. Matsushima and A. Kondoh, “A wave-optical algorithm for hidden-surface removal in digitally synthetic full-parallax holograms for three-dimensional objects,” Proc. SPIE 5290, 90–97 (2004). doi:10.1117/12.526747

19. A. Kondoh and K. Matsushima, “Hidden surface removal in full-parallax CGHs by silhouette approximation,” Syst. Comput. Jpn. 38(6), 53–61 (2007).

20. M. Janda, I. Hanák, and L. Onural, “Hologram synthesis for photorealistic reconstruction,” J. Opt. Soc. Am. A 25(12), 3083–3096 (2008). doi:10.1364/JOSAA.25.003083

21. H. Nishi, K. Matsushima, and S. Nakahara, “A novel method for rendering specular and smooth surfaces in polygon-based high-definition CGH,” in OSA Topical Meeting on Digital Holography and Three-Dimensional Imaging 2011, Optical Society of America, JDWC29 (2011).

22. H. Nishi, K. Matsushima, and S. Nakahara, “Rendering of specular surfaces in polygon-based computer-generated holograms,” Appl. Opt. 50(34), H245–H252 (2011). doi:10.1364/AO.50.00H245

23. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003). doi:10.1364/JOSAA.20.001755

24. K. Matsushima, “Formulation of the rotational transformation of wave fields and their application to digital holography,” Appl. Opt. 47(19), D110–D116 (2008). doi:10.1364/AO.47.00D110

25. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., Chapter 3.10, McGraw-Hill, New York (1996).

26. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17(22), 19662–19673 (2009). doi:10.1364/OE.17.019662

27. R. P. Muffoletto, J. M. Tyler, and J. E. Tohline, “Shifted Fresnel diffraction for computational holography,” Opt. Express 15(9), 5631–5640 (2007). doi:10.1364/OE.15.005631

28. K. Matsushima, “Shifted angular spectrum method for off-axis numerical propagation,” Opt. Express 18(17), 18453–18463 (2010). doi:10.1364/OE.18.018453

29. K. Matsushima and K. Murakami, “Numerical image formation and their application to digital holography and computer holography,” to be submitted to Opt. Express.

Biography

Kyoji Matsushima received his BE, ME, and PhD degrees in applied physics from Osaka City University, Japan. He joined the Department of Electrical Engineering and Computer Science at Kansai University as a research assistant in 1990. He is currently a professor in the Department of Electrical and Electronic Engineering at the same university. His research interests include 3D imaging based on computer-generated holograms and digital holography, and numerical simulations in wave optics.

Hirohito Nishi received his BE in electrical engineering and computer science and his ME in electrical and electronic engineering from Kansai University, where he is currently a graduate student. His current interests include 3D imaging based on computer-generated holograms.

Sumio Nakahara is an associate professor in the Department of Mechanical Engineering at Kansai University, Japan. He received his PhD from Osaka University in 1987, having joined the Department of Mechanical Engineering at Kansai University as a research assistant in 1974. He held an adjunct professor position at Washington State University, Pullman, Washington, in 1993–1994. His current research interests include laser direct-write lithography for computer-generated holograms, laser micro-processing, and MEMS technology.

Kyoji Matsushima, Hirohito Nishi, Sumio Nakahara, "Simple wave-field rendering for photorealistic reconstruction in polygon-based high-definition computer holography," Journal of Electronic Imaging 21(2), 023002 (26 April 2012). http://dx.doi.org/10.1117/1.JEI.21.2.023002