Three-dimensional (3-D) imaging is an emerging differentiator that provides consumers with more realistic and immersive experiences in user interfaces, games, 3-D virtual reality, and 3-D displays. It captures depth information (the distance from the camera to the object) together with the conventional color image, so that the full information about real objects that human eyes experience can be captured, recorded, and reproduced. So far, stereo-type two-lens 3-D cameras capturing two separate color images have been introduced in the market, especially for stereo-vision displays. However, 3-D content will eventually expand to multiview and volumetric views, so-called realistic 3-D content. To capture a realistic 3-D scene, depth information and a high-definition color image should be captured simultaneously so that views from arbitrary directions can be generated, as with a scene formed by real objects.
Depth-capturing devices have been developed for games, industry, automobiles, and military applications.2–13 Among the existing depth-capturing technologies, the structured light method (recently well known through Kinect)2,3 and TOF (time-of-flight) sensors based on silicon image sensor technology [charge-coupled device (CCD), complementary metal oxide semiconductor (CMOS) image sensor (CIS)]4–9 are commercialized or close to market release. The structured light method basically utilizes the well-known triangulation principle: depth is calculated by analyzing the image of a particular pattern projected onto the object. Because the projector is placed a certain distance apart from the image sensor, depth information can be extracted from the captured pattern image by triangulation.2,3 The advantage of this method is that it can be realized at relatively low cost. Its disadvantage is that it needs a certain baseline, normally several centimeters, which makes the overall system larger in the lateral direction than a normal one-lens camera. TOF sensors4–9 have a small form factor: they use infrared (IR) light sources around the imaging lens, so they show camera-like form factors. Their disadvantage compared to pattern projection is that a complex pixel structure is needed, such as a single-photon detector4 or phase-demodulation pixel structure,5–9 which is relatively expensive and delivers a low resolution compared with standard image sensors. The depth image resolution (i.e., the number of pixels in the depth image) obtained by the above two technologies has been limited to video graphics array (VGA) because of either the limitation of the pattern (in structured light methods) or the complexity of the depth pixel structure (in TOF sensors).
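The triangulation relation behind the structured light method can be sketched numerically. The focal length, baseline, and disparity values below are illustrative assumptions, not figures for any specific sensor.

```python
# Sketch of the triangulation principle used in structured-light depth
# sensing: depth z = f * b / d, with focal length f (in pixels),
# projector-sensor baseline b (in meters), and pattern disparity d
# (in pixels). All numbers are illustrative.
def triangulate_depth(focal_px, baseline_m, disparity_px):
    """Return the depth (m) implied by an observed pattern disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 7.5-cm baseline, 600-px focal length, and 18-px disparity -> 2.5 m.
z = triangulate_depth(600.0, 0.075, 18.0)
print(round(z, 2))  # 2.5
```

The inverse dependence on disparity is why the baseline cannot shrink arbitrarily: a smaller baseline compresses the disparity range and degrades depth precision, which is the lateral-size disadvantage noted above.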
Among other TOF technologies, high-speed modulation has been developed for gesture recognition and professional studio applications of depth capturing.10–13 For example, a depth image with resolution up to high definition (HD) can be obtained utilizing high-speed light modulation with an image intensifier combined with an HD standard image sensor, but the image intensifier is very expensive and bulky.11,12
This work’s approach stems from the high-speed modulation method10–13: a depth image with up to full high definition (FHD) resolution is obtained utilizing a suggested novel image modulator that has a small form factor and is easily made by mass-production processes. A TOF sensor detects the phase delay of the reflected light relative to the emitted light caused by the time of flight of the modulated light. For this purpose, the modulation speed should be sufficiently high, for example 20 MHz, to detect the phase delay of the modulated light traveling up to 15 m (the 20-MHz amplitude modulation of light makes a wavelength of 15 m). In this example, depths of 0 to 7.5 m (half the total travel distance of the light) can be detected with the full phase range of 0 to 360 deg, which maximizes the resulting depth accuracy. Otherwise, if we use 1-kHz low-speed modulation, for example, the depth range of 0 to 7.5 m maps to a partial phase range of only 0 to 0.018 deg, which results in low depth accuracy. The relevant detailed TOF principle is explained in the next section. Such high-speed image modulation is definitely difficult to achieve with conventional image-modulating devices such as liquid crystal and mechanical camera shutters. In this paper, a novel TOF method using a micro-opto-electro-mechanical systems (MOEMS)-based high-speed light modulator, the so-called optical shutter, is presented for high-resolution depth image capturing. A novel multilayered film structure is designed and fabricated to realize 20-MHz light modulation for TOF operation.
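The relation between modulation frequency, unambiguous depth range, and phase can be checked with a short sketch, using the nominal speed of light (3 × 10^8 m/s) as in the text:

```python
# Sketch: how the modulation frequency maps depth to phase in a TOF
# sensor. Depths up to c/(2*f_mod) span the full 0-360 deg phase range.
C = 3.0e8  # nominal speed of light, m/s

def unambiguous_range_m(f_mod_hz):
    """Maximum depth resolvable without phase wrap-around."""
    return C / (2.0 * f_mod_hz)

def phase_deg(depth_m, f_mod_hz):
    """Phase delay of the reflected modulation envelope for a depth."""
    round_trip = 2.0 * depth_m
    wavelength = C / f_mod_hz  # wavelength of the modulation envelope
    return 360.0 * round_trip / wavelength

print(unambiguous_range_m(20e6))         # 7.5 (m) at 20 MHz
print(round(phase_deg(7.5, 20e6), 1))    # 360.0 (deg): full range
print(round(phase_deg(7.5, 1e3), 3))     # 0.018 (deg) at 1 kHz
```

The same 7.5-m depth consumes the whole phase circle at 20 MHz but only a sliver of it at 1 kHz, which is why the high modulation speed directly determines depth accuracy.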
For commercialization, the 3-D image (color plus depth) should be easily captured by a camera-like system with high image quality and an affordable price. For this purpose, a one-lens/two-sensor system architecture is prototyped in this work for simultaneous capturing of a 14-Mp color image and an FHD depth image. The optical shutter is positioned in front of a standard CMOS image sensor to modulate the incoming IR images for depth image extraction. The optical shutter design, fabrication, characterization, 3-D camera prototype, and its image test are presented.
Depth capturing is based on the TOF principle as depicted in Fig. 1. The 3-D camera has an IR light source (e.g., 850-nm wavelength) with sinusoidal amplitude modulation (e.g., 20 MHz). It illuminates an object, and the reflected IR light comes back to the camera imaging lens. Because the IR light travels twice the distance between the object and the camera (the so-called depth, d), there is a time delay of the reflected light relative to the illuminated light (the so-called time of flight, t_TOF) such that t_TOF = 2d/c, where c is the speed of light.14–16 A detailed description of the entire TOF process is abbreviated here: the reflected IR image is modulated by the optical shutter with the same modulation frequency (i.e., 20 MHz) before being captured by the CMOS image sensor. By applying additionally controlled phase shifts, for example (0, 90, 180, and 270 deg), between the IR light source and the optical shutter driving signal, we can get four different IR images (I0, I90, I180, and I270) sequentially. The phase delay due to the TOF at each pixel can be identified by using the four sequentially captured IR images.
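The four-phase extraction described above can be sketched in the textbook homodyne form; the exact sign conventions of the paper's own equation may differ, so this is a hedged illustration, not the paper's Eq. (4) verbatim.

```python
import math

# Hedged sketch of four-phase homodyne TOF depth extraction: the four
# shutter-modulated IR images I0..I270 yield the phase delay via an
# arctangent, and the phase maps linearly onto depth.
C = 3.0e8  # nominal speed of light, m/s

def depth_from_four_phases(i0, i90, i180, i270, f_mod_hz=20e6):
    phase = math.atan2(i90 - i270, i0 - i180)  # phase delay, radians
    phase %= 2.0 * math.pi                     # fold into [0, 2*pi)
    return (C / (2.0 * f_mod_hz)) * phase / (2.0 * math.pi)

# Simulate one pixel 2.0 m away: sample the modulated return at the
# four controlled shutter phase shifts (0, 90, 180, 270 deg).
true_d = 2.0
phi = 2.0 * math.pi * (2.0 * true_d) / (C / 20e6)
samples = [1.0 + 0.5 * math.cos(phi - s)
           for s in (0.0, math.pi / 2, math.pi, 1.5 * math.pi)]
print(round(depth_from_four_phases(*samples), 3))  # 2.0
```

Note that any constant offset common to the four samples cancels in the two subtractions, a property the sunlight-suppression discussion later relies on.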
It is notable that the resolution of the depth image (i.e., the number of pixels) is determined by that of the CMOS image sensor, which can have more than FHD resolution in current image sensor technology.
High-Speed Optical Shutter
To enable depth capturing, the optical shutter should modulate the incoming IR image at a 20-MHz on-off speed, as explained in the previous section. Conventional image shutters, such as mechanical or liquid crystal shutters, cannot reach a 20-MHz modulation speed, since they rely on moving components. To achieve this extraordinarily high speed, the optical shutter is composed of nonmoving solid-state multilayer films, which is the novel concept of this work. The core mechanism of the optical shutter is controllable electro-absorption in a multiple quantum well (MQW) combined with Fabry–Perot optical resonance.17–19 Figure 2 shows its complete layer structure: from the top side, the optical shutter consists of a p-doped electrode; a p-doped distributed Bragg reflector (DBR); an intrinsic MQW; an n-doped DBR; and an n-doped electrode. The shutter device is optically a Fabry–Perot narrow bandpass filter whose center wavelength is designed to match the wavelength of the IR light source of the 3-D camera (e.g., 850 nm).
The upper p-DBR and lower n-DBR mutually work as a pair of resonating mirrors, and the middle i-MQW works as a resonance cavity whose optical thickness is a multiple of half of the center wavelength (850 nm).17–20 A control voltage is applied across the p- and n-electrodes with backward bias, so that the light absorption of the i-MQW region is controlled by the well-known quantum-confined Stark effect (QCSE),21,22 as shown in Fig. 3. As the control voltage increases, the maximum absorption peak, the so-called exciton peak, moves from 837 nm (at zero electric field) to 850 nm (at the maximum applied field). Consequently, at the 850-nm center wavelength of the optical shutter, the transmittance of the input IR image can be varied by controlling the applied voltage, as the simulation in Fig. 4 shows.
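A toy model can illustrate the QCSE mechanism: a Lorentzian exciton absorption line whose center red-shifts with the applied field from 837 nm toward the 850-nm resonance. The peak position, linewidth, and strength below are assumed values for illustration only, and the cavity enhancement is ignored; this is not the paper's design simulation.

```python
import math

# Toy model of QCSE-based modulation: the exciton absorption peak is a
# Lorentzian whose center shifts with the normalized applied field
# (0 = no field, 1 = maximum field). Transmittance at 850 nm drops as
# the peak moves onto the resonance. All parameters are assumptions.
def exciton_alpha(wavelength_nm, field_norm, peak0=837.0, shift=13.0,
                  width=6.0, strength=1.9):
    """Absorption coefficient (arbitrary units) at a wavelength."""
    center = peak0 + shift * field_norm
    return strength / (1.0 + ((wavelength_nm - center) / width) ** 2)

def transmittance_850(field_norm):
    """Single-pass Beer-Lambert transmittance at 850 nm (no cavity)."""
    return math.exp(-exciton_alpha(850.0, field_norm))

print(round(transmittance_850(0.0), 2))  # 0.72: peak far from 850 nm
print(round(transmittance_850(1.0), 2))  # 0.15: peak on resonance
```

The toy model reproduces the qualitative behavior of Figs. 3 and 4 (high transmittance at zero field, low transmittance at maximum field); the actual device values come from the full multilayer simulation described next.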
In the design process, the design parameters are the material composition, the thickness of each layer, and the number of layers, chosen to maximize the transmittance variation at the center wavelength of the optical shutter (850 nm). Figures 2 and 4 show the design result with 52% transmittance variation: when the applied electric field is zero, there is only small light absorption at 850 nm, as shown in Fig. 3, and the transmittance of the optical shutter becomes maximum (67%); when the applied electric field is at its maximum, there is large absorption at 850 nm and the transmittance of the shutter becomes minimum (15%). An 850-nm IR image passing through the optical shutter is thus modulated with 52% transmittance variation.
To achieve high speed and uniform control of transmittance over the transmitting area, the optical shutter was designed with a special p-electrode pattern that realizes low sheet resistance. Also, the whole device is divided into electrically separated cells to reduce the capacitance of each unit cell. Each cell is driven by an individual external voltage source.23,24 Figure 5 shows the device structure of the shutter with individually controllable cells. The cells are separated by 10-μm-wide, 4-μm-deep trenches. The p-electrode (metal) should have small resistance over the top surface (i.e., more metal is needed) together with a large fill factor, i.e., the portion of the light-transmissive area over the entire IR-receiving window of the shutter needs to be large (i.e., less metal is needed). This is a trade-off, since metal electrodes cast shadows. Design optimization was therefore performed to determine the shape of the p-electrode.
For this, an electro-optic coupling analysis with 3-D mesh modeling was performed to find the optimum shape of the p-electrode, as shown in Fig. 6. The design parameter was the number of fingers of the fishbone-shaped metal electrode. By computing the fill factor and the modulation cutoff frequency of the cell structure, we obtain the optimum result shown in Table 1. As the number of fingers increases, the resistance decreases, so the cutoff frequency increases, whereas the fill factor of the active window decreases. As a result, 10 fingers was chosen as the optimum design because it shows a relatively uniform speed of 19.9 to 20.6 MHz over the entire cell with a good fill factor of 95%.
Design optimization result of the p-metal electrode. The design parameter is the number of fingers of the p-electrode. The fingers are metal, so increasing their number decreases the overall sheet resistance of the device and, in turn, increases the cutoff frequency, which is inversely proportional to the RC time constant.
Fingers (n) | Fill factor (%) | Cutoff frequency (MHz)
3 | 97 | 6.3 to 21.4
10 | 95 | 19.9 to 20.6
20 | 90 | 22.2 to 22.7
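The finger-count trade-off in Table 1 can be sketched with a simple lumped RC model: more fingers reduce the electrode resistance (raising the cutoff frequency f_c = 1/(2πRC)) but shade more of the active window. The capacitance, resistance constants, and shading rate below are assumed values chosen only to reproduce the qualitative trend, not measured device parameters.

```python
import math

# Illustrative lumped-element model of the electrode trade-off: the
# cell is an RC circuit whose resistance has a fixed contact term plus
# a finger term that falls as fingers are added in parallel. All
# constants are assumptions for illustration.
CELL_CAP_F = 40e-12      # assumed cell capacitance
CONTACT_OHM = 150.0      # assumed fixed contact/sheet resistance
FINGER_OHM = 460.0       # assumed per-finger-network constant
FINGER_SHADE = 0.005     # assumed fill-factor loss per finger

def cutoff_mhz(n_fingers):
    r = CONTACT_OHM + FINGER_OHM / n_fingers  # fingers act in parallel
    return 1.0 / (2.0 * math.pi * r * CELL_CAP_F) / 1e6

def fill_factor(n_fingers):
    return 1.0 - FINGER_SHADE * n_fingers

for n in (3, 10, 20):
    print(n, round(fill_factor(n), 3), round(cutoff_mhz(n), 1))
```

With these assumed constants the model shows the same saturating speed gain and steadily falling fill factor as Table 1, making 10 fingers a natural knee point.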
The fabrication process has three steps, as shown in Fig. 7. First, the multilayer films are epitaxially grown on a GaAs substrate by metal organic chemical vapor deposition (MOCVD); the thickness and composition of each layer were precisely controlled to obtain the 837-nm exciton peak and the 850-nm Fabry–Perot resonance peak. Second, the cells are isolated by dry etching of trenches, and the p-electrode metal is patterned. Third, the GaAs substrate at the active window area, through which the IR image is transmitted, is opened by wet etching of the substrate. For accurate control of the substrate etching thickness, an InGaP etch-stop layer between the GaAs substrate and the epitaxial multilayer films is utilized.24 As a result of the substrate etching, the multilayer film constitutes an electrically controllable IR-transmissive membrane (the so-called active window). Figure 8 shows the fabricated device. The active window of the multilayer film structure transmits and modulates IR images on their way to the grayscale CMOS image sensor (CIS). The entire device is divided into 56 cells, each with a fishbone metal width of 10 μm and 10 fingers. The device fill factor is 95%.
The following are four characterization results of the optical shutter, which is a large-area p–i–n diode device operated under backward bias. First, for stable operation of the shutter, the breakdown voltage should be sufficiently larger than the operating voltage range. Figure 9 shows the current–voltage (I–V) curve measured to characterize the breakdown voltage. The average breakdown voltage was measured, and the operating voltage range (down to 0 V) lies within the nonbreakdown range.
Second, to get sufficient IR intensity and a good signal-to-noise ratio, the amount of modulation of the IR light should be sufficiently large. This is characterized by measuring the difference between the maximum and minimum transmittances, the so-called transmittance variation. When depth is calculated as in Eq. (4), the transmittance variation, which determines the difference between the intensities of the IR images, directly influences the accuracy of the depth. Figure 10 shows the measured transmittance spectrum of the optical shutter for different control voltages. A maximum transmittance of 65% (at 0 V) and a minimum of 14% (at 9.3 V) were measured, giving a transmittance variation of 51% (target: 50%). It is notable that the simulation (52% transmittance variation in Fig. 4) predicted the measurement (Fig. 10) very well.
Third, the 20-MHz high-speed modulation of the optical shutter was evaluated by its electro-optic frequency response, in which the transmitted IR light is measured under a frequency-varying sinusoidal electric input. Figure 11 shows the frequency response of the optical shutter. A cutoff frequency (−3-dB attenuation) of 20.3 MHz was achieved (target: 20 MHz), which is close to the simulation prediction summarized in Table 1.
Fourth, the spatial resolution of the IR image transmitted through the optical shutter should be properly preserved to get a high-resolution depth image up to FHD (1920 × 1080, 2 Mp). Figure 12 shows the ISO 12233 chart used to evaluate the spatial resolution of the optical shutter. The IR image resolution is preserved up to 14 Mp whether or not the optical shutter is placed in front of the 14-Mp CMOS image sensor (the target is FHD, 2 Mp). Thus, the optical shutter does not degrade the spatial resolution of the optical system. However, the experiments showed that the optical shutter should be placed 1 to 3 mm away from the focal plane of the CMOS image sensor to blur out the shadow image of the metal electrodes on the photosensitive area of the image sensor.
3-D Camera System
For commercialization, 3-D images should be easily captured by a camera-like system with high image quality and an affordable price. For this purpose, a one-lens/two-sensor system architecture, shown in Fig. 13, was designed and prototyped for simultaneous capturing of 14-Mp color and FHD depth images. The 3-D capturing system is based on the TOF scheme. The 3-D camera system consists of illumination and imaging modules: the illumination module is composed of 850-nm IR laser diode (LD) sources with 20-MHz amplitude modulation and collimating optics for efficient IR beam shaping.25,26 The imaging module consists of an imaging lens set, a splitter, a depth channel [i.e., the optical shutter plus a black-and-white (BW) CMOS image sensor (CIS)], and a color channel (i.e., an RGB CMOS image sensor). The incoming color and IR images are separated by the splitter according to their wavelength bands, i.e., the visible and IR bands, then redirected toward the color channel and the depth channel simultaneously, as shown in Fig. 13. The transmittance spectrum of the fabricated splitter is plotted in Fig. 14. The visible band (400 to 700 nm) is reflected with a reflectance of about 98% (i.e., transmittance below 2%), and the IR band around 850 nm is transmitted with a transmittance of 99%. Since the wavelength-division efficiency of the splitter is about 98% to 99% by the filter design shown in Fig. 14, very high separation efficiency of the color and depth images can be achieved without a critical loss of light energy in either band.
In the depth channel, the optical shutter modulates the IR images at 20 MHz, and the resulting modulated image is recorded by the grayscale BW CIS. The depth image is calculated from the modulated images based on the novel homodyne-mixing technique14–16 explained in Sec. 2. It is notable that the resolution of the depth image can exceed FHD, since the system utilizes the full resolution of the standard grayscale CIS (FHD to 14 Mp). In the color channel, the color image is simultaneously captured by a standard color CIS. The color and depth images are processed by a unified processor, named the 3-D image signal processor, placed at the back end of the system. As a result, FHD depth and 14-Mp color images are captured and processed simultaneously.
The 3-D camera architecture of Fig. 13 was prototyped using a commercial lens/body, off-the-shelf CIS sets, driver electronics, and IR sources, plus the developed optical shutter device. Figure 15 shows the structure of the prototype. All components are integrated in a Samsung NX-body setup, and 14-Mp color and FHD IR images are captured simultaneously with the one-lens/two-sensor setup. The color and IR images are captured and displayed on two LCD displays.
Depth Image Evaluation
The optical shutter approach was evaluated by capturing color and depth images of a series of test objects (Julian, Venus, and flowers, placed 2 to 3 m away from the 3-D camera) with the 3-D camera prototype, as shown in Fig. 16. An array of 850-nm IR sources with a total optical output of 500 mW and 20-MHz amplitude modulation illuminates the objects. FHD IR images are captured via modulation by the optical shutter. The depth image is extracted from the captured IR images under the suggested homodyne-mixing scheme14–16 described in Eq. (4). The resulting depth image, shown in Fig. 17, successfully has FHD resolution (1920 × 1080), whereas competing technologies2–10 generate at most VGA depth images. The depth error (standard deviation) on the Julian face is 0.44 cm at a 2-m distance. The bit resolution of the BW CIS and color CIS is 10 bit (1024 grayscale steps).
One of the technical challenges in TOF-based depth-capturing technology is image stability under sunlight, i.e., in outdoor environments. The TOF principle works when sufficient IR illumination reaches the captured objects, but under strong sunlight, the captured IR image has a relatively small signal (i.e., IR illumination) to noise (i.e., sunlight) ratio. Therefore, the suppression of sunlight is critical for stable capturing of depth images in an outdoor environment. To this end, three novel sunlight-suppression techniques have been applied in this work. First (wavelength-domain approach), the optical shutter itself is an 850-nm monochrome filter, as shown in the transmittance spectrum of Fig. 10, so that the sunlight components outside the band are filtered out whereas most of the light energy of the IR source is transmitted. Second (frequency-domain approach), since sunlight is constant in time (DC) compared with the 20-MHz modulating IR signal, its influence on the modulated images (I0, I90, I180, and I270) is almost equal, so the sunlight effect is canceled by subtracting IR images as in Eq. (4) under the novel homodyne-mixing scheme.14–16
Third (time-domain approach), the IR source illuminates intensively during a short time interval and is turned off for the rest of the time; the optical shutter closes during the IR turn-off interval by synchronizing the optical shutter to the IR source, so sunlight during the turn-off interval is blocked. This approach is called the synchronized burst IR-shutter mode, which benefits from the developed optical shutter's ability to open and close the image plane globally,16 whereas most existing CMOS image sensors apply rolling shutters.
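The frequency-domain cancellation can be demonstrated with a short sketch: a constant ambient (DC) offset added equally to all four phase images drops out of the subtractions in the arctangent, leaving the recovered phase unchanged. The arctangent form and all numeric values are illustrative, not the paper's exact equation.

```python
import math

# Sketch of the frequency-domain sunlight suppression: a DC ambient
# offset common to the four phase images cancels in the subtractions
# of the four-phase arctangent, so the recovered phase is unchanged.
def phase_from_samples(i0, i90, i180, i270):
    return math.atan2(i90 - i270, i0 - i180) % (2.0 * math.pi)

phi_true = 1.2  # radians, arbitrary test phase
clean = [1.0 + 0.5 * math.cos(phi_true - s)
         for s in (0.0, math.pi / 2, math.pi, 1.5 * math.pi)]
sunny = [v + 10.0 for v in clean]  # strong constant sunlight offset

print(round(phase_from_samples(*clean), 4))  # 1.2
print(round(phase_from_samples(*sunny), 4))  # 1.2 (offset canceled)
```

In practice sunlight also adds shot noise, which is why the wavelength-domain and time-domain approaches are still needed alongside this cancellation.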
By applying the above three sunlight-suppression approaches, depth images were captured and compared under a room-light condition (0.3 klux) and a sunlight condition (27 klux, outdoor), as shown in Fig. 18. Normally, the sunlight condition causes overexposure or malfunction in depth cameras based on competing technologies. The depth error in the room-light condition is 0.9% (2.3 cm at 2.5 m), including the noise-reduction image processing. This work's depth image in the sunlight condition remains stable, with a fair depth error of 1.2% (3.1 cm at 2.5 m) compared to the room-light condition.
A novel 20-MHz high-speed image-shuttering device and its application to a 3-D image-capturing system were presented. To achieve the extraordinarily high speed of the image shutter, a solid-state multilayer structure with an electro-absorption mechanism combined with an optical resonance cavity was designed, fabricated, and characterized. The cell structure and electrode shape were optimized with simulation-based modeling of the electro-optical mechanisms. MOEMS-based etching and patterning technology was utilized for device fabrication. As a result, a transmittance variation of 51% and a switching speed of 20 MHz were obtained, as required for time-of-flight operation of the 3-D camera. It is notable that the systematic modeling and simulation of the electro-optic mechanism predicts the real behavior of the optical shutter well.
A one-lens/two-sensor architecture for simultaneous capturing of color and depth images was prototyped using commercially available body/lens sets and CIS components. The suggested optical shutter approach enables capturing of FHD-resolution depth images, the highest resolution among state-of-the-art depth camera technologies. Notably, the depth image is stable under sunlight conditions, which solves a critical technical challenge in the TOF depth-sensing field.
These deliverables can enable 3-D businesses such as 3-D image capturing, user interfaces, and 3-D displays, especially in the camera and display businesses. Graphic multiview generation using the color and depth information is underway as an interface to stereo and multiview 3-D displays. Further optimization of the depth image quality, such as application-specific integrated circuit (ASIC)-based noise cancellation, will be added to the current achievements toward commercialization in the near future.
This paper was previously published as a proceedings paper at SPIE MOEMS and Miniaturized Systems XI, San Francisco, CA, USA, 2013 as “Micro optical system based 3-D imaging for FHD depth image capturing.”1
Y. H. Park et al., “Micro optical system based 3-D imaging for full HD depth image capturing,” Proc. SPIE 8252, 82520X (2012).
L. Xia, C. C. Chen, and J. K. Aggarwal, “Human detection using depth information by Kinect,” in IEEE Comp. Soc. Conf. on Computer Vision and Pattern Recognition Workshops, pp. 15–22, IEEE Computer Society, Colorado Springs, CO (2011).
C. Niclass et al., “A single-photon image sensor with column-level 10-bit time-to-digital converter array,” IEEE J. Solid-State Circuits 43(12), 2977–2989 (2008). http://dx.doi.org/10.1109/JSSC.2008.2006445
T. Oggier et al., “Novel pixel architecture with inherent background suppression for 3-D time-of-flight imaging,” Proc. SPIE 5665, 1–8 (2005). http://dx.doi.org/10.1117/12.586933
D. Stoppa et al., “A range image sensor based on 10-μm lock-in pixels in 0.18-μm CMOS imaging technology,” IEEE J. Solid-State Circuits 46(1), 248–258 (2011). http://dx.doi.org/10.1109/JSSC.2010.2085870
S. J. Kim et al., “A CMOS image sensor based on unified pixel architecture with time-division multiplexing scheme for color and depth image acquisition,” IEEE J. Solid-State Circuits 47(11), 2834–2845 (2012). http://dx.doi.org/10.1109/JSSC.2012.2214179
G. Yahav, G. J. Iddan, and D. Mandelboum, “3-D imaging camera for gaming application,” in Digest of Technical Papers, Int. Conf. on Consumer Electronics (ICCE 2007), Las Vegas, NV, pp. 1–2 (2007).
M. Kawakita et al., “High-definition real-time depth-mapping TV camera: HDTV Axi-Vision camera,” Opt. Express 12(12), 2781–2794 (2004). http://dx.doi.org/10.1364/OPEX.12.002781
A. A. Dorrington et al., “Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera,” Meas. Sci. Technol. 18(9), 2809–2816 (2007). http://dx.doi.org/10.1088/0957-0233/18/9/010
Y. H. Park and J. W. You, “Method and apparatus for calculating a distance between an optical apparatus and object,” KP2010-0005753, U.S. Patent 12/837,814 (2012).
Y. H. Park, J. W. You, and Y. C. Cho, “Three-dimensional image acquisition apparatus and method of extracting depth information in the 3-D image acquisition apparatus,” KP2010-0133720, U.S. Patent 13/160,135 (2011).
Y. H. Park, J. W. You, and H. S. Yoon, “3-D camera with ambient light suppression,” KP2011-0109431, U.S. Patent 13/594,094 (2012).
Y. C. Cho et al., “Optoelectronic shutter, method of operating the same and optical apparatus including the optoelectronic shutter,” KP2009-0049475, U.S. 2010-0308211-A1 (2010).
Y. C. Cho et al., “Optical modulator,” KP2010-0006052, U.S. 2011-0181936-A1 (2011).
Y. C. Cho et al., “Optical modulator using multiple Fabry–Perot resonant modes and apparatus for capturing 3-D image including the optical modulator,” KP2010-0137229, U.S. Patent 13/163,202 (2010).
G. R. Fowles, Introduction to Modern Optics, pp. 86–103, Dover Books, Mineola, NY (1989).
S. L. Chuang, Physics of Optoelectronic Devices, pp. 557–569, Wiley, New York (1995).
K. W. Goossen, J. E. Cunningham, and W. Y. Jan, “Electroabsorption in ultranarrow-barrier GaAs/AlGaAs multiple quantum well modulators,” Appl. Phys. Lett. 64(9), 1071–1073 (1994). http://dx.doi.org/10.1063/1.110935
Y. C. Cho et al., “Optical image modulator and method of manufacturing the same,” KP2010-0122678, U.S. Patent 13/167,486 (2010).
S. H. Lee, C. Y. Park, and J. H. Lee, “Method of manufacturing the optical image modulator,” KP2011-0096984, U.S. Patent 13/531,964 (2012).
Y. H. Park et al., “Illumination optical system and 3-D image acquisition apparatus including the same,” KP2010-0127866, U.S. Patent 13/244,329 (2010).
Y. H. Park et al., “Optical system having integrated illumination and imaging optical systems, and 3-D image acquisition apparatus including the optical system,” KP2010-0127867, U.S. Patent 3/156,789 (2010).
Yong-Hwa Park received BS, MS, and PhD degrees in mechanical engineering from the Korea Advanced Institute of Science and Technology in 1991, 1993, and 1999, respectively. In 1998, he was selected as a future frontier scientist by the Korea Research Foundation. In 2000, he joined the MEMS research group at the University of Colorado at Boulder as a research associate. From 2003 to 2005, he worked in the visual display division of Samsung Electronics Co., Ltd. In 2005, he joined the microsystems laboratory at the Samsung Advanced Institute of Technology as a principal researcher in optical MEMS design and applications to imaging and display systems. His major research activities in the MEMS area include dynamic analysis and design of RF/optical MEMS, and imaging and display systems. He is a member of the Society for Information Display, SPIE, the Society of Automotive Engineers, the Society for Experimental Mechanics, and IEEE. In particular, he has served as a conference chair of MOEMS and Miniaturized Systems at SPIE Photonics West since 2012.
Yong-Chul Cho received MS and PhD degrees in mechanical engineering from the Korea Advanced Institute of Science and Technology in 1986 and 1992, respectively. From 1992 to 1998, he was a member of the research staff at Samsung Electronics Co., Ltd., working on various machine vision systems. In 1999, he joined the microsystems laboratory at the Samsung Advanced Institute of Technology, and has been engaged in research and development of microsystems such as gyroscope sensors, optical scanners, and optical modulators. His major research activities in the MEMS area include micropackaging, reliability analysis, and characterization. He is a member of the Institute of Control, Robotics and Systems.
Jang-Woo You received MS and PhD degrees in mechanical engineering from the Korea Advanced Institute of Science and Technology in 2002 and 2009, respectively. From 2005 to 2006, he joined the Wellman Center for Photomedicine in Boston, MA, as a research fellow to study optical coherence tomography for in vivo human retina imaging. In 2009, he joined the microsystems laboratory at the Samsung Advanced Institute of Technology as a researcher in optical device and system design. His major interest is developing advanced TOF cameras for user interfaces.
Chang-Young Park received MS and PhD degrees in optoelectronics from the Gwangju Institute of Science and Technology in 2004 and 2011, respectively. His research fields are III–V compound semiconductor device epitaxial growth and characterization, including laser diodes, photodetectors, optical modulators, tandem solar cells, quantum dots, and nanorods. In 2011, he joined the microsystems laboratory at the Samsung Advanced Institute of Technology, where his major research activities are high-speed optical modulator (optical shutter) design, simulation, fabrication, and characterization.
Hee-Sun Yoon received MS and PhD degrees in mechatronics from the Gwangju Institute of Science and Technology in 2006 and 2011, respectively. In 2011, he joined the microsystems laboratory at the Samsung Advanced Institute of Technology, and has been engaged in research and development of a 3-D camera system. His major research activities are the design and analysis of imaging systems, and electric circuit design and characterization.
Sang-Hun Lee received the BS, MS, and PhD degrees in electrical engineering from Seoul National University in 1991, 1993, and 1998, respectively. In 1998, he joined Samsung Electronics, Co. and Samsung Advanced Institute of Technology, where he has studied the field of optical, inertial, RF, Bio MEMS. His current research interests are design and fabrication of micro/nano devices, especially sensors and optical devices.
Jong-Oh Kwon received BS and MS degrees in ceramic engineering from Yonsei University in 1995 and 1997, respectively. From 1997 to 2005, he was a packaging engineer at Samsung Electro-Mechanics Company, working in the wireless device division. In February 2005, he joined the microsystems laboratory at the Samsung Advanced Institute of Technology, and has been engaged in research and development of microsystems packaging, such as wafer-level packaging of RF devices. His major research activities in the packaging area include chip-scale package, wafer-level package, and system-in-package design, processing, and evaluation.
Seung-Wan Lee received an MS degree in electro-material engineering from Kyeong-Buk University in 1994. He has worked for Samsung Electronics Co., Ltd. since 1986. Until 2000, he helped develop new products in fiber optics, including specialty fibers and optical passive devices. At the Samsung Advanced Institute of Technology, he has been engaged in research and development of microsystems such as a MEMS camera for mobile phones and a fluxgate sensor. Since 2000, he has been in charge of optical system design for medical devices.
Byung Hoon Na received his MS degree in Information and Communications from Gwangju Institute of Science and Technology (GIST), Gwangju, Korea, in 2008. Since 2008, he has been pursuing his PhD degree at the School of Information and Mechatronics, GIST, and is a member of the optoelectronics laboratory, GIST. He is also working with Samsung Advanced Institute of Technology. His current research interests include electro-optic modulator, molecular beam epitaxial growth and their applications in optoelectronic devices.
Gun Wu Ju received a BS degree in electrical engineering from Kyungpook National University in 2009. He is now pursuing a PhD degree at the Gwangju Institute of Science and Technology (GIST) and is a member of the optoelectronics laboratory at GIST. His research interests include the design, growth, and characterization of electro-absorption modulators for optical shutters, and RCEPDs and VCSELs for optical sensor applications.
Hee Ju Choi received a BS degree in semiconductor at Chonbuk national university in 2009, and an MS degree in photonics and applied physics from Gwangju Institute of Science and Technology (GIST) in 2011. Since 2011, he has been with the department of photonics and applied physics at GIST, where he is pursuing a PhD degree and is a member of the optoelectronics laboratory at GIST. His research interests include fabrication of semiconductor devices, nanotechnology, and biosensors.
Yong Tak Lee received a BS degree from Seoul National University, Korea, in 1975, and MS and PhD degrees (with honors) from the Korea Advanced Institute of Science and Technology in 1979 and 1990, respectively, all in applied physics. He joined ETRI in 1979 and headed the optoelectronics section from 1987 to 1990 and the compound semiconductor department from 1991 to 1992. He was a visiting scientist with the Department of Electronic Engineering, University of Tokyo, Japan, from 1986 to 1987, and with the microelectronics laboratory, University of Illinois at Urbana-Champaign, USA, from 1993 to 1994. Since 1994, he has been a professor with the School of Information and Communications, Gwangju Institute of Science and Technology, Gwangju, Korea, where he heads the optoelectronics laboratory. He has also been an adjunct professor with Edith Cowan University, Australia, since 2007 and Southeast University, China, since 2009. His current research interests include semiconductor laser diodes, photodetectors, electro-optic modulators, imaging sensors, opto-VLSI, SOAs, LEDs, solar cells, integrated photonic circuits, nanophotonic devices, biophotonics, chip-to-chip optical interconnects, and micro beam projection systems. He has authored or co-authored more than 103 patents (granted or applied for), 173 papers in SCI journals, and 288 conference proceedings. Professor Lee is a recipient of many awards from academic societies, including a top 100 national R&D performance award in 2011, a best poster presentation award at IUMRS-ICEM in 2010, the minister’s award for research innovation at NANO-KOREA 2010, the presidential medal in science and technology, Republic of Korea, in 2009, a plaque of recognition for distinguished service from the Optical Society of Korea in 1997, a presidential citation for distinguished researcher, Republic of Korea, in 1986, a minister’s citation for distinguished researcher from the Ministry of Communications in 1985, and the best paper of the year award from ETRI in 1981.