<p>Our structured illumination microscopy (SIM) system is based on a spatial light modulator (SLM) rather than an illumination mask, so no linear translation stage is required. With an SLM, the period of the one-dimensional grid, which determines the optical sectioning strength, can be designed freely, and three-dimensional data can be acquired rapidly. We propose an optimization of SLM-based SIM. Previous studies primarily varied the magnification with a high-numerical-aperture objective to optimize the axial response. Maximum optical sectioning strength can be obtained by designing a grid pattern with an appropriately high spatial frequency, and the entire frequency spectrum of the sample can be covered uniformly by rotating the grid pattern. We have successfully optimized SIM with such a grid and covered the frequency spectrum by rotating the grid pattern in multiple orientations.</p>
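As an illustration of the grid design described above, the following Python sketch generates binary one-dimensional grid patterns for display on an SLM at several orientations and phase steps. The pattern size, period, angles, and phase steps are hypothetical choices for illustration, not parameters from the study:

```python
import numpy as np

def sim_grid_pattern(size, period_px, angle_deg, phase=0.0):
    """Binary one-dimensional grid pattern for display on an SLM.

    size      -- pattern width/height in pixels
    period_px -- grid period in SLM pixels (sets the sectioning strength)
    angle_deg -- grid orientation, rotated to cover the frequency spectrum
    phase     -- lateral phase shift of the grid (radians)
    """
    y, x = np.mgrid[0:size, 0:size]
    theta = np.deg2rad(angle_deg)
    # Coordinate along the grid's modulation direction after rotation.
    u = x * np.cos(theta) + y * np.sin(theta)
    # Sinusoidal modulation thresholded to a binary (on/off) grid.
    return (np.cos(2 * np.pi * u / period_px + phase) > 0).astype(np.uint8)

# Three orientations (0, 60, 120 degrees) to cover the frequency spectrum,
# with three phase steps per orientation for optical-section reconstruction.
patterns = [sim_grid_pattern(512, 16, a, p)
            for a in (0, 60, 120)
            for p in (0, 2 * np.pi / 3, 4 * np.pi / 3)]
```

Because the SLM is reprogrammed rather than translated, changing `period_px` or `angle_deg` requires no mechanical motion, which is what enables the rapid three-dimensional acquisition mentioned above.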
Animals see the world through their eyes. Although plants have no visual organs, they are nevertheless receptive to their visual environment; the exact mechanism of vision in plants has yet to be determined. Vision is an important sense for plants because they store energy from light: light is not only the source of their growth but also a vector of information. Photosynthesis is a typical phenomenon in which light induces a response from plants; it is the process that converts light energy into chemical energy and produces oxygen. In this study, we have emulated three-dimensional vision in plants by artificial photosynthesis. Instead of using real plant cells, we exploited the artificial photosynthetic properties of a photoelectrochemical (PEC) cell. A silicon-based PEC cell sensitive to the red/far-red region (600–850 nm) was used as a single-pixel sensor, and a mechanical scanner was used to emulate a two-dimensional sensor array with this single-pixel sensor. We successfully obtained images by measuring the photocurrents generated by photosynthetic water splitting.
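The idea of emulating a two-dimensional sensor array by raster-scanning a single-pixel detector can be sketched as follows. This is a minimal numerical illustration, assuming a linear photocurrent response with a hypothetical responsivity; it does not model the actual PEC cell used in the study:

```python
import numpy as np

def raster_scan_image(scene, measure):
    """Emulate a 2-D sensor array with a single-pixel detector.

    scene   -- 2-D array of local light intensities at each scan position
    measure -- function mapping intensity -> detector signal (standing in
               for one photocurrent measurement of the single-pixel cell)
    """
    h, w = scene.shape
    image = np.zeros((h, w))
    for i in range(h):          # mechanical scanner moves row by row
        for j in range(w):      # and column by column within each row
            image[i, j] = measure(scene[i, j])
    return image

# Toy scene and a linear photocurrent response (hypothetical responsivity).
scene = np.random.default_rng(0).random((8, 8))
photocurrent = raster_scan_image(scene, lambda intensity: 0.5 * intensity)
```

Each position of the scanner contributes one pixel, so the image resolution is set by the number of scan positions rather than by the detector itself.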
The properties of photoelectrochemical (PEC) cells have mainly been investigated with a focus on PEC hydrogen production. Because an anodic current begins to flow when a PEC cell is under illumination, and this current varies as a function of light intensity, PEC cells can also serve as photodetectors. Unlike other image sensors, PEC cells can detect light while immersed in solution owing to their PEC properties. To verify the feasibility of using a silicon-based PEC cell as an image sensor, we demonstrated a single-pixel imaging system based on compressive sensing. Compressive sensing is an algorithm designed to recover signals from a small number of measurements, assuming that the signal of interest has a sparse representation. In this study, we have demonstrated multispectral imaging using a silicon-based PEC cell with compressive sensing. Images were obtained in the three primary colors (red, green, and blue). Owing to its high photoresponse, its stability, and the unique characteristic that it can operate underwater, the silicon-based PEC cell is expected to serve as a photodetector for various applications in the future. We believe this study provides a useful example of advanced development of optoelectronic systems based on PEC cells.
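The compressive-sensing principle above can be illustrated with a generic numerical sketch: a sparse signal is sampled with fewer random measurements than its length and recovered by orthogonal matching pursuit. The measurement matrix, signal sizes, and solver are illustrative assumptions, not the patterns or reconstruction algorithm used in the study:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit of y on the columns selected so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement patterns
y = A @ x_true                                 # single-pixel measurements
x_rec = omp(A, y, k)
```

Here only `m = 32` measurements are taken for a signal of length 64; in single-pixel imaging, each row of `A` would correspond to one illumination pattern and each entry of `y` to one photocurrent reading.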
Proc. SPIE 10233, Holography: Advances and Modern Trends V
KEYWORDS: Holograms, Clouds, Holographic interferometry, Sensors, Digital holography, Data acquisition, Data modeling, 3D modeling, Wave propagation, Computer generated holography, Holography, RGB color model
Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point-cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. The mesh representation, on the other hand, reduces the number of elements needed to represent the object; even though computing the contribution of a single element takes longer than for a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is suited to fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point-cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can provide both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
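One standard FFT-based choice for the plane-to-plane propagation used by the depth-layer approach is the angular spectrum method. The sketch below propagates each depth layer to the hologram plane and sums the contributions; the wavelength, pixel pitch, layer depths, and two-point scene are hypothetical parameters for illustration only:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by distance z between parallel
    planes with the angular spectrum method (two FFTs per layer)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    # kz from the dispersion relation; evanescent components (arg < 0)
    # are suppressed rather than exponentially amplified.
    arg = 1 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Depth-layer approach: sum the propagated contribution of each layer
# at the hologram plane (toy two-layer scene, hypothetical parameters).
wl, dx = 633e-9, 8e-6                            # wavelength, pixel pitch [m]
layers = {0.10: np.zeros((256, 256), complex),
          0.12: np.zeros((256, 256), complex)}
layers[0.10][100, 100] = 1.0                     # point emitter in layer 1
layers[0.12][150, 150] = 1.0                     # point emitter in layer 2
hologram = sum(angular_spectrum_propagate(f, wl, dx, z)
               for z, f in layers.items())
```

Because each layer costs only two FFTs regardless of how many scene points it contains, the total cost scales with the number of depth layers rather than the number of points, which is why this approach suits real-time use.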
This article proposes a 3D reconstruction method using multiple depth cameras. Since a depth camera acquires depth information from a single viewpoint, a single camera is inadequate for 3D reconstruction. To solve this problem, we used multiple depth cameras and acquired depth information from different viewpoints. However, with multiple depth cameras it is difficult to acquire accurate depth information because of interference among the cameras. To address this, we propose a time-division multiplexing method in which depth information is acquired from the cameras sequentially. After acquiring the depth images, we extracted features using the Fast Point Feature Histogram (FPFH) descriptor and performed 3D registration with Sample Consensus Initial Alignment (SAC-IA). We reconstructed 3D human bodies with our system and measured body dimensions to evaluate the accuracy of the 3D reconstruction.
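At the core of registration methods such as SAC-IA is the estimation of a rigid transform from a set of correspondences (proposed, in the pipeline above, by FPFH matching). The sketch below shows only that inner step, the SVD-based Kabsch solution with known noise-free correspondences and no RANSAC sampling; it is a deliberate simplification, not a reimplementation of FPFH or SAC-IA:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    via the SVD-based Kabsch solution. In SAC-IA this estimation step is
    run repeatedly on correspondence samples proposed by feature matching."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation/translation between two "views".
rng = np.random.default_rng(2)
pts = rng.random((50, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
```

With real multi-camera data the correspondences are noisy and partly wrong, which is why SAC-IA wraps this estimation in a sample-consensus loop instead of solving it once.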