Spectral confocal technology is an important three-dimensional measurement technique offering high accuracy and non-contact operation; however, a traditional spectral confocal system usually consists of prisms and several lenses, making it bulky and heavy. Moreover, owing to the chromatic aberration of ordinary optical lenses, it is difficult to focus light perfectly over a wide bandwidth. Metasurfaces are expected to miniaturize conventional optical elements thanks to their superb ability to control the phase and amplitude of an incident wavefront at the subwavelength scale. In this paper, an efficient spectral confocal meta-lens (ESCM) working in the near-infrared spectrum (1300 nm-2000 nm) is proposed and numerically demonstrated. The ESCM focuses incident light at focal lengths from 16.7 to 24.5 μm along a perpendicular off-axis focal plane, with the numerical aperture (NA) varying from 0.385 to 0.530. The meta-lens consists of a group of Si nanofins providing a polarization conversion efficiency larger than 50%, and the phase required for focusing the incident light is accurately rebuilt from the resonant phase, which is proportional to the frequency, and the wavelength-independent geometric (Pancharatnam-Berry, PB) phase. Such dispersive components can also be used in instruments requiring dispersive devices, such as spectrometers.
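As a minimal sketch of the focusing-phase construction described above, the snippet below computes the ideal hyperbolic phase profile of a meta-lens and the nanofin rotation angles a geometric (PB) phase implementation would require. All numerical values (design wavelength, focal length, lattice pitch, array size) are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

wavelength = 1.55e-6      # design wavelength in m (assumed)
focal_length = 20e-6      # within the 16.7-24.5 um range reported
pitch = 0.6e-6            # nanofin lattice period (assumed)

n = 41                    # nanofins per side (assumed)
coords = (np.arange(n) - n // 2) * pitch
x, y = np.meshgrid(coords, coords)

# Hyperbolic phase profile that focuses a normally incident plane wave
# to an on-axis point at distance focal_length
phi = -2 * np.pi / wavelength * (np.sqrt(x**2 + y**2 + focal_length**2)
                                 - focal_length)

# For circularly polarized light the PB phase equals twice the nanofin
# rotation angle, so each fin is rotated by phi/2 (modulo pi)
theta = (phi / 2) % np.pi

print(theta.shape)        # one rotation angle per nanofin
```

The same construction applies per wavelength; the resonant-phase term of the paper would then supply the frequency-proportional correction on top of this geometric part.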
Metasurfaces are expected to miniaturize conventional refractive optics into planar structures; however, they suffer from large chromatic aberration caused by the high phase dispersion of their subwavelength building blocks, limiting their practical application in imaging and display systems. In this paper, a highly efficient broadband achromatic metasurface (HBAM) is designed and numerically demonstrated to suppress chromatic aberration over the continuous visible spectrum. The HBAM consists of TiO2 nanofins as the metasurface building blocks (MBBs) on a glass substrate, providing a broadband response and high polarization conversion efficiency for circularly polarized incidence across the desired bandwidth. The phase profile of the metasurface can be separated into two parts: a wavelength-independent basic phase distribution represented by the Pancharatnam-Berry (PB) phase, which depends only on the orientations of the MBBs, and a wavelength-dependent phase dispersion part. The HBAM applies resonance tuning to compensate the phase dispersion, and further eliminates chromatic aberration by integrating this phase compensation into the PB phase manipulation. The structural parameters of the HBAM are optimized in finite-difference time-domain (FDTD) simulations to enhance the efficiency and the achromatic focusing performance. Using this approach, the HBAM is capable of focusing light across the entire visible spectrum (400 nm to 700 nm) onto the same focal plane with spot sizes close to the diffraction limit. The minimum polarization conversion efficiency of most of the designed MBBs over this spectrum is above 20%. This design could be viable for various practical applications such as cameras and wearable optics.
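The phase decomposition described above can be sketched numerically: the required focusing phase at each radius and wavelength splits into a wavelength-independent part (realized by PB rotation, referenced here to the longest wavelength) plus a dispersion term linear in 1/λ that the nanofin resonance must compensate. The focal length and sampling below are illustrative assumptions.

```python
import numpy as np

f = 50e-6                         # focal length (assumed)
lam_max = 700e-9                  # reference (longest) wavelength
r = np.linspace(0, 20e-6, 200)    # radial positions on the lens (assumed)

def required_phase(r, lam):
    # ideal achromatic focusing phase: all wavelengths meet at (0, f)
    return -2 * np.pi / lam * (np.sqrt(r**2 + f**2) - f)

basic = required_phase(r, lam_max)            # PB (geometric) part

for lam in (400e-9, 550e-9, 700e-9):
    # resonance-tuned compensation, proportional to (1/lam - 1/lam_max)
    compensation = (-2 * np.pi * (1 / lam - 1 / lam_max)
                    * (np.sqrt(r**2 + f**2) - f))
    total = basic + compensation
    # the two parts reassemble the exact focusing phase at every wavelength
    assert np.allclose(total, required_phase(r, lam))
```

The design task is then to find, for each radius, a nanofin geometry whose resonant dispersion matches the compensation term while its orientation supplies the basic PB part.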
We report an approach to enhance the resolution of microscopy imaging using the Fourier ptychographic microscopy (FPM) method with a laser source and a spatial light modulator (SLM) to generate modulated sample illumination. The performance of existing FPM systems is limited by the low illumination efficiency of the LED array. In our prototype setup, a digital micromirror device (DMD) replaces the LED array as a reflective spatial light modulator and is placed at the front focal plane of the 4f system. A ring-pattern sample illumination is generated by coding the micromirrors on the DMD and converted to multi-angular illumination through the relay illumination system. A series of intensity images of the sample is obtained by changing the size of the ring pattern and then used to reconstruct a high-resolution image through the ring-pattern phase retrieval algorithm. Finally, our method is verified experimentally using a resolution chart. The results show that our method achieves higher reconstruction resolution and faster imaging speed.
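The core FPM update underlying such systems can be sketched on synthetic data: each angled illumination shifts a different region of the object spectrum into the objective's pupil, and replacing the amplitude of the low-resolution estimate with the measured intensity before writing the result back stitches an enlarged synthetic aperture. This is a generic single-pass FPM update on a toy object, not the paper's ring-pattern algorithm; all sizes and shifts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 64, 32                      # high-res grid and pupil (low-res) size

obj = rng.random((N, N)) + 1j * rng.random((N, N))
spectrum_true = np.fft.fftshift(np.fft.fft2(obj))

shifts = [(0, 0), (8, 0), (0, 8), (-8, 0), (0, -8)]   # illumination angles
c = N // 2

def crop(spec, dy, dx):
    # pupil-limited region of the spectrum selected by one illumination
    return spec[c + dy - n // 2: c + dy + n // 2,
                c + dx - n // 2: c + dx + n // 2]

# simulated low-resolution amplitude measurements for each illumination
measured = [np.abs(np.fft.ifft2(np.fft.ifftshift(crop(spectrum_true, dy, dx))))
            for dy, dx in shifts]

# reconstruction: start from a flat spectrum and apply the update once
spec = np.ones((N, N), dtype=complex)
for (dy, dx), amp in zip(shifts, measured):
    low = np.fft.ifft2(np.fft.ifftshift(crop(spec, dy, dx)))
    low = amp * np.exp(1j * np.angle(low))          # amplitude constraint
    spec[c + dy - n // 2: c + dy + n // 2,
         c + dx - n // 2: c + dx + n // 2] = np.fft.fftshift(np.fft.fft2(low))

print(spec.shape)   # spectrum now covers a synthetic aperture of all shifts
```

In practice this update is iterated until the overlapping spectrum regions become mutually consistent.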
Computational imaging spectrometry provides the spatial-spectral information of objects. This technology has been applied in biomedical imaging, ocean monitoring, military and geographical object identification, etc. Via compressive sensing with coded apertures, the 3D spatial-spectral data cube of a hyperspectral image is compressed into a 2D data array to alleviate the problems caused by huge amounts of data. In this paper, a 3D convolutional neural network (3D CNN) is proposed for the reconstruction of compressively sensed (CS) multispectral images. The network takes the 2D compressed data as input and gives an intermediate output with the same size as the original 3D data; a general image denoiser is then applied to it to obtain the final reconstruction result. The network, with one fully connected layer and six 3D convolutional layers, is trained on a standard hyperspectral image dataset. Although the compression rate is extremely high (16:1), the network performs well both in spectral reconstruction, demonstrated with single-point spectra, and in quantitative comparison with the original data in terms of peak signal-to-noise ratio (PSNR). Compared with state-of-the-art iterative reconstruction methods, e.g., two-step iterative shrinkage/thresholding (TwIST), this network features high-speed reconstruction and low spectral dispersion, which potentially guarantees more accurate identification of objects.
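The coded-aperture forward model that the network inverts can be sketched as follows: a 3D spatial-spectral cube is modulated by per-band binary coded apertures and summed along the spectral axis into a single 2D measurement, which for 16 bands gives the 16:1 compression quoted above. The spatial size and random codes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, B = 32, 32, 16                     # height, width, spectral bands

cube = rng.random((H, W, B))             # hyperspectral data cube
codes = rng.integers(0, 2, size=(H, W, B)).astype(float)  # coded apertures

# compressive measurement: modulate each band, then sum over wavelength
measurement = (cube * codes).sum(axis=2)  # 2D compressed snapshot

print(measurement.shape)                    # (32, 32)
print(cube.size // measurement.size)        # compression ratio: 16
```

The reconstruction network then learns the mapping from `measurement` (plus the known codes) back to an `(H, W, B)` cube, with the denoiser applied as a post-processing step.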
A novel method is proposed in this paper to accurately reconstruct three-dimensional scenes from a passive single-shot exposure with a lenslet light field camera. The method achieves better 3D scene reconstruction by exploiting both the defocus and disparity depth cues captured by the light field camera. First, the light field data are used to refocus and shift viewpoints, yielding a focal stack and multi-view images. In the refocusing procedure, the phase shift theorem in the Fourier domain is introduced to substitute for shifts in the spatial domain, so sharper focal stacks can be obtained with less blurring and 3D scenes can be reconstructed more accurately. Next, disparity depth cues are obtained from the multi-view images by performing a correspondence measure. The focal stack is then used to compute defocus depth cues via a focus measure based on gray-level variance. Finally, a focus cost integrating both defocus and disparity depth cues is built, and an accurate depth map is estimated using graph cuts based on this cost. With this accurate depth map and the all-in-focus image, the 3D structure of the real-world scene is accurately reconstructed. Our method is verified on a number of synthetic and real-world examples captured with a dense camera array and a Lytro light field camera.
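The Fourier-domain shift at the heart of the refocusing step can be sketched directly: instead of shifting each sub-aperture image in the spatial domain, its spectrum is multiplied by a linear phase ramp (the shift theorem), which supports sub-pixel shifts without interpolation blur. The demo data and viewpoint offsets below are synthetic assumptions.

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift a 2D image by (dy, dx) pixels via the Fourier shift theorem."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * ramp).real

rng = np.random.default_rng(2)
img = rng.random((16, 16))

# sanity check: for an integer shift the result matches a circular roll
shifted = fourier_shift(img, 3, 5)
assert np.allclose(shifted, np.roll(img, (3, 5), axis=(0, 1)))

# a refocused slice is the average of all sub-aperture views, each shifted
# in proportion to its viewpoint offset (the offsets encode the depth)
views = [rng.random((16, 16)) for _ in range(4)]
offsets = [(-1.5, 0.0), (1.5, 0.0), (0.0, -1.5), (0.0, 1.5)]  # sub-pixel
refocused = np.mean([fourier_shift(v, dy, dx)
                     for v, (dy, dx) in zip(views, offsets)], axis=0)
print(refocused.shape)   # (16, 16)
```

Repeating the averaging for a range of offset scales produces the focal stack from which the defocus cue is measured.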