We report a novel state-of-the-art diffractive-optical-element (DOE) based waveguide architecture for augmented reality (AR) displays with an increased field of view, together with a method for the analytical design of such an architecture. The effectiveness of the architecture comes from reusing the same propagation directions inside the waveguide for different parts of the field of view. Unlike previous solutions, where such an approach would lead to crosstalk, in the proposed architecture different parts of the field of view propagate in different waveguide locations, separated by the corresponding DOEs. The architecture can be applied either to increasing the vertical field of view or to increasing the horizontal field of view while compensating the chromatic dispersion caused by diffraction. The architecture configuration, analytical derivations of the DOE parameters, and modeling results are discussed. The architecture satisfies market demands for form factor, size, and weight, and allows up to a fourfold increase of the field of view compared with conventional solutions. For a DOE refractive index of 1.5, the architecture provides a 48×44-degree white-light field of view within two waveguides and a 56×56-degree white-light field of view within three waveguides. For a DOE refractive index of 1.9, it provides a 58×58-degree white-light field of view within only one waveguide.
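The scaling of the achievable field of view with the DOE refractive index can be illustrated with a standard back-of-the-envelope TIR argument (a simplified estimate, not the paper's derivation; the 75-degree grazing limit and the symmetric-field assumption below are illustrative assumptions):

```python
import math

def tir_fov_deg(n, theta_max_deg=75.0):
    """Estimate the one-dimensional in-air FOV a single diffractive
    waveguide band can carry.  Guided rays must satisfy TIR
    (theta > asin(1/n)) and stay below a practical grazing limit
    theta_max (assumed 75 deg here).  A grating shifts this band
    into air by a constant in sine space, so the usable width of
    sin(theta_air) is n*sin(theta_max) - n*sin(theta_TIR)
    = n*sin(theta_max) - 1."""
    s_band = n * math.sin(math.radians(theta_max_deg)) - 1.0
    # assume the field is centered on the waveguide normal
    return 2.0 * math.degrees(math.asin(s_band / 2.0))

print(round(tir_fov_deg(1.5), 1))  # single-band estimate for n = 1.5
print(round(tir_fov_deg(1.9), 1))  # single-band estimate for n = 1.9
```

The estimate shows why a higher refractive index widens the field of view; the direction-reuse scheme described above then multiplies what a single angular band can deliver.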
The mismatch between the positions of virtual images and the see-through view constitutes a serious problem in virtual and augmented reality optical systems with a single projection plane and may lead to user discomfort: eye fatigue, headache, and nausea. To solve these problems, a tunable lens forming several projection planes at different locations can be used. The developed varifocal lens consists of two tunable liquid-crystal cells: the first cell, for fine adjustment, varies the optical power from 1 D to 3 D; the second cell, for coarse adjustment, varies it from 0.25 D to 1 D. The total dioptric range is -4 D to +4 D with an equidistant step of 0.25 D, which forms 33 projection planes. The electrode pattern, made of indium zinc oxide, consists of rings corresponding to Fresnel zones, each zone being divided into multiple subzones. To minimize the number of control electrodes (bus lines) while keeping high diffraction efficiency, the bus lines shunt together all of the corresponding subzones in all of the zones. The developed lens is tested with AR glasses based on a holographic waveguide; displacement of the virtual image from 250 mm to 1 m is demonstrated.
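The stated plane count follows directly from the dioptric range and step, and the demonstrated displacement distances map onto that range via vergence; a quick check (variable names are illustrative only):

```python
# Dioptric states of the stacked lens: -4 D ... +4 D in equidistant 0.25 D steps
step = 0.25
powers_d = [-4.0 + step * i for i in range(int(8.0 / step) + 1)]
n_planes = len(powers_d)      # (8 / 0.25) + 1 = 33 projection planes

# The demonstrated image displacement expressed as vergence:
# 250 mm corresponds to 1 / 0.25 m = 4 D, and 1 m to 1 D.
near_d = 1.0 / 0.25
far_d = 1.0 / 1.0

print(n_planes, near_d, far_d)
```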
Augmented reality (AR) systems have attracted huge interest over the last decade, since they are predicted to become the next generation of consumer mobile devices. One of the key parameters of an AR system is the field of view. The best performance in this regard is shown by DOE/HOE-based planar-waveguide systems, since they provide the widest field of view among all approaches even with the simplest architecture. However, it is still not wide enough for consumers, so more complex architectures are being created. In this work, a novel approach to reaching a wide field of view is proposed. It is based on eyebox magnification in two directions by two different waveguide systems. The first system provides magnification along the axis with the wider field of view and consists of waveguides inclined along the central beam of the field of view, with HOE-based 1D gratings providing TIR diffraction in both the +1 and -1 orders. The TIR condition is reached more easily because of the inclination, so a wider angular spectrum can be transferred. The second system provides magnification along the axis with the narrower field of view and consists of a conventional HOE-based periscope system with in-coupling and out-coupling zones. The working principle of the system, the HOE specifications, and the main advantages, challenges, and solutions are discussed. The proposed system allows a 60-degree diagonal field of view in white (RGB) light.
Factored light-field (LF) technology helps resolve the vergence-accommodation conflict inherent in most conventional stereoscopic displays. The remaining challenges include decreasing the computational cost of light-field factorization and improving image quality. We prototyped a dual-layer light-field stereoscope with a smartphone used as the display, and implemented and compared three different methods of rank-one LF factorization and two ways of initializing them. The weighted rank-one residual iterations (WRRI) and the weighted nonnegative matrix factorization (WNMF) proved almost twice as fast as Huang et al.'s method in our implementation. Our tests revealed that the best initialization for all three methods is by the square root of the LF central-view values; with it, one to two iterations are enough to achieve acceptable image quality.
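The weighted rank-one update at the core of such factorizations has a compact closed form: holding one layer fixed, the other is a weighted least-squares solution. A minimal NumPy sketch of the alternating updates (our own illustration, not the prototype's code; the function and variable names are hypothetical):

```python
import numpy as np

def wrri_rank1(L, W, a0, b0, iters=2):
    """Weighted rank-one factorization L ~ outer(a, b).

    L : (m, n) target light-field matrix (front-layer x rear-layer pixels)
    W : (m, n) nonnegative weights
    Alternates closed-form weighted least-squares updates for a and b,
    minimizing sum_ij W_ij * (L_ij - a_i * b_j)**2."""
    a, b = a0.astype(float).copy(), b0.astype(float).copy()
    eps = 1e-12  # guard against empty (all-zero-weight) rows/columns
    for _ in range(iters):
        # b_j = sum_i W_ij L_ij a_i / sum_i W_ij a_i^2
        b = (W * L * a[:, None]).sum(axis=0) / ((W * (a ** 2)[:, None]).sum(axis=0) + eps)
        # a_i = sum_j W_ij L_ij b_j / sum_j W_ij b_j^2
        a = (W * L * b[None, :]).sum(axis=1) / ((W * (b ** 2)[None, :]).sum(axis=1) + eps)
    return a, b

# Toy check: an exactly rank-one light field is recovered in one pass,
# starting from a square-root-style initialization as in the text.
a_true = np.array([1.0, 2.0, 3.0])
b_true = np.array([0.5, 1.0, 1.5, 2.0])
L = np.outer(a_true, b_true)
W = np.ones_like(L)
a, b = wrri_rank1(L, W, a0=np.sqrt(L[:, 1]), b0=np.sqrt(L[1, :]), iters=1)
```

The closed-form updates explain why so few iterations suffice: with a good initialization, each sweep already solves the subproblem for one layer exactly.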
We propose a multimodal sensor and an algorithm for automatic recognition of food intake based on the glycemic response. Embedding this sensor in a wearable device makes it possible to count the number of meals over a given time and to generate a personalized statistical pattern of eating habits. This pattern may have a significant impact on both personal health care and big-data-driven social engineering. We use near-infrared diffuse reflectance spectroscopy, bioimpedance measurements, and binary classification for non-invasive continuous glucose-trend measurement, and Fourier-transform-based time-frequency analysis of glucose trends for characterization of eating patterns and prediction of digestive-system abnormalities. We tested the sensor in a series of experiments with a certain type of food and achieved 45% average accuracy of food-intake recognition, with the random-chance level being 25%.
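The Fourier-based eating-pattern analysis can be illustrated on a synthetic glucose trend (entirely simulated data; the 8-hour meal period and 5-minute sampling interval are assumptions made for this illustration):

```python
import numpy as np

# Toy glucose trend: baseline plus periodic meal responses,
# sampled every 5 minutes over one day (hypothetical signal).
t = np.arange(0, 24 * 3600, 300.0)     # seconds, one sample per 5 min
meal_period = 8 * 3600.0               # a meal every 8 hours
glucose = 5.0 + 1.5 * np.maximum(0.0, np.sin(2 * np.pi * t / meal_period))

# Fourier analysis: after removing the DC baseline, the dominant
# spectral peak sits at the meal frequency (1 / 8 h) -- the
# eating-pattern signature the abstract refers to.
spec = np.abs(np.fft.rfft(glucose - glucose.mean()))
freqs = np.fft.rfftfreq(len(t), d=300.0)
f_peak = freqs[np.argmax(spec)]        # dominant frequency in Hz
```

On real data the meal responses are irregular, so a windowed (time-frequency) transform rather than a single global FFT would track how the pattern drifts over days.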
A technique for determining unhealthy nutrition trends is described. The combination of optical spectroscopy and electrical impedancemetry will lead to the development of a healthcare device that predicts unhealthy eating habits and decreases the risk factors of disease development.