This PDF file contains the front matter associated with SPIE Proceedings Volume 11876, including the Title Page, Copyright information, and Table of Contents.
For the past decade, optics and display hardware development for mixed reality and smart glasses has largely been exploratory, providing just enough display immersion and visual comfort for developers to build applications, especially in the enterprise field. Today, as universal consumer use cases emerge, such as co-presence, digital twins, and remote conferencing, new optical functionalities are required to enable these experiences. It is not only a race toward smaller form factor, lighter weight devices with large field of view (FOV) and lower power; the requirements now also include additional display and sensing features specifically tuned to these new universal use cases. Broad acceptance of wearable displays, especially in the consumer market, is contingent on delivering these new display and sensing capabilities in small form factors and at low power. This talk focuses on waveguide combiner technologies and how these architectures have evolved over recent years to address these new requirements.
My colleagues, Dr. Nils Haverkamp and Dr. Richard Youngworth, and I would like to welcome
you to the second Optical Instrument Science, Technology, and Applications conference at the
2021 SPIE European Optical Design event.
Optical instruments are a critical area of development, enabling science and many emerging
engineering technologies. This second conference consists of four sessions of high-quality
presentations, a poster session, a live networking session, and proceedings articles. This year’s
sessions focus on developments and advancements in medical devices and microscopes,
instruments and concepts for 2D and 3D imaging, structured light, computational methods,
prototyping, and metrology.
From the call for papers: “This Optical Instrument Science, Technology, and Applications
conference has been created to further enable the integration of components, design, and
modelling key to successful optical instrument development and applications. The focus of this
conference is on optical systems and instruments, along with applications enabled by such
methods.” The conference also provides a forum to encourage technology development that is
imperative for future advances in optical instrument science and technology.
We plan to continue this conference at the next European Optical Design event. We encourage
everyone interested in optical instrument science, technology, and applications to look for the
call for papers and to submit your work. We certainly value the quality submissions as well as
the opportunity to help facilitate and take part in the community’s interaction. Please feel free to
contact us or anyone on our program committee if you have any questions. We look forward to
seeing you at the next event to further discuss this exciting area of optics and photonics.
We sincerely thank our contributing presenters and the wonderfully supportive community for
making the sessions of this conference such a success. We must also thank our excellent program
committee and the SPIE staff for their ideas and promotion of this conference.
Dr. Breann N. Sitarski,
on behalf of SPIE and her Co-Chairs
The design of the first lattice light-sheet microscope equipped with incoherent holographic detection for neuronal imaging is presented. The device is designed to capture 3-D complex amplitude images without moving either the sample stage or the detection microscope objective. The system is built onto a conventional lattice light-sheet (LLS) microscope as a second detection arm, equipped with an incoherent holographic optical design and a monochromatic CMOS sensor. The compact system could be mounted on any lattice light-sheet or light-sheet instrument, owing to the flexibility of changing the numerical aperture of the excitation light by changing the annulus of the diffraction mask. For this study, fluorescence imaging is supported by illumination at 488 nm. This work relies on the self-interference property of the emitted fluorescent light, in which three or four Fresnel patterns are projected onto the samples to create interference patterns of a 3-D object using a phase-shifting technique. The projection of the diffraction patterns of the samples is achieved with a spatial light modulator, which allows superposition of single-lens (IHLLS 1L) or dual-lens (IHLLS 2L) patterns with randomly selected pixels. The focal lengths of the lenses are calculated in two steps using OpticStudio (Zemax, LLC) to provide the optimal compromise between the requirements for magnification and dual-beam size matching at the camera plane on the one hand and the space between the objective and camera on the other. We used the IHLLS 1L for calibration purposes and the IHLLS 2L for recording sample holograms. The system allows the generation of high-resolution amplitude and phase images with a larger scanning area and depth of field than the original LLS. Neuronal 3-D maps are built from sets of images acquired at various z-sections, determined by galvanometric-mirror depth positions in the sample. This paper briefly describes the concept of the instrument and details its optical design. An overview of the key performances is also provided.
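As an illustrative aside (not taken from the paper): the phase-shifting step described above is, in its standard form, a matter of combining three or four intensity frames recorded at known relative phase shifts into one complex-valued hologram. The sketch below assumes the common four-step variant with equal π/2 steps; the function name and synthetic data are hypothetical.

    import numpy as np

    def reconstruct_complex_hologram(I0, I1, I2, I3):
        """Standard four-step phase-shifting reconstruction (illustrative sketch).

        I0..I3 are intensity frames recorded with relative phase shifts of
        0, pi/2, pi and 3*pi/2 between the interfering beams. The returned
        complex hologram carries the amplitude and phase of the 3-D object.
        """
        return (I0 - I2) + 1j * (I1 - I3)

    # Example with synthetic frames standing in for real camera data.
    rng = np.random.default_rng(0)
    frames = [rng.random((512, 512)) for _ in range(4)]
    hologram = reconstruct_complex_hologram(*frames)
    amplitude, phase = np.abs(hologram), np.angle(hologram)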
Processes such as single-cell isolation demand a highly sterile environment and a high-magnification platform, such as an optical microscope, to analyze the growth of cell lines and to precisely select cells from them. State-of-the-art technologies fail to provide microscopes that can operate inside biosafety cabinets, so samples are repeatedly moved between the safety cabinet and the microscope for analysis. This, in turn, increases the risk of contamination of the sample as well as of the laboratory surroundings. We report the design and development of an automated optical microscope that fits inside the safety cabinet and can be employed for cell isolation processes. Experiments were carried out using the developed imaging system, and the results demonstrate the system's reliability for cell biology applications.
Photoacoustic imaging is an emerging functional imaging method for biomedical applications. It combines the optical absorption characteristics of tissues with the advantages of ultrasonic detection, offering strong contrast, high sensitivity, and large imaging depth. In this article, the finite-element software COMSOL Multiphysics is used to study the thermal expansion process of the photoacoustic effect caused by the interaction of ultrashort laser pulses with tissue. In COMSOL, a laser with a pulse width of 5 ns and a wavelength of 532 nm is used as the excitation source. The mathematics, heat transfer, solid mechanics, and pressure acoustics modules are used to simulate the photothermal conversion, thermal expansion, and ultrasound generation in photoacoustic imaging of a gastric tissue-tumor system. In this way, the photoacoustic signal and its image are obtained. This research focuses on exploring the thermal expansion process in the photoacoustic effect.
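For orientation, the thermal-expansion step simulated here is commonly summarized, under stress and thermal confinement, by the standard photoacoustic generation relation (the notation below is generic and not taken from the paper):

    p_0 = \Gamma \, \mu_a F, \qquad \Gamma = \frac{\beta c^2}{C_p}

where p_0 is the initial pressure rise, \mu_a the optical absorption coefficient, F the local laser fluence, \beta the thermal expansion coefficient, c the speed of sound, and C_p the specific heat capacity at constant pressure.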
Optical imaging techniques such as voltage-sensitive dye imaging and intrinsic imaging allow for the recording of neuronal activity at high spatio-temporal scales over a large field of view, revealing mesoscopic-scale dynamics such as propagating waves. In practice, however, the achievable image quality deteriorates significantly away from the point of best focus due to the curvature of the brain, which fundamentally limits the spatial extent of the cortex that can be studied in a single image. To improve the field of view achievable by optical imaging, we developed a new optical system adapted to the curvature of the non-human primate brain under study. This is achieved by using a curved detector in combination with an appropriate optical system of double-Gauss and aspherical lenses. Furthermore, to ensure uniform and reliable illumination of the cortex, we designed and built a new illumination system consisting of a ring of LEDs at four different wavebands. This static solution will enable, for the first time, imaging of neuronal activity over a very large field of view (15-20 mm) with high spatial and temporal resolution. Preliminary results show a significant increase of the in-focus area of objects imaged through the custom optics compared with standard neuronal imaging optics.
Additive Manufacturing (AM) is often not considered as a manufacturing technique for mounting structures, because the assembly of optical systems usually requires small manufacturing tolerances that are currently hard to achieve with AM. Compared with conventional manufacturing techniques, however, AM has the advantage of easily individualizing each manufactured structure. Hence, mounting structures can be designed and adapted such that the measured inner decentrations of the conventionally manufactured optical elements to be mounted can be compensated. In this way, the error budget contributions from optical elements can be minimized, either to relax the tolerances on the elements or their mounting structures, or to reduce the overall system tolerance. To prove this concept, we designed an optical system to quantify small displacements of optical elements within a simple-to-replace additively manufactured mounting structure. From analysing the recorded intensity distribution, one can accurately quantify the lateral decentration of one optical element with respect to the other. First experimental results show that it is feasible to control and adjust the decentration of an optical element mounted in an additively manufactured structure.
Using conventional Mueller matrix ellipsometry, the geometries of periodic nanostructures can be easily determined as long as the measurement fields are not smaller than the illumination spot. Measurements on individual nanostructures smaller than this can be addressed with imaging ellipsometry, which allows all 16 Mueller matrix elements to be measured for each pixel of the imaging system's camera. These so-called Mueller matrix images contain additional information about the spatial distribution of the sample's polarizing properties that is useful for the characterization of individual nanostructures. We built an imaging Mueller matrix ellipsometry system for measurements in the visible regime. Our system allows the analysis arm, which holds the CCD camera and the polarization state analyser, to be rotated freely around the sample. In this way, measurements in reflection and in transmission can be performed at arbitrary angles of incidence between 37.5° and 90°. Additionally, we implemented a reflection mode for 0° angle of incidence. Using this setup, our goal is to characterize the shape of individual nanostructures much smaller than the illumination spot using the additional information from the Mueller matrix images. To this end, we designed and fabricated a sample containing various individual nanostructures with different geometrical features. The structures are of square or circular shape, ranging in size from 5 µm to 50 nm. Additionally, the square structures feature corner rounding with different radii to provide a transition between circle and square. With these structures, we systematically measure the influence of shape on the Mueller matrix elements. We also investigate the use of Mueller matrix images for the characterization of subwavelength-sized features significantly smaller than the resolution limit of our microscope system of about 800 nm. First results show clear distinctions between opposing edges of the nanostructures in the off-diagonal Mueller matrix images.
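To illustrate what a Mueller matrix image represents (a generic sketch, not code from this work): each pixel stores a 4x4 matrix that maps the incident Stokes vector to the exiting one, so the per-pixel polarization response can be evaluated as below. The array shapes and the input state are assumptions made for the example.

    import numpy as np

    # One 4x4 Mueller matrix per pixel; identity matrices used as placeholder data.
    H, W = 256, 256
    M = np.tile(np.eye(4), (H, W, 1, 1))
    S_in = np.array([1.0, 1.0, 0.0, 0.0])   # Stokes vector [I, Q, U, V], horizontal polarization

    # Exiting polarization state at each pixel: S_out(x, y) = M(x, y) @ S_in.
    S_out = np.einsum('hwij,j->hwi', M, S_in)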
The inactivation of airborne pathogens inside closed spaces is a critical issue that has gained overwhelming attention during the current SARS-CoV-2 pandemic. Among the different technologies for air sanitization, ultraviolet germicidal irradiation is a trending technique, also owing to the fast development of increasingly effective ultraviolet LED sources, which are expected to replace mercury vapor lamps in the next few years. Positioning LEDs inside cavities with highly reflective surfaces enhances the internal irradiance and enables the development of compact devices. Optical simulations by means of ray tracing are fundamental, since an accurate irradiance estimation in the presence of multiple internal reflections, scattering, light leaks outside the cavity, and the sources' angular emission distributions is not possible with analytical calculations alone. Ray tracing makes it possible to model the spatial irradiance inside the cavity while varying the component parameters to maximize the inactivation rate as a function of the air flow field. Based on our experience from several related projects, we discuss the advantages of using this numerical approach to simulate such devices, focusing on the critical parameters that must be controlled to obtain a reliable estimate of system performance.
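The inactivation rate that such an irradiance map feeds into is often modelled with a first-order dose-response law; the sketch below (the rate constant and numbers are illustrative assumptions, not values from the paper) shows how a ray-traced irradiance and a residence time translate into a surviving fraction.

    import numpy as np

    def surviving_fraction(irradiance_mw_cm2, exposure_time_s, k_cm2_per_mj):
        """First-order UV dose-response model (illustrative).

        dose = irradiance * time [mJ/cm^2]; S = exp(-k * dose), where k is a
        pathogen-specific susceptibility constant.
        """
        dose = irradiance_mw_cm2 * exposure_time_s
        return np.exp(-k_cm2_per_mj * dose)

    # Example: 2 mW/cm^2 for 1 s with an assumed k of 0.5 cm^2/mJ -> ~63% inactivation.
    print(surviving_fraction(2.0, 1.0, 0.5))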
Spatial light modulators (SLMs) are common in many applications; they are used to implement amplitude, phase, and polarization masks. In order to optimize an SLM's performance, it is important to characterize it, which means determining its Jones matrix. Here we present a method that consists of performing several intensity measurements for each gray level. It is simple enough to be performed quickly, yet offers much better results than previous methods.
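As a generic illustration of such a characterization (the forward model below is a textbook polarizer-SLM-analyzer arrangement, not necessarily the authors' exact setup): for each gray level one records intensities for several polarizer/analyzer settings and fits the Jones matrix elements to them.

    import numpy as np

    def polarizer(theta):
        """Jones matrix of an ideal linear polarizer at angle theta."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c * c, c * s], [c * s, s * s]])

    def predicted_intensity(J, pol_angle, ana_angle):
        """Intensity after polarizer -> SLM (Jones matrix J) -> analyzer.

        In a fit, J would be adjusted so that these predictions match the
        measured intensities for all polarizer/analyzer combinations.
        """
        e_in = np.array([1.0, 0.0])                      # horizontally polarized input
        e_out = polarizer(ana_angle) @ J @ polarizer(pol_angle) @ e_in
        return float(np.sum(np.abs(e_out) ** 2))

    # Example: a pure phase retarder standing in for one SLM gray level.
    J = np.diag([1.0, np.exp(1j * np.pi / 3)])
    print(predicted_intensity(J, np.deg2rad(45), np.deg2rad(45)))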
The Provence Adaptive optics Pyramid Run System (PAPYRUS) is a pyramid-based adaptive optics (AO) system that will be installed at the Coudé focus of the 1.52 m telescope (T152) at the Observatoire de Haute Provence (OHP). The project is being developed by PhD students and postdocs across France, with support from staff members, consolidating the existing expertise and hardware into an R&D testbed. This testbed allows us to run various pyramid wavefront sensing (WFS) control algorithms on-sky and to experiment with new concepts for wavefront control, with the additional benefit of the high number of nights available at this telescope. It will also function as a teaching tool for students during the planned AO summer school at OHP. To our knowledge, this is one of the first pedagogic pyramid-based AO systems on-sky. The key components of PAPYRUS are a 17x17-actuator Alpao deformable mirror with an Alpao RTC, a very low-noise OCAM2k camera, and a four-faced glass pyramid. PAPYRUS is designed to be a simple and modular system for exploring wavefront control with a pyramid WFS on-sky. We present an overview of PAPYRUS, a description of the opto-mechanical design, and the current status of the project.
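For readers unfamiliar with such systems, the wavefront control loop that PAPYRUS lets one experiment with is, in its simplest form, an integrator that maps WFS slopes to DM commands through a pre-calibrated command matrix. The sketch below is a generic illustration with assumed array sizes, not the actual PAPYRUS real-time software.

    import numpy as np

    def ao_step(dm_command, slopes, command_matrix, gain=0.4):
        """One integrator iteration: previous DM command and current slope
        vector in, updated DM command out."""
        return dm_command - gain * (command_matrix @ slopes)

    # Placeholder sizes: 17x17 actuators, 600 pyramid WFS slope measurements.
    n_act, n_slopes = 17 * 17, 600
    command_matrix = np.zeros((n_act, n_slopes))   # from an interaction-matrix calibration
    dm_command = np.zeros(n_act)
    dm_command = ao_step(dm_command, np.zeros(n_slopes), command_matrix)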
One of the main problems of mixed reality devices is the physically correct representation of the luminance distribution of virtual objects and their shadows in the real world. In other words, restoring the correct distribution of scene luminance is one of the key requirements for achieving correct interaction between the virtual and real worlds. The paper proposes methods for restoring the parameters of light sources. The work is also aimed at creating and formulating criteria for the quality of visual perception that allow evaluating a synthesized mixed reality image and deciding how natural it appears from the observer's point of view. In this work, surveys were used to create datasets built with realistic rendering software tools. A neural network is trained to recognize image regions that do not fit into the environment and to classify such an image as causing visual discomfort. As criteria for the quality of visual perception, we propose to use estimates of the mismatch between the parameters of shadows cast by virtual objects and the luminance distribution over these objects in images of scenes containing models of "real" and "virtual" objects. The level of mismatch is estimated with respect to the true lighting conditions of the real world. In this work, criteria for the quality of visual perception were formulated and a neural network was trained, which makes it possible to assess and analyze the quality of a synthesized mixed reality image.
We present a UV spectrograph with an in-vacuum resolution R = λ/Δλ ≈ 20000 covering the 70-400 nm wavelength range in an échelle configuration. The instrument is now in the assembly, test, and verification phase, as a milestone of the LAPSUS project (LAboratory Plasma Spectroscopy for Ultraviolet Space), funded by the Italian Space Agency in 2020 and awarded to the Italian National Institute for Nuclear Physics – Laboratori Nazionali del Sud (INFN-LNS). The goal of the project is to build an experimental atomic database in the UV spectral range, useful for interpreting astrophysical spectra acquired by space missions. For this purpose, the LAPSUS spectrograph will be coupled to the plasma traps operating at INFN-LNS, in order to apply high-resolution spectroscopy to the emission of laboratory plasmas resembling astrophysical environments.
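As a quick worked example of what the quoted resolving power means (the wavelength is chosen only for illustration): near the middle of the band,

    \Delta\lambda = \frac{\lambda}{R} \approx \frac{200\ \text{nm}}{20000} = 0.01\ \text{nm},

i.e. spectral features separated by about 0.01 nm at 200 nm can be resolved.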
Objects with microrough surfaces and heterogeneous media are widely used in modern optical devices, for instance in light-guiding devices, car dashboards, and other illumination systems, as well as in scene elements intended to generate images of photorealistic quality. The surface or volume scattering of these objects can have a very complex shape with pronounced peaks, far from ideal specular or ideal diffuse behaviour, and is typically described with bi-directional scattering distribution functions (BSDF). If the model reconstructing the BSDF measurement device contains inaccuracies, this can result in errors in the reconstructed parameters of the scattering objects and seriously compromise the entire optical simulation that uses them. In the simulation model, it is therefore highly preferable to model the optical parts as closely as possible to the real measurement device. Another problem is the computational speed of the model used for BSDF simulation. BSDF measurement devices have very high angular and spatial resolutions, which makes them very inefficient from a simulation point of view. Moreover, BSDF functions are multidimensional, so many calculations are required. These problems impose careful selection of the simulation model, reasonable simplifications of it, and the choice of a proper ray-tracing engine. In this paper, the virtual prototyping of different industrial measurement devices is considered. These devices are quite different and thus require different approaches in computer model design. The results of virtual prototyping for several samples with complex scattering properties are presented both numerically, as angular intensity distributions, and qualitatively, as photorealistic images. All virtual prototyping results are compared with measurements to prove the reliability and soundness of the built models.
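To make the computational-cost point concrete (a generic sketch, not the models used in the paper): even a simple analytic stand-in for a measured BSDF has to be evaluated over dense incidence/observation angle grids, which is where a carefully chosen ray-tracing engine and model simplifications pay off. The toy BSDF and its parameters below are assumptions for illustration only.

    import numpy as np

    def toy_bsdf(theta_in, theta_out, phi_out, kd=0.3, ks=0.7, shininess=50.0):
        """Toy analytic BSDF: Lambertian term plus a Phong-like specular lobe.

        Real measured BSDFs are tabulated on dense angular grids; this stand-in
        only shows how a virtual goniophotometer model samples the function.
        Incident azimuth is taken as 0, so the mirror direction lies at phi = pi.
        """
        diffuse = kd / np.pi
        cos_alpha = (np.sin(theta_in) * np.sin(theta_out) * np.cos(phi_out - np.pi)
                     + np.cos(theta_in) * np.cos(theta_out))
        specular = ks * (shininess + 2) / (2 * np.pi) * np.clip(cos_alpha, 0, 1) ** shininess
        return diffuse + specular

    # Sample the scattering lobe along the detector arm of a virtual goniophotometer.
    theta_out = np.deg2rad(np.arange(0.0, 90.0, 1.0))
    values = toy_bsdf(np.deg2rad(30.0), theta_out, phi_out=np.pi)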
In oil and gas production, it is important to determine the type and chemical composition of formation fluids in the pipeline in real time. The different types of hydrocarbon formations present in the downhole environment have distinct optical characteristics, such as spectral transmittance, diffusion, fluorescence, and refractive index. In formation sampling and testing, knowing the refractive index of a downhole fluid makes it possible to identify the fluid (water, gas, or oil), estimate various properties such as salinity and crude-oil density, and monitor cleanup processes. The paper describes a new design for an on-axis inline process refractometer for continuous fluid measurements using a defocusing imaging technique. Design principles for the refractometer are discussed, and the dependence of the defocus effect on the refractive index is calculated. Experimental energy distributions for different liquid samples are obtained and analysed.
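The underlying idea can be illustrated with the textbook paraxial focus shift introduced by a plane-parallel layer of fluid of thickness t and refractive index n (the actual instrument geometry will of course differ):

    \Delta z = t \left( 1 - \frac{1}{n} \right)

so, for an assumed 5 mm fluid path, water (n ≈ 1.33) shifts the focus by about 1.24 mm, while a fluid with n ≈ 1.45 shifts it by about 1.55 mm; measuring the defocus therefore discriminates between fluids of different refractive index.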
This research focuses on the possibility of building an alternative mixed reality (MR) system that eliminates the main causes of visual discomfort and forms a model of the real world in virtual space that corresponds to it as closely as possible. The relationship between virtual reality (VR) systems, which are limited to models of their own virtual world, and MR systems, which add virtual objects to the real world, is examined. This paper presents an approach based on generating a point cloud of static objects using the RGB-D sensors of the MR device, followed by its classification and segmentation, and then a search for a similar CAD object in an appropriate database. The found virtual analogue, after appropriate transformations, replaces the real object in the scene. To increase realism, RGB images of the real objects can be superimposed as textures on the corresponding virtual scene objects. The paper proposes a multimodal approach, which consists in searching for objects with similar modalities in databases. A virtual scene created in a single space using this approach eliminates the possibility of forming unnatural lighting and observation conditions for all objects, including virtual copies of real objects and added virtual objects.
The article discusses the implementation of a hardware neuroaccelerator based on an FPGA of the Cyclone IV series with 115,000 logic elements. An assessment of the requirements on the hardware resources of the computing platform is given. The features of implementing the neural network for tasks in cognitive robotics and industrial production are investigated, with the goal of improving safety in human-robot interaction.
Machine vision systems used in modern industrial complexes are based on the analysis of multi- and hyperspectral imaging. The transition to the "Industry 4.0" program is not possible using only one type of data. The first control systems used only visible-range images; they made it possible to analyze the trajectories of moving objects, control product quality, carry out security functions (control of perimeter crossing), and so on. The development of new industrial robotic cells and processing complexes with cognitive functions implies the acquisition, analysis, and processing of heterogeneous data. The construction of a unified information field, which allows multidimensional operations on the data, increases the speed of decision-making and enables automated robot-human systems at the level of an assistant working in a unified workspace. Machine vision systems analyze information received in the visible range (shape, trajectory of movement, position of objects, etc.); the near-infrared range (data similar to the visible range, allowing operation in dusty, foggy, and low-light conditions); the far-infrared (thermal) range (plotting temperature gradients, identifying areas of overheating); the ultraviolet range (analysis of ionization sources, corona discharges, static charges, and tags); the X-ray and microwave ranges (analysis of the surface and internal structure of objects, allowing the identification of defects); and range and 3D sensors (construction of volumetric figures, analysis of the relative position of objects and their interaction). Data analysis is often performed not by a single camera but by a group of sensors that are not located in a single housing. Primary data integration reduces the number of information channels while maintaining the functionality and accuracy of the analysis. The article discusses the fusion of images obtained by industrial sensors into a combined image containing joint data. Combining multi- and hyperspectral imaging makes it possible to increase the efficiency of existing systems and to implement automated decision-making with only a small reconfiguration. The article deals with the search for transformation matrices to create single combined images. A method for forming areas of significance, obtained from the analysis of the various channels, is proposed. As the primary data-processing method, a multicriteria filtering method with automated selection of processing parameters was used, based on the simultaneous minimization of the L2 norm and the first-order finite differences between the input data and the obtained values. The proposed method preserves the boundaries of objects and minimizes the noise component both on smooth local sections and near transitions. The transformations of the color ranges are carried out using a modified multirange alpha-routing algorithm. The paper proposes an algorithm for fusing images with different coefficients and a criterion for changing them in given local areas. Examples of the formation of object masks and the creation of combined images are presented for a set of test data obtained by cameras in the visible range (1024x1024 pixels, 8 bits), near-infrared (800x600 pixels, 8 bits), thermal imaging (320x240 pixels, 8 bits), and depth maps (1024x1024 pixels, 8 bits grayscale).
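As an illustration of the filtering step described above (a minimal sketch with illustrative parameters, not the authors' implementation): the quadratic variant of such a multicriteria objective, a data-fidelity term plus first-order finite-difference regularization, can be minimized by plain gradient descent.

    import numpy as np

    def multicriteria_smooth(f, lam=0.2, n_iter=200, step=0.2):
        """Minimize ||u - f||^2 + lam * (||Dx u||^2 + ||Dy u||^2) by gradient descent.

        f is the input image; lam balances noise suppression against fidelity
        to the input. Periodic boundaries are used for simplicity.
        """
        u = f.astype(float).copy()
        for _ in range(n_iter):
            # Discrete Laplacian built from first-order finite differences.
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
            u -= step * ((u - f) - lam * lap)
        return u

    # Example on a noisy synthetic frame.
    rng = np.random.default_rng(0)
    noisy = np.zeros((240, 320)) + 0.1 * rng.standard_normal((240, 320))
    smoothed = multicriteria_smooth(noisy)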