This PDF file contains the front matter associated with SPIE Proceedings Volume 12903, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
We propose and experimentally demonstrate a large-scale, high-performance photonic computing platform that simultaneously combines light scattering and optical nonlinearity. The core processing unit consists of a disordered polycrystalline lithium niobate slab assembled bottom-up from nanocrystals. Assisted by random quasi-phase-matching, nonlinear speckles emerge from the complex interplay between linear random scattering and second-harmonic generation based on the quadratic optical nonlinearity of the material. Compared to linear random projection, this nonlinear feature extraction demonstrates universal performance improvements across machine learning tasks including image classification, univariate and multivariate regression, and graph classification.
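As a toy numerical analogy (not the physical model), the difference between linear random projection and the nonlinear features described above can be sketched as follows; the matrix `W` stands in for the random scattering medium, and the squared field magnitude loosely mimics an intensity readout of a quadratic (second-harmonic-like) response:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(x, W, nonlinear=True):
    """Project input x through a fixed random 'scattering' matrix W.

    The complex matrix-vector product models linear random scattering;
    squaring the field magnitude loosely mimics the intensity of a
    quadratic (second-harmonic-like) response. Toy analogy only.
    """
    field = W @ x  # linear random projection through the medium
    return np.abs(field) ** 2 if nonlinear else np.abs(field)

n_in, n_out = 64, 256
W = (rng.standard_normal((n_out, n_in))
     + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(n_in)
x = rng.standard_normal(n_in)

lin = random_features(x, W, nonlinear=False)  # linear speckle features
nl = random_features(x, W, nonlinear=True)    # nonlinear speckle features
```

Either feature vector would then be fed to a simple trainable readout (e.g., ridge regression), with the nonlinear variant providing the richer feature map.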
Optical computing promises to play a major role in hardware chips dedicated to artificial intelligence (AI). Digital electronics, when employed in computing hardware, faces the sunset of Moore's law and the acknowledged end of Dennard scaling (the energy density of shrinking transistors). In response to these limitations, a paradigm shift towards nondigital processing is on the horizon. In optical computing devices for AI, the dominant mathematical operation is vector-matrix multiplication, but existing implementations are typically limited to very small vector and matrix sizes, and most approaches do not allow for significant scaling. In this context, our work focuses on the development of a silicon photonics tensor core with a unique scalability feature, enabling effective expansion to accommodate large matrix sizes. This scalability is essential for realizing meaningful AI accelerator products that leverage photonic hardware.
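A fixed-size tensor core is commonly scaled to large matrices by tiling: the matrix is partitioned into core-sized blocks, each block is one analog vector-matrix product, and partial sums are accumulated electronically. A minimal sketch of that general scheme (our illustration, not the paper's specific architecture):

```python
import numpy as np

def tiled_mvm(M, x, tile=8):
    """Compute y = M @ x by dispatching tile x tile blocks, as a
    fixed-size photonic tensor core would: each block is one analog
    matrix-vector product; partial results are accumulated digitally.
    Illustrative only."""
    n_out, n_in = M.shape
    y = np.zeros(n_out)
    for i in range(0, n_out, tile):
        for j in range(0, n_in, tile):
            # One "core-sized" analog MVM per block; slices clip at edges.
            y[i:i + tile] += M[i:i + tile, j:j + tile] @ x[j:j + tile]
    return y
```

The number of analog dispatches grows with the block count, but the core itself never needs to grow beyond its fixed tile size.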
Convolutional Neural Networks (CNNs) are employed in a plethora of fields, including computer vision, natural language processing, and speech recognition. We present an integrated photonic accelerator for CNNs based on the temporal-spatial interleaving of signals. The architecture supports 1D kernels and can be extended to 2D convolutional kernels, providing scalability for complex networks. A supervised on-chip learning algorithm guarantees a reliable setting of the convolutional weights against fabrication tolerances, thermal cross-talk, and changes in operating conditions. Overall, by leveraging photonics technology, the proposed accelerator significantly reduces hardware complexity while enabling high-speed processing and parallelism.
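One common way a 1D convolution engine is extended to 2D workloads is via separable kernels, where a 2D convolution factors into a row pass followed by a column pass. The sketch below illustrates that decomposition in software; it is our illustration of the general idea, not necessarily the interleaving scheme used on the chip:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D convolution: the primitive a 1D-kernel engine provides."""
    return np.convolve(signal, kernel, mode="valid")

def conv2d_separable(image, k_row, k_col):
    """Build a 2D convolution from 1D passes when the 2D kernel is
    separable (kernel = outer(k_col, k_row)): convolve every row with
    k_row, then every resulting column with k_col."""
    tmp = np.array([conv1d(row, k_row) for row in image])
    return np.array([conv1d(col, k_col) for col in tmp.T]).T
```

Non-separable kernels can still be handled as sums of separable ones, at the cost of extra 1D passes.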
As Internet-of-Things (IoT) devices continue to grow rapidly in number, developing energy-efficient memory solutions has become critically important. This paper introduces an innovative Phase Change Memory (PCM) architecture that can significantly reduce memory energy consumption in IoT devices. After highlighting the energy inefficiency of current memory designs, we explore the possibilities of leveraging PCM. We demonstrate that the benefits of exploiting PCM depend on the working frequency of the CPU and show how PCM can surpass devices based on SRAM and DRAM. Beyond serving as a replacement candidate for FLASH, PCM can thus also be utilized instead of SRAM and DRAM. We further demonstrate that additional energy savings are possible depending on the application. Ongoing work focuses on exploiting this application dependency and on further enhancing the energy efficiency of devices using PCM. Our PCM innovation enables improved functional lifetimes for non-volatile IoT edge devices and represents a major advance towards the widespread integration of photonics and electronics in IoT hardware.
A magneto-optical device provides nonvolatile operation by integrating ferromagnetic materials, enabling reconfigurable photonic integrated circuits. In addition, magneto-optical memory is expected to enable artificial neural networks with all-optical signal processing. We present integrated magneto-optical devices and discuss their applications to photonic computing.
We demonstrate an optical accelerator for convolutional neural networks using photonic tensor cores that achieves both state-of-the-art accuracy competitive with ideal floating-point models and unprecedented acceleration performance exceeding electronics by orders of magnitude. Across convolutional architectures and image datasets, the photonics-based hardware processes advanced inference workloads faster than alternative ASICs or GPUs. Additionally, power consumption and latency metrics are consistently lowered by using integrated optics, enabling real-time throughput while maintaining accuracy. By unlocking massively parallel and high-bandwidth optical matrix operations, this approach promises to revolutionize compute-intensive CNN applications spanning medical imaging, scientific computing, autonomous systems, and beyond. Fully integrated optical neural network accelerators now bring extraordinary speed, efficiency, and scalability, opening new frontiers in artificial intelligence.
This paper describes the advantages and disadvantages of adapting the U-Net architecture from a traditional GPU to a 4f free-space optical environment. The implementation is based on an optical acceleration approach called FatNet, and this adaptation is therefore called Fat-U-Net. Fat-U-Net omits the pooling operations of U-Net but maintains a similar number of weights and pixels per layer as U-Net. Our results demonstrate that the conversion to Fat-U-Net offers a significant improvement in speed for segmentation tasks, with Fat-U-Net achieving a remarkable ×538 acceleration in inference compared to U-Net when both are run on optical devices, and a ×37 acceleration in inference compared to U-Net on a GPU. The performance loss after conversion remains minimal on two datasets, with reductions of 4.24% in IoU for the Oxford-IIIT Pet dataset and 1.76% in IoU for HeLa cell nucleus segmentation.
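For reference, the IoU figures quoted above come from the standard intersection-over-union segmentation metric, which for binary masks can be computed as:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```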
The correct identification of minerals is a crucial task for the exploration and exploitation of mineral resources, environmental monitoring, and industrial processes. In this article, we propose a hyperspectral imaging system and classification model to identify nine types of minerals. To accomplish this, we employed a hyperspectral shortwave infrared (SWIR) camera to capture hyperspectral images. We then introduce a convolutional neural network (CNN) architecture that considers only spectral data, complemented by a fully connected network for classification. To prevent overfitting, we implemented the dropout technique, which randomly deactivates neurons during training. This results in improved performance during the training phase and better generalization capacity. Training was optimized to minimize the categorical cross-entropy objective function, and the model was evaluated during training using an accuracy metric. Finally, we evaluated the results on the test data using accuracy, recall, and precision metrics, achieving 98.52%, 98.25%, and 98.68%, respectively. Our source code is available at https://github.com/jcifuenr/Spec-CNN.
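The dropout technique mentioned above can be sketched as follows; this is "inverted" dropout, a common variant, and the paper's exact implementation may differ:

```python
import numpy as np

def dropout(activations, p=0.5, training=True, rng=None):
    """Inverted dropout: during training, randomly zero a fraction p of
    activations and rescale the survivors by 1/(1-p) so the expected
    activation matches inference, where the layer is an identity."""
    if not training:
        return activations
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(activations.shape) >= p  # keep with probability 1-p
    return activations * mask / (1.0 - p)
```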
Leveraging the power of machine learning, we introduce a breakthrough approach in high-volume manufacturing of photonics chips for advanced applications. Despite the transformative potential of photonics in many industries, its widespread adoption has been hindered by multiple challenges in the fabrication of complex integrated chips. We deployed machine learning models with diverse architectures at every stage of our manufacturing process to overcome these challenges. Inevitable variations in the fabrication process often lead to performance variability among photonics chips on a single wafer and across different wafers. We effectively overcome this challenge by employing a deep neural network to study the variability in the performance of individual chips, enabling us to predict the precise optimizations necessary to compensate for inevitable process variations. We describe our selection of the deep neural network architecture that addresses this challenge, our methodology for obtaining a high-quality dataset for training, and the enhancements in performance uniformity achieved through machine-learning-enhanced production masks. Moreover, our use of machine learning has allowed us to bypass the time-consuming and labour-intensive process of optical chip testing, which significantly limits the scalability of photonic deployments in high-volume applications. As a powerful alternative to such testing, we developed a new technology that relies on a wafer probe that collects metrology data from a multitude of locations on an undiced wafer. Utilizing a support vector machine (SVM), we analyze this metrology data and employ nonlinear binary classification to accurately predict the performance of hundreds of chips on a wafer across various metrics. We describe the approach employed for data collection to train the model, the trade-offs involved in hyperparameter tuning, and our methodology for evaluating the predictive quality of the binary classifiers.
Additionally, we highlight the new capability of in-situ monitoring of wafer fabrication, which enables high-volume production and deployment of photonic solutions.
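The predictive quality of a pass/fail chip classifier such as the one described can be summarized from its confusion matrix. A minimal sketch (metric names and choices are ours; the paper's exact evaluation methodology may differ):

```python
import numpy as np

def binary_classifier_metrics(y_true, y_pred):
    """Confusion-matrix summary for a binary (pass/fail) classifier."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # correctly predicted passes
    fp = np.sum(~y_true & y_pred)   # failures predicted as passes
    fn = np.sum(y_true & ~y_pred)   # passes predicted as failures
    tn = np.sum(~y_true & ~y_pred)  # correctly predicted failures
    return {
        "accuracy": (tp + tn) / y_true.size,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```

In a wafer-screening setting, precision and recall matter more than raw accuracy when pass/fail classes are imbalanced.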
Hyperspectral (HS) imaging enables the acquisition of color information beyond human perception by utilizing rich spatial-spectral information. However, existing approaches to HS imaging face challenges in practical application due to issues with sensitivity, resolution, and frame rate. In this paper, we report a high-sensitivity, high-resolution HS imaging system operating at video rate (30 fps). HS imaging is achieved through a compressive-sensing approach using a spatial-spectral coded mask and an image reconstruction process, where the coded mask has spatially and spectrally random transmittance to reconstruct HS images. We defined the randomness required for the coded mask by simulating the effects of spatial and spectral randomness on the reconstruction results. A coded mask satisfying the spatial-spectral randomness was designed by optimizing the structure of Fabry-Pérot resonators and fabricated using a standard semiconductor manufacturing process. The fabricated coded mask was implemented on an image sensor to work as a camera. The experimentally measured sensitivity and spatial resolution are comparable to those of RGB cameras, and the frame rate reaches 30 fps at QVGA resolution with 27 wavelength bands. In addition, by implementing the system in a commercially available digital camera, we have developed a user-friendly HS imaging device with features such as auto-focus, auto-exposure, and battery power. Our HS imaging system, with its high performance and usability, holds great potential for various business scenarios, including consumer applications such as smartphones, drones, and IoT devices.
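The coded-mask measurement model underlying this compressive-sensing approach can be sketched in a few lines: each sensor pixel records the scene's spectrum weighted by that pixel's random spectral transmittance, producing a single monochrome frame from which the HS cube is later reconstructed. Dimensions below are illustrative, far smaller than the real QVGA, 27-band case:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model of coded-mask snapshot hyperspectral imaging.
H, W, B = 8, 8, 5                        # height, width, spectral bands
scene = rng.random((H, W, B))            # ground-truth hyperspectral cube
mask = rng.random((H, W, B))             # spatially/spectrally random transmittance

# Each pixel integrates its spectrally weighted scene: one monochrome frame.
measurement = (mask * scene).sum(axis=2)
```

Reconstruction then solves this underdetermined inverse problem (B unknowns per pixel from one measurement) using a sparsity or learned prior; that step is omitted here.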
Multimode fibers support many guided modes owing to their large core diameter, which allows them to transport images from one point to another. However, due to mode and polarization mixing, they randomize any information at their input to form speckle patterns. The original images can be successfully reconstructed from these patterns using methods such as phase conjugation, transmission matrix measurement, and deep learning. Deep learning techniques are attractive because they are less time-consuming and do not require complex phase measurements and setups. Recently, researchers have used attention blocks with a U-Net architecture to regenerate images from speckles; in that case, the average structural similarity index measure (SSIM) of the reconstructed images is 0.8772, but the network is complex and has high computational costs. As conditional generative adversarial networks (CGANs) produce better results for image-to-image translation problems, scientists have used them to reconstruct images from speckles with an average SSIM of 0.8686. We have designed a CGAN model that is fast (1 hour training time, 9.4 ms inference time), stable (no mode collapse), and produces high-accuracy results. We created our own datasets by sending 60,000 images (MNIST and Fashion-MNIST) to the fiber input using a spatial light modulator while simultaneously recording the speckle patterns on a camera. The average SSIM achieved in our case is 0.9010 for 5,000 unseen MNIST test images, which is greater than the previously reported values. The high-fidelity and fast imaging offered by our CGAN model opens the potential for developing thin, minimally invasive endoscopes using multimode fibers.
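The SSIM scores quoted above come from the structural similarity index. A simplified single-window version (the standard metric applies this statistic over a sliding window and averages) shows the quantity being compared:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Simplified single-window SSIM (no sliding window): compares mean
    luminance, contrast, and structure of two images with dynamic range L.
    The constants C1, C2 follow the usual (0.01*L)^2, (0.03*L)^2 choice."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + C1) * (2 * cov + C2))
            / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))
```

Identical images score 1.0; statistically unrelated images score near 0.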
Here, we explore the forefront of optical dynamic real-time signal processing by introducing a Reconfigurable Complex Convolution Module (RCCM) that leverages Michelson interferometric modulation and spatial light modulators (SLMs) for enhanced computational performance. By implementing a 4f-interferometer configuration, the RCCM facilitates simultaneous two-dimensional fully complex convolutions, showcasing the ability to perform intricate amplitude and phase modulation within the Fourier domain. This approach significantly advances optical computing by demonstrating the RCCM's capacity for parallel processing, superior speed, and energy efficiency. Our evaluation emphasizes the module's impact on computational efficiency and throughput, highlighting its potential to improve on current computational paradigms by offering real-time reconfigurability, reduced energy consumption, and increased processing speeds. The integration of these optical computing techniques sets a new standard in addressing complex computational challenges, indicating a substantial leap towards high-speed, energy-efficient computing solutions.
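A digital stand-in for the 4f operation described above is multiplication by a complex filter in the Fourier plane. The sketch below mimics the amplitude-and-phase modulation an SLM would impose between the two lenses (illustrative only, not the RCCM's optical implementation):

```python
import numpy as np

def fourier_convolve(field, kernel_ft):
    """Digital analogue of a 4f correlator: transform the input field to
    the Fourier plane, multiply by a complex (amplitude + phase) filter
    such as an SLM would impose, and transform back to the image plane."""
    return np.fft.ifft2(np.fft.fft2(field) * kernel_ft)
```

By the convolution theorem, this pointwise Fourier-plane product is exactly a 2D circular convolution of the input field with the filter's impulse response.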
Non-contact measurements using digital cameras require reliable camera calibration, typically based on a pinhole camera model with a few lower-order distortions (up to 14 parameters in OpenCV). Calibration quality is typically judged by the re-projection error (RPE). We use the forward propagation error (FPE), which determines the deviation in real-world coordinates using parameters from the camera calibration. With our machine-learning-inspired workflow, we identify possible outliers for a more reliable camera calibration. We explore the quality of our camera calibration using the RPE and FPE with a series of active (emissive display) and passive (illuminated printed paper) checkerboard patterns as well as active cosine phase-shifting patterns. We compare different camera models, different patterns, numbers of grid points, and different distances for the phase-shifting patterns by comparing results from our simulations and experiments. We found that the 5-parameter OpenCV model was sufficient for a "good" camera calibration. In addition, an active checkerboard pattern displayed on a monitor is better than a passive checkerboard mounted on a stiff flat plate. Both the active checkerboard and the active phase-shifting patterns are limited in terms of FPE only by our target UHD monitor with its pixel pitch of about 0.37 mm. We found that both active patterns give a good camera calibration through a correct generation of poses. The active checkerboard pattern shows good results of 0.16 px (RPE) and 0.06 mm (FPE) and can easily be interpreted with the FPE. Both values (RPE and FPE) are lower than the "real-world uncertainty".
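For clarity, the RPE quoted above is the root-mean-square pixel distance between the detected grid points and the calibrated model's re-projection of their 3D positions; a minimal sketch:

```python
import numpy as np

def reprojection_error(observed_px, reprojected_px):
    """RMS re-projection error in pixels: root mean square of the
    Euclidean distances between detected grid points (N x 2) and the
    calibration model's re-projection of their 3D positions (N x 2)."""
    d = np.asarray(observed_px) - np.asarray(reprojected_px)
    return np.sqrt((d ** 2).sum(axis=1).mean())
```

The FPE, by contrast, is evaluated in real-world coordinates after forward-propagating through the calibrated model rather than in the image plane.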
Single-pixel imaging, which enables imaging with a single-pixel detector and a correlation method, can be accelerated by incorporating machine learning. In addition, estimation accuracy can be improved by exploiting the uncertainty of the machine-learning estimates. We constructed the machine-learning algorithm from a physical perspective based on the errors in the measurement system. To improve the reliability of the machine-learning estimates, their uncertainty was evaluated using standard deviations derived from data augmentation. By using the value with the lowest uncertainty as the final estimate, we improved the machine-learning results and achieved measurements with a small number of illumination patterns.
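The correlation method that single-pixel imaging starts from can be sketched as a differential ghost-imaging reconstruction: correlating the bucket (single-pixel detector) signal with the known illumination patterns. A toy example with a flattened 1D "object" (our illustration of the classical baseline, not the paper's learned algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

n_px, n_meas = 16 * 16, 4000
patterns = rng.random((n_meas, n_px))   # known random illumination patterns
obj = np.zeros(n_px)
obj[40:60] = 1.0                        # simple flattened "object"
bucket = patterns @ obj                 # single-pixel (bucket) measurements

# Correlation reconstruction: covariance of the bucket signal with each
# pattern pixel recovers the object up to scale and noise.
recon = (bucket[:, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)
```

Machine learning replaces or refines this correlation estimate, which is what allows the number of illumination patterns to shrink well below n_px.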
The ever-evolving field of materials design and discovery has been revolutionized by the emergence of data-driven algorithms for the generative design of materials and the exploration of structure-property relationships. In particular, AI-guided design frameworks have been successfully applied to the field of artificially structured electromagnetic composites known as metamaterials, where their use has not only alleviated the computational burden associated with first-principles simulations but also facilitated faster, more efficient sampling of vast parameter spaces to converge on a solution. MetaDesigner is a user-friendly web application that simplifies and automates the inverse design of metamaterials, i.e., a tool powered by generative and discriminative deep learning models that enables 'design-by-specification'. The practical application of this framework is exemplified by the successful end-to-end design of a metamaterial broadband absorber as well as the demonstration of a plasmonic metasurface for generating structural color 'at will'. We envision that MetaDesigner's user-friendly interface will accommodate users with varying levels of expertise by providing access to multiple inverse algorithms and will play a pivotal role in expediting the design and exploration of metamaterial-based devices. As this work is still under development and the technologies underpinning it are expected to change over time, this abstract is aimed primarily at explaining the overall philosophy and design goals of the project.
Digital Lensless Holographic Microscopy (DLHM) is a phase imaging modality that omits lenses and other bulky hardware when recovering information from microscopic objects. Deep learning models have recently been used to substitute for traditional DLHM reconstruction algorithms and to classify samples from the reconstructed amplitude and phase images. In this work, we investigate using these models to classify diatom samples directly, circumventing the reconstruction process altogether. We validate our approach on a simulated DLHM dataset by comparing the performance of three typical learning-based image-processing models: AlexNet, VGG16, and ResNet-18.
Analog photonic processing is an attractive computation engine for machine learning. Here, we show recent progress on scaling up analog photonic platforms, including a large-scale WDM-based matrix-vector processor and an on-chip photonic linear processor, as well as their application to reservoir computing and hardware-oriented training. Our approach scales analog photonic processing up towards the fundamental Nyquist limit.
Here, we demonstrate how low-dimensional materials can revolutionize photodetector and electro-optic modulator performance by adjusting critical properties such as bandgap, work function, and electron mobility. Utilizing scaling-length theory, we detail our progress in creating photodetectors with high gain-bandwidth products, including the integration of a metallic slot in a silicon photonic waveguide to improve carrier-lifetime-to-transit-time ratios. We also unveil a zero-bias operable 2D material PN junction photodetector that significantly reduces dark currents, enhancing noise-equivalent power performance. Additionally, our findings explore the compatibility of these advances with flexible substrates, potentially integrating them into Photonic Integrated Circuits (PICs) for compact, efficient, and integrated devices. This work not only aligns with current nanophotonic trends and wearable technology but also aims to redefine optoelectronic device efficiency for the next generation of PICs.
Extremizing a quadratic form can be computationally straightforward or difficult depending on the feasible domain over which the variables are optimized. For example, maximizing E = x^T V x for a real-symmetric matrix V with x constrained to the unit ball in R^N can be performed simply by finding the maximum (principal) eigenvector of V, but can become computationally intractable if the domain of x is limited to the corners of the ±1 hypercube in R^N (i.e., x is constrained to be a binary vector). Many gain-loss physical systems, such as coherently coupled arrays of lasers or optical parametric oscillators, naturally solve minimum/maximum eigenvector problems (of a matrix of coupling coefficients) in their equilibration dynamics. In this paper we discuss recent case studies on the use of added nonlinear dynamics and real-time feedback to enforce constraints in such systems, making them potentially useful for solving difficult optimization problems. We consider examples in both classical and quantum regimes of operation.
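The contrast drawn above can be made concrete in a few lines: the unit-ball maximum is just the largest eigenvalue of V, while the hypercube-corner maximum in general requires searching all 2^N corners:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
N = 8
A = rng.standard_normal((N, N))
V = (A + A.T) / 2  # random real-symmetric matrix

# Unit-ball maximum of x^T V x: the largest eigenvalue, attained at the
# principal eigenvector -- cheap to compute (eigh returns ascending order).
evals, evecs = np.linalg.eigh(V)
ball_max = evals[-1]

# Hypercube-corner maximum (x in {-1,+1}^N): brute force over all 2^N
# corners, which is what makes the binary-constrained problem hard at scale.
corner_max = max(np.asarray(x) @ V @ np.asarray(x)
                 for x in product([-1.0, 1.0], repeat=N))
```

Since each corner has squared norm N, the normalized corner optimum corner_max / N can never exceed ball_max, but finding which corner attains it is the hard part.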
Untrained physics-based deep learning (DL) methods for digital holography have gained significant attention due to their benefits: they require no annotated training dataset and provide interpretability, since they utilize the governing laws of hologram formation. However, they are sensitive to the hard-to-obtain precise object distance from the imaging plane, posing the autofocusing challenge. Conventional solutions reconstruct image stacks for different candidate distances and apply focus metrics to select the best result, which is computationally inefficient. In contrast, recently developed DL-based methods treat autofocusing as a supervised task, which again needs annotated data and lacks generalizability. To address this issue, we propose a reverse-attention loss: a weighted sum of the losses for all possible candidates with learnable weights. This is a pioneering approach to addressing the autofocusing challenge in untrained deep-learning methods. Both theoretical analysis and experiments demonstrate its superiority in efficiency and accuracy. Notably, our method delivers significantly better reconstruction performance than rival methods (i.e., alternating descent-like optimization, non-weighted loss integration, and random distance assignment) and is almost equal to that achieved with a precisely known object distance. For example, the difference is less than 1 dB in PSNR and 0.002 in SSIM for the target sample in our experiment.
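Generically, a loss of this shape can be sketched as a softmax-weighted sum of per-candidate-distance losses with learnable weights; the function name and the softmax parameterization below are our assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def weighted_candidate_loss(losses, logits):
    """Hedged sketch: combine the reconstruction losses for all candidate
    object distances into one training loss, weighted by a learnable
    softmax over `logits` (optimized jointly with the network).
    Names are ours, not the paper's."""
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w /= w.sum()
    return float((w * losses).sum()), w
```

As training proceeds, the weights can concentrate on the candidate distance whose loss is most consistent with the hologram physics, avoiding both a full focus-stack sweep and supervised distance labels.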
Holographic optical elements (HOEs) are based on the principle of holography and can implement arbitrary optical functions such as convex lenses and concave mirrors. The performance of HOEs is expected to be enhanced by the cooperative operation of multiple HOEs. However, designing multiple HOEs is difficult with conventional design methods such as ray-tracing software. We introduce an HOE design for cooperative operation based on machine learning. In this work, we implemented a diffractive deep neural network (D2NN) to realize the cooperative operation of multiple HOEs at visible wavelengths. A D2NN is a kind of optical neural network represented by light propagation and implemented with multiple diffractive elements that can represent arbitrary optical functions. However, multilayer HOEs cause noise to overlap on the output wavefront, since each HOE generates unwanted light such as the direct (zeroth-order) beam and higher diffraction orders. We therefore implemented a D2NN consisting of two layers of HOEs as an off-axis D2NN, which avoids this obstacle. The two-layer HOEs were trained to perform a classification task on handwritten digits. The trained D2NN model with HOEs was evaluated in a numerical simulation, achieving 87.1% accuracy. The method enables the design of cooperatively operating multiple HOEs, allowing HOEs to achieve more complex and higher-performance functions.
Needle-thin optical fibre imaging systems using multimode fibre show considerable potential for advanced medical endoscopes that can capture high-resolution images in challenging regions of the body, such as the brain or blood vessels. However, these systems suffer significant optical distortion whenever the fibre is disturbed. It is therefore crucial to calibrate the fibre transmission matrix (TM) in vivo immediately before imaging, since the TM is highly sensitive to temperature variations and bending. We present a reflection-mode TM reconstruction model based on U-Net convolutional neural networks with a custom loss function that compensates for an arbitrary global phase, reducing computational time to ~1 s. We demonstrated the model by reconstructing 64 × 64 complex-valued fibre TMs through a reflection-mode optical fibre system and tested it by reconstructing widefield images with ≤ 9% image error. We anticipate that this neural-network-based TM reconstruction model, together with its custom loss function, will lead to new AI models that handle phase information, for example in imaging through optical fibre and in holographic imaging and projection, where both phase control and speed are required.
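A loss that is invariant to an arbitrary global phase, as described above, can be written in closed form: minimising ||A − e^{iφ}B||² over the global phase φ gives ||A||² + ||B||² − 2|⟨A, B⟩|. The NumPy sketch below illustrates that identity; it is an assumption about how such a loss can be realized, not the authors' implementation:

```python
import numpy as np

def phase_invariant_loss(pred, target):
    """Squared error between two complex arrays, minimised over an
    arbitrary global phase: ||A||^2 + ||B||^2 - 2|<A, B>|."""
    inner = np.vdot(target, pred)  # conjugates target, flattens both arrays
    return float(np.linalg.norm(pred) ** 2
                 + np.linalg.norm(target) ** 2
                 - 2.0 * np.abs(inner))

rng = np.random.default_rng(0)
# A synthetic 64 x 64 complex "transmission matrix" and a globally
# phase-shifted copy of it.
tm = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
shifted = tm * np.exp(1j * 1.3)
```

An ordinary squared error would penalise the phase-shifted copy heavily, whereas this loss treats it as a perfect reconstruction, which is the behaviour needed when the measurement only defines the TM up to a global phase.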
With applications from photonics to seismology, wave scattering is ubiquitous in physics. Yet, to study scattering in highly heterogeneous materials, evidence must be obtained from theoretical approximations and surface measurements. Numerical approaches can offer insight into the wave behavior deep within a complex structure; however, the large scale, with respect to the short wavelength of light, of most systems of interest makes photonic simulations some of the most challenging numerical problems. Memory and time constraints typically limit coherent light scattering calculations to the micrometer scale in 2D and to the nanoscale in 3D. The study of large photonic structures, or scattering in biological samples larger than a few cells, remains out of reach of conventional computational methods. Here, we highlight a connection between the wave equation that governs light scattering and the structure of a recurrent network. A one-to-one correspondence enables us to leverage efficient machine learning infrastructure and address coherent scattering problems on an unprecedented scale.
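The correspondence above can be illustrated with the explicit finite-difference update of the wave equation: each time step depends only on the two previous states, exactly like a fixed-weight recurrent cell unrolled in time. A 1D toy sketch (the grid size, initial pulse, and Courant number are illustrative, and this is a general finite-difference recurrence, not the paper's implementation):

```python
import numpy as np

def wave_step(u_now, u_prev, c2dt2_dx2):
    """One explicit time step of the discrete 1D wave equation with
    periodic boundaries. Viewed as a recurrent cell, the 'hidden state'
    is the pair (u_now, u_prev) and the weights are fixed by physics."""
    lap = np.roll(u_now, 1) - 2 * u_now + np.roll(u_now, -1)
    return 2 * u_now - u_prev + c2dt2_dx2 * lap

n = 200
u_prev = np.exp(-0.01 * (np.arange(n) - n // 2) ** 2)  # Gaussian pulse
u_now = u_prev.copy()
total0 = u_now.sum()

# Unrolling the recurrence propagates the pulse, just as unrolling an
# RNN processes a sequence; 0.25 keeps the scheme CFL-stable.
for _ in range(100):
    u_now, u_prev = wave_step(u_now, u_prev, 0.25), u_now
```

The practical payoff of the correspondence is that such unrolled updates map directly onto machine-learning frameworks, with their optimized batching, hardware acceleration, and differentiability.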
The curse of dimensionality describes the difficulty of sampling high-dimensional complex systems: data volume grows exponentially with dimension, making representative data hard to acquire. Sparse data and high-dimensional spaces pose challenges due to unattainable sampling resolution. Autoencoders, a specific class of neural networks, offer a promising strategy by learning compressed representations through nonlinear encoding and decoding, capturing essential features while discarding less relevant information. In this work, we employ an autoencoder to characterize the complex dynamics of a noise-like-pulse (NLP) fiber laser cavity. To achieve this, we leverage dropout at both the input and output layers to deactivate neurons for which no data sample exists. By establishing links between the input polarization, controlled by three waveplates, and the broadening of the output spectrum, we find that only sparsely distributed polarization regions (less than 5%) are associated with the NLP regimes. To map the whole polarization space, we scan two polarization dimensions defined by “slices”, and, while recording slices along the third dimension, the number of random samples decreases exponentially from slice to slice, requiring only 30% of the original data. Our neural network predicts regions of interest even in the presence of this exponential decay of sampling density along one dimension. Our approach demonstrates the significant impact of autoencoders and dynamic sampling via dropout in efficiently capturing relevant information from vast datasets, and we anticipate that our results can be applied to a wide range of ultrafast systems.
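Deactivating neurons that have no data sample amounts to evaluating the reconstruction loss only on observed entries, so missing samples contribute no error and no gradient. A minimal sketch of such a masked loss follows; the data and mask are synthetic stand-ins, not the laser measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

def masked_mse(pred, target, mask):
    """Mean squared error over observed entries only (mask == 1).
    Zeroing the residual where mask == 0 mimics input/output dropout
    for unsampled neurons: those entries carry no loss and no gradient."""
    diff = (pred - target) * mask
    return float((diff ** 2).sum() / max(mask.sum(), 1))

target = rng.normal(size=(8, 8))
mask = (rng.uniform(size=(8, 8)) < 0.3).astype(float)  # ~30% of entries observed
# A prediction that is wrong ONLY at unobserved entries.
pred = target + (1.0 - mask)
```

With this loss, the autoencoder is trained purely on the sparsely sampled polarization settings, while its compressed representation fills in the unmeasured regions at inference time.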
In recent years, generative AI has made remarkable strides, enabling the creation of novel designs and images of exceptional quality. This study aims to enhance the performance of a conditional autoencoder, a type of generative deep learning framework. Our primary focus lies in applying these techniques to improve the design of metagratings. By harnessing the power of generative modeling and Bayesian optimization, we can generate optimized designs for metagratings, thereby enhancing their functionality and efficiency. Additionally, through the use of transfer learning, we adapt the network originally designed for transverse-electric (TE) modes to encompass transverse-magnetic (TM) modes. This adaptation spans a wide range of deflection angles and operating wavelengths, with minimal additional training data required. This versatile black-box approach has broad applications in the inverse design of various photonic and nanophotonic devices.
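The overall loop pairs a conditional decoder, which maps a latent vector plus a condition (deflection angle, wavelength) to a candidate design, with an optimizer searching the latent space for high performance. The toy sketch below is entirely hypothetical: random search stands in for Bayesian optimization, a smooth analytic function stands in for the electromagnetic solver's efficiency, and the decoder weights are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(16, 6)) * 0.3  # stand-in for trained decoder weights

def decode(z, condition):
    """Hypothetical conditional decoder: latent z (4,) concatenated with
    a condition vector (angle, wavelength) -> 16 design values in (0, 1)."""
    x = np.concatenate([z, condition])
    return 1.0 / (1.0 + np.exp(-W @ x))  # sigmoid keeps design physical

def surrogate_efficiency(design):
    """Toy stand-in for a solver-evaluated deflection efficiency."""
    return float(np.cos(design).mean())

condition = np.array([0.6, 1.55])  # illustrative angle (rad) and wavelength (um)
best_z, best_eff = None, -np.inf
for _ in range(200):  # random search standing in for Bayesian optimization
    z = rng.normal(size=4)
    eff = surrogate_efficiency(decode(z, condition))
    if eff > best_eff:
        best_z, best_eff = z, eff
```

In the actual workflow, a Gaussian-process surrogate would propose latent points to evaluate, and the TE-trained decoder would be fine-tuned via transfer learning to serve TM conditions.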