From Event: SPIE LASE, 2023
Optical computing may one day provide fast and efficient calculation, with complex nonlinear interactions occurring at the speed of light. Here we use supercontinuum generation as an optical computing block within a neural network. Supercontinuum generation is an interesting candidate for optical computing because a single device can be tuned over a large range of nonlinearity by varying the laser pulse parameters. The optical block accepts inputs in the form of actuator settings for a chirped fiber Bragg grating pulse shaper, and outputs a measured supercontinuum spectrum. A small neural network before the optical block translates the input data into actuator settings, and another small network after the optical block translates the spectrum into the desired output. In this architecture, the optical block acts like a static, fixed-weight neural network, while the two translation networks are trained to adapt the optical block to the particular computational problem. As first demonstrations, we train the system to perform handwriting recognition, and use the supercontinuum as the decoder in an autoencoder network. Because temperature-controlled pulse shaping is slow, we first emulate the supercontinuum with a neural network trained on spectra measured at randomized heater settings. The emulation network is frozen and inserted in place of the optical block so that the translation layers can be trained entirely in software. The training data is then converted to spectra with the optical device, and the output translator is retrained on these measurements to compensate for emulation errors.
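The sandwich architecture described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the layer sizes, the single-layer translators, and the random fixed weights standing in for the supercontinuum (or its frozen emulator) are all assumptions for demonstration. Only the middle block's weights are held fixed; in training, gradients would update the two translators alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_tanh(x, W, b):
    # One dense layer with a tanh nonlinearity.
    return np.tanh(x @ W + b)

# Illustrative dimensions (assumed): 64-pixel input image, 8 heater
# actuators on the pulse shaper, 128-point measured spectrum, 10 classes.
n_in, n_act, n_spec, n_out = 64, 8, 128, 10

# Trainable input translator: task data -> actuator settings.
W_in, b_in = rng.normal(0, 0.1, (n_in, n_act)), np.zeros(n_act)

# Frozen "optical block": fixed random weights standing in for the
# measured supercontinuum response (or its trained emulator network).
# These weights are never updated during training.
W_opt, b_opt = rng.normal(0, 0.5, (n_act, n_spec)), np.zeros(n_spec)

# Trainable output translator: spectrum -> task output.
W_out, b_out = rng.normal(0, 0.1, (n_spec, n_out)), np.zeros(n_out)

def forward(x):
    actuators = dense_tanh(x, W_in, b_in)       # translate data to heater settings
    spectrum = dense_tanh(actuators, W_opt, b_opt)  # fixed optical transformation
    return dense_tanh(spectrum, W_out, b_out)   # translate spectrum to output

x = rng.normal(size=(1, n_in))
y = forward(x)
print(y.shape)  # (1, 10)
```

Retraining only the output translator on real measured spectra, as the abstract describes, corresponds here to refitting `W_out`/`b_out` after swapping the emulator weights for measurements from the physical device.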
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kevin F. Lee and Martin E. Fermann, "Computation using shaped supercontinuum generation within a neural network," Proc. SPIE 12405, Nonlinear Frequency Generation and Conversion: Materials and Devices XXII, 124050H (Presented at SPIE LASE: February 01, 2023; Published: 14 March 2023); https://doi.org/10.1117/12.2659759.