Shannon's information theory teaches us that the information gained from a measurement grows with its unpredictability: rare, flash-like signals carry far more information than repetitive waveforms, yet they are far more difficult to capture. Photonic time stretch data acquisition, invented two decades ago, has emerged as the most successful approach to single-shot measurement of transient events. This talk reviews the fundamentals of photonic time stretch and its numerous applications in science and biomedicine, as well as its role as mathematical inspiration for a new class of numerical algorithms.
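The Shannon statement can be made precise through self-information: the information content of an outcome is the negative logarithm of its probability, so an improbable (unpredictable) event carries more bits than a likely one. A minimal statement of this standard relation:

```latex
I(x) = -\log_2 P(x) \quad \text{bits},
```

so an event with probability $P(x) = 1$ (a perfectly predictable, repetitive waveform) carries zero information, while a rare transient with small $P(x)$ carries many bits.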
Proc. SPIE. 11703, AI and Optical Data Sciences II
KEYWORDS: Signal to noise ratio, Digital signal processing, Data modeling, Wavelength division multiplexing, Receivers, Machine learning, Artificial intelligence, Nonlinear filtering, Performance modeling, Binary data
Wavelength Division Multiplexing (WDM) is the key technology in the ultra-high-capacity links that form the backbone of the internet. Hundreds of data channels, each at a different wavelength, travel through a single fiber, yielding aggregate data rates exceeding many terabits per second. The fundamental limit on the data transmission rate is the optical crosstalk between channels induced by the inevitable nonlinearity of the fiber. Traditional methods for compensating the bit-error-rate degradation caused by this crosstalk include numerical backpropagation and nonlinear Volterra filtering, both implemented in the digital domain at the receiver. Backpropagation through the canonical nonlinear Schrödinger equation is computationally expensive and beyond the capability of today's DSP at the data rates at which optical networks operate. The complexity of Volterra filters scales superlinearly with the number of taps, which in turn scales with the amount of dispersion in the fiber; they are therefore not an ideal solution at high data rates. In this talk, we report on the application of machine learning, and neural networks in particular, to the compensation of optical crosstalk in WDM communication. We compare the performance of different machine learning models such as the support vector machine (SVM), the decision tree, and the convolutional neural network (CNN) in terms of the achievable bit error rate on both binary and multilevel modulated data. We further evaluate the sensitivity of the error rate to the resolution of the analog-to-digital converter (ADC) and to the signal-to-noise ratio, as well as the latency of our algorithms.
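The idea of using classifiers as nonlinear equalizers can be illustrated with a toy experiment. The sketch below is a hedged assumption, not the paper's setup: it uses an invented memoryless channel (cubic distortion plus neighbor-symbol crosstalk and Gaussian noise) and compares an SVM and a decision tree against simple hard-decision thresholding on the bit error rate of binary data.

```python
# Illustrative sketch only: a toy crosstalk-impaired binary link, NOT the
# channel model or dataset from the talk. Classifiers see a 3-tap window
# of received samples and learn to undo neighbor-induced crosstalk.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 4000
bits = rng.integers(0, 2, n)        # binary data
sym = 2.0 * bits - 1.0              # map to +/-1 symbols

# Toy channel (assumed): cubic self-distortion, strong crosstalk from the
# two neighboring symbols, and additive Gaussian noise.
x = sym + 0.2 * sym**3
x += 0.7 * np.roll(sym, 1) + 0.7 * np.roll(sym, -1)
x += 0.1 * rng.standard_normal(n)

# Feature vector: the received sample and its two neighbors, so a model
# can infer and cancel the interference pattern.
X = np.stack([np.roll(x, 1), x, np.roll(x, -1)], axis=1)

def ber(model):
    """Train on the first half of the data, report BER on the second."""
    model.fit(X[: n // 2], bits[: n // 2])
    pred = model.predict(X[n // 2 :])
    return float(np.mean(pred != bits[n // 2 :]))

# Baseline: hard-decision threshold with no equalization.
ber_hard = float(np.mean((x[n // 2 :] > 0).astype(int) != bits[n // 2 :]))
ber_svm = ber(SVC(kernel="rbf"))
ber_tree = ber(DecisionTreeClassifier(max_depth=6))

print(f"hard-decision BER: {ber_hard:.3f}")
print(f"SVM BER:           {ber_svm:.3f}")
print(f"decision-tree BER: {ber_tree:.3f}")
```

In this toy setting both learned equalizers recover most of the crosstalk-induced errors that defeat the plain threshold; the talk's actual comparison additionally covers CNNs, multilevel modulation, ADC resolution, SNR, and latency.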