The curse of dimensionality describes the difficulties of sampling high-dimensional complex systems, where the required data volume grows exponentially with dimensionality, making the acquisition of representative data difficult. Sparse data and high-dimensional spaces pose challenges because adequate sampling resolution becomes unattainable. Autoencoders, a specific class of neural networks, offer a promising strategy by learning compressed representations through nonlinear encoding and decoding, capturing essential features while discarding less relevant information. In this work, we employ an autoencoder to characterize the complex dynamics of a noise-like-pulse (NLP) fiber laser cavity. To achieve this, we apply dropout at both the input and output layers to deactivate neurons that correspond to missing data samples. By establishing links between the input polarization, controlled by three waveplates, and the broadening of the output spectrum, we find that the NLP regimes are confined to sparsely distributed polarization regions (less than 5% of the space). To map the whole polarization space, we scan two polarization dimensions within successive “slices” along the third dimension; the number of random samples decreases exponentially from slice to slice, so that only 30% of the original data are required. Our neural network is able to predict regions of interest even in the presence of this exponential decay of sampling density along one dimension. Our approach demonstrates the significant impact of autoencoders and dynamic sampling via dropout in efficiently capturing relevant information from vast datasets, and we anticipate that our results can be applied to a wide range of ultrafast systems.
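To illustrate the idea of deactivating input and output neurons at unsampled grid points, the sketch below shows one possible way such a masked autoencoder could be set up. It is a minimal illustration only, assuming PyTorch, a flattened polarization grid as input, and illustrative layer sizes and names; it is not the authors' implementation.

```python
# Minimal sketch (assumption: PyTorch; layer sizes, names, and grid size are hypothetical).
import torch
import torch.nn as nn

class MaskedAutoencoder(nn.Module):
    """Autoencoder whose input/output neurons are masked (dropout-like)
    wherever a polarization grid point has no recorded sample."""
    def __init__(self, n_points, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_points, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_points),
        )

    def forward(self, x, mask):
        # Zero out unsampled positions before encoding (input "dropout"),
        # then mask the reconstruction the same way so missing grid points
        # contribute nothing to the loss.
        z = self.encoder(x * mask)
        return self.decoder(z) * mask

# Usage: mask is 1 where a spectrum was measured, 0 elsewhere.
model = MaskedAutoencoder(n_points=4096)
x = torch.rand(8, 4096)                      # dummy spectral-broadening maps
mask = (torch.rand(8, 4096) > 0.7).float()   # e.g. only ~30% of points sampled
recon = model(x, mask)
loss = ((recon - x * mask) ** 2).sum() / mask.sum()  # average over observed points only
```

Normalizing the reconstruction loss by the number of observed points keeps the training signal comparable across slices with very different sampling densities, which is the same concern raised by the exponentially decaying sampling described above.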