I will discuss the emerging trend in computational imaging of training deep neural networks (DNNs) to perform image formation. The DNNs are trained on examples consisting of pairs of known objects and their corresponding raw images: the objects are drawn from databases such as ImageNet, Faces-LFW, and MNIST, converted to complex amplitude maps, and displayed on a spatial light modulator (SLM), and the raw images are then recorded by the optical system. After training, the DNNs are capable of recovering unknown objects, i.e., objects not previously included in the training sets, from the raw images in several scenarios: (1) phase objects retrieved from intensity after lensless propagation; (2) phase objects retrieved from intensity after lensless propagation at extremely low photon counts; and (3) amplitude objects retrieved from in-focus intensity after propagation through a strong scatterer. Recovery is robust to disturbances in the optical system, such as additional defocus or various misalignments. This suggests that DNNs may form robust internal models of the physics of light propagation and detection and may generalize priors from the training set. In the talk I will discuss in more detail various methods to incorporate the physics into DNN training, and how DNN architecture and “hyper-parameters” (i.e., depth, number of units at each depth, presence or absence of skip connections, etc.) influence the quality of image recovery.
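To make scenario (1) concrete, the sketch below simulates the forward model that produces a raw image from a phase object: a unit-amplitude field with spatially varying phase is propagated in free space and only its intensity is detected. This is a minimal illustration using the angular-spectrum method; the wavelength (633 nm), pixel pitch (10 µm), propagation distance (5 cm), and the Gaussian phase "bump" are hypothetical parameters chosen for the example, not values from the talk.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Propagate a complex field by distance dz (meters) using the
    angular spectrum method for scalar free-space diffraction.
    dx is the sampling pitch (meters); evanescent components are cut off."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Transfer function: unit modulus for propagating waves, zero otherwise.
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * (dz / wavelength) * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A phase object: unit amplitude, smooth spatially varying phase (hypothetical).
n = 128
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
phase = 0.5 * np.exp(-(X ** 2 + Y ** 2) / 0.1)
obj = np.exp(1j * phase)

# "Raw image": intensity recorded after lensless propagation.
raw = np.abs(angular_spectrum_propagate(obj, wavelength=633e-9,
                                        dz=0.05, dx=10e-6)) ** 2
```

A DNN trained for this scenario would take `raw` as input and be supervised against `phase`; because the detector discards the phase of the propagated field, the network must learn both the propagation physics and object priors to invert the measurement.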