We will describe a large research effort at Lawrence Livermore National Laboratory (LLNL) aimed at using recent advances in deep learning, computational workflows, and computer architectures to develop an improved predictive model: the learned predictive model. We first train these models, typically cyclic generative adversarial networks, on simulation data to capture the theory implemented in advanced simulation codes. We then improve, or elevate, the trained models by incorporating experimental data. The training and elevation process both improves our predictive accuracy and provides a quantitative measure of uncertainty in those predictions. We will present work using inertial confinement fusion (ICF) research and experiments at the National Ignition Facility (NIF) as a testbed for development. We will describe advances in machine learning architectures and methods necessary to handle the challenges of ICF science, including rich, multimodal data (images, scalars, time series) and strong nonlinearities. We will also cover state-of-the-art tools that we developed to manage our computational workflow. These tools handle a wide range of tasks, including generating enormous (petabyte-class) simulated training data sets, driving the training of learned models on simulation data, and elevating learned models through exposure to experiment. We will close by drawing connections to a broad range of scientific applications.
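
The train-then-elevate idea can be sketched in miniature as pretraining on plentiful simulation data followed by fine-tuning on scarce experimental data. This is a hedged toy illustration, not the LLNL implementation: the one-parameter linear model, the synthetic "simulation" and "experiment" data, and the `fit_sgd` helper are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Simulation" data: abundant, generated by an approximate theory (slope ~2.0).
x_sim = rng.uniform(-1, 1, size=(1000, 1))
y_sim = 2.0 * x_sim + 0.05 * rng.normal(size=(1000, 1))

# "Experimental" data: scarce, reflecting the true response (slope ~2.3).
x_exp = rng.uniform(-1, 1, size=(20, 1))
y_exp = 2.3 * x_exp + 0.05 * rng.normal(size=(20, 1))

def fit_sgd(x, y, w, lr, epochs):
    """Fit a one-parameter linear model y = w*x by gradient descent,
    starting from the supplied weight w."""
    for _ in range(epochs):
        grad = 2.0 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

# Step 1: learn the theory from simulation.
w_sim = fit_sgd(x_sim, y_sim, w=0.0, lr=0.1, epochs=200)

# Step 2: "elevate" the pretrained model with a small experimental set.
w = fit_sgd(x_exp, y_exp, w=w_sim, lr=0.1, epochs=50)

print(f"simulation-trained slope: {w_sim:.2f}, elevated slope: {w:.2f}")
```

The pretrained weight recovers the simulation's approximate theory, and the short fine-tuning pass pulls it toward the experimentally observed response, analogous in spirit to the elevation step described above.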