Much progress has been made in recent years in almost every research area within computer vision. This has led to increased interest in applying computer vision algorithms to real-world problems such as robot navigation, driverless cars, and first-person video analysis. However, each of these real-world applications still poses significant challenges when processing degraded data, particularly when estimating motion from a single camera, a task commonly solved using optical flow. Previous studies have shown that state-of-the-art optical flow methods fail under realistic conditions such as added noise, compression artifacts, and other types of degradation. In this paper we investigate strategies for improving the robustness of optical flow to these degradations by incorporating them as data augmentations during the training and fine-tuning stages of deep learning approaches to optical flow. We evaluate these strategies on both real and simulated data and aim to bring this important area of research to the attention of the community.