Developing accurate methods to determine bathymetry, bottom type, and water-column optical properties from hyperspectral imagery is an ongoing scientific problem. Recent advances in deep learning have made convolutional neural networks (CNNs) a popular choice for classification and regression on complex datasets. In this paper, we explore the use of CNNs to extract water depth, bottom type, and inherent optical properties (IOPs) from hyperspectral imagery (HSI) of water, and we compare the CNN results to those of other machine learning algorithms: k-nearest neighbors (KNN), stochastic gradient descent (SGD), random forests (RF), and extremely randomized trees (ET). This is an inverse problem: we seek the water properties that impact the reflectance and hence the collected HSI. The data include both optically shallow water, in which the bottom is visible and contributes to the reflectance, and optically deep water, in which the bottom cannot be seen and does not affect the reflectance. The scalar optical properties we retrieve through regression are chlorophyll (CHL), colored dissolved organic matter (CDOM), and total suspended sediments (TSS). For optically shallow water, we also classify the bottom type among 114 different substrates. The results demonstrate that for finding water depth, bottom type, and IOPs in optically shallow water, the CNN outperforms the other machine learning methods. For regression of the IOPs in optically deep water, the extremely randomized trees method outperforms the CNN. We further investigate the mechanisms behind these results and discuss hyperparameter tuning strategies that may improve deep learning accuracy.
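As a rough illustration of the kind of regression comparison described above (this is not the paper's experiment: the data here are synthetic stand-ins for per-pixel HSI spectra, and the model settings are arbitrary), a minimal scikit-learn sketch comparing ET, RF, and KNN regressors on a scalar water-property target might look like:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_bands = 2000, 60  # hypothetical: pixels x spectral bands
X = rng.random((n_samples, n_bands))  # synthetic stand-in for HSI spectra

# Synthetic "IOP" target: a function of a few bands plus a little noise,
# mimicking a scalar quantity such as CHL, CDOM, or TSS.
y = (2.0 * X[:, 5] + 0.5 * X[:, 20] ** 2 - X[:, 40]
     + 0.05 * rng.standard_normal(n_samples))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Three of the classical baselines named in the abstract.
models = {
    "ET": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "KNN": KNeighborsRegressor(n_neighbors=5),
}

scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = r2_score(y_te, model.predict(X_te))
    print(f"{name}: R^2 = {scores[name]:.3f}")
```

In a real study the CNN would enter this comparison as another regressor, and performance would be evaluated separately on the optically shallow and optically deep subsets.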