Deep neural networks have achieved impressive performance in object detection and object category classification. To perform well, however, such methods typically require a large number of training samples. Unfortunately, this requirement is highly impractical or impossible to meet in applications such as hyperspectral classification, where generating labeled training data is expensive and labor intensive. A few ideas have been proposed in the literature to address this problem, such as transfer learning and domain adaptation. In this work, we propose an alternative strategy that reduces the number of network parameters based on Structured Receptive Field Networks (SRFN), a class of convolutional neural networks (CNNs) in which each convolutional filter is a linear combination of elements from a predefined dictionary. To better exploit the characteristics of the hyperspectral data to be learned, we choose a filter dictionary consisting of directional filters inspired by the theory of shearlets, and we train an SRFN by imposing that the convolutional filters form sparse linear combinations in this dictionary. The application of our SRFN to problems of hyperspectral classification shows that this approach achieves very competitive performance compared to conventional CNNs.
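The core idea above can be illustrated with a minimal sketch. In the snippet below, a toy dictionary of 3x3 directional atoms stands in for the shearlet-inspired filters, and an effective convolutional filter is formed as a linear combination of these fixed atoms with coefficients that would be learned; the variable names (`dictionary`, `alpha`, `l1_penalty`) are illustrative, not from the paper.

```python
import numpy as np

# Toy dictionary of 3x3 directional (finite-difference) atoms; these
# stand in for the shearlet-inspired directional filters of the paper.
horiz = np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], dtype=float)
vert = horiz.T
diag1 = np.array([[-1, 0, 0], [0, 0, 0], [0, 0, 1]], dtype=float)
diag2 = np.fliplr(diag1)
dictionary = np.stack([horiz, vert, diag1, diag2])  # shape (4, 3, 3)

# In an SRFN, each convolutional filter is a learned linear combination
# of the fixed atoms; only the coefficients alpha are trained.
alpha = np.array([0.9, 0.0, 0.1, 0.0])  # mostly one orientation
filt = np.tensordot(alpha, dictionary, axes=1)  # effective 3x3 filter

# Sparsity of the combination can be encouraged by adding an L1
# penalty on alpha to the training loss (a common hypothetical choice).
l1_penalty = np.abs(alpha).sum()
```

Because only the coefficients `alpha` are learned while the dictionary stays fixed, the number of trainable parameters per filter drops from 9 (a free 3x3 kernel) to the number of dictionary atoms used, which is the parameter reduction the abstract refers to.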
In this paper, we use the ideas presented in  to construct application-targeted convolutional neural network (CNN) architectures. Specifically, we design frame filter banks consisting of sparse kernels with custom-selected orientations that can act as finite-difference operators. We then use these filter banks as the building blocks of structured receptive field CNNs  to compare baseline models against more application-oriented methods. Our tests are done on Google's Quick, Draw! data set.
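The claim that sparse oriented kernels can act as finite-difference operators can be checked with a small sketch, assuming a simple horizontal ramp image and a centered-difference kernel (both chosen here purely for illustration).

```python
import numpy as np

# Ramp image: img[y, x] = x, so the horizontal derivative is constant.
img = np.tile(np.arange(6, dtype=float), (6, 1))

# Sparse horizontally oriented kernel: a centered finite difference.
kern = np.array([-1.0, 0.0, 1.0])

# np.convolve flips its second argument, so convolving with the
# reversed kernel computes a plain correlation (the finite difference
# img[y, x+1] - img[y, x-1]) along each row.
rows = np.array([np.convolve(r, kern[::-1], mode="valid") for r in img])
# Every interior response equals 2, the constant centered difference.
```

The same construction, rotated to other custom-selected orientations, yields the directional frame filter banks used as SRFN building blocks.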