Many approaches have been suggested for automatic pedestrian and car detection to cope with the large variability in size, occlusion, background, and aspect. Current deep-learning-based frameworks rely either on a proposal generation mechanism (e.g., Faster R-CNN) or on direct regression over a fixed grid of image cells (e.g., YOLO), in both cases followed by further processing with deep convolutional neural networks (CNNs). We analyze the discriminative generalized Hough transform (DGHT), which operates on edge images, for pedestrian and car detection. This analysis motivates using the DGHT as an efficient proposal generation mechanism, followed by proposal (bounding box) refinement and proposal acceptance or rejection by a deep CNN. We analyze the different components of our pipeline in detail. Owing to the low false negative rate and the small number of candidates produced by the DGHT, combined with the high accuracy of the CNN, we achieve performance competitive with the state of the art in pedestrian and car detection on the IAIR database while generating far fewer proposals than other proposal-generating algorithms, being outperformed only by YOLOv2 fine-tuned to IAIR cars. Through evaluations on further databases (without retraining or adaptation), we demonstrate the generalization capability of our pipeline.
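The two-stage detection pipeline described above can be sketched in code. This is a minimal illustration only: `dght_propose` and `cnn_refine_and_classify` are hypothetical stand-ins for the DGHT voting stage and the CNN stage, not code from the paper, and they return fixed dummy values so the control flow (propose, refine, then accept or reject by confidence) is runnable.

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

@dataclass
class Proposal:
    box: Box
    score: float  # DGHT voting score for this candidate

def dght_propose(edge_image) -> List[Proposal]:
    # Stand-in for the DGHT stage: in the real pipeline, edge pixels vote
    # for object locations via a discriminatively trained shape model.
    # Here we simply return two fixed candidates for illustration.
    return [Proposal((10, 20, 32, 64), 0.9),
            Proposal((100, 40, 30, 60), 0.3)]

def cnn_refine_and_classify(edge_image, p: Proposal) -> Tuple[Box, float]:
    # Stand-in for the CNN stage: refine the bounding box and produce an
    # acceptance confidence. Here: identity refinement, pass-through score.
    return p.box, p.score

def detect(edge_image, accept_threshold: float = 0.5) -> List[Tuple[Box, float]]:
    # Few, high-recall DGHT proposals keep the (more expensive) CNN stage cheap.
    detections = []
    for p in dght_propose(edge_image):
        box, conf = cnn_refine_and_classify(edge_image, p)
        if conf >= accept_threshold:  # accept or reject the proposal
            detections.append((box, conf))
    return detections
```

The key design point reflected here is that the DGHT's low false negative rate and small candidate count make the per-proposal CNN evaluation affordable.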