Deep convolutional neural networks have led to significant improvements over previous salient object detection systems. Existing deep models are trained end-to-end and predict saliency by regressing pixel values, so the resulting saliency maps are typically blurry. Our Pixel-wise Binary Classification Network (PBCN) instead treats salient object detection as a pixel-level binary classification problem: saliency versus background. To increase the resolution of the output feature maps and obtain denser features, Hybrid Dilated Convolution (HDC) is employed in PBCN. Hybrid Dilation Spatial Pyramid Pooling (HDSPP) is then proposed to extract denser multi-scale image representations; it consists of one 1×1 convolution and several dilated convolutions with different rates, whose output feature maps are fused. Finally, softmax is introduced in place of sigmoid to implement the binary classification. Experiments on four datasets show that PBCN significantly improves on the state of the art.
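The motivation for mixing dilation rates in HDC can be illustrated with a small sketch: stacking dilated convolutions whose rates share a common factor samples the input on a sparse grid (the "gridding" problem), while mixed rates cover it densely. The specific rates and the 1-D simplification below are illustrative assumptions, not the paper's exact configuration.

```python
def coverage(rates, k=3):
    """Return the set of input offsets reachable (in a 1-D slice) by
    stacking k-tap dilated convolutions with the given dilation rates."""
    reach = {0}
    for r in rates:
        # a k-tap kernel with dilation r sees offsets t*r for t in [-k//2, k//2]
        reach = {p + r * t for p in reach for t in range(-(k // 2), k // 2 + 1)}
    return reach

# Rates sharing a common factor leave a "gridding" pattern of holes:
grid = coverage([2, 2, 2])   # only even offsets are ever sampled
hdc = coverage([1, 2, 5])    # mixed rates sample every offset in range

print(sorted(grid))                              # [-6, -4, -2, 0, 2, 4, 6]
print(all(p in hdc for p in range(-8, 9)))       # True
```

With uniform rate 2 every reachable offset is even, so half the input pixels never contribute; the mixed-rate stack reaches every offset within its receptive field, which is the density property HDC relies on.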
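The final softmax step can be sketched as follows: for the two classes (background, saliency), a per-pixel two-channel softmax is equivalent to a sigmoid applied to the difference of the two logits. This minimal pure-Python sketch assumes scalar per-pixel logits for illustration; the network itself operates on whole feature maps.

```python
import math

def softmax2(z_bg, z_fg):
    """Two-channel softmax: probability of the saliency class
    given background and foreground logits for one pixel."""
    m = max(z_bg, z_fg)          # subtract the max for numerical stability
    e_bg = math.exp(z_bg - m)
    e_fg = math.exp(z_fg - m)
    return e_fg / (e_bg + e_fg)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Two-class softmax reduces to sigmoid of the logit difference:
p_soft = softmax2(0.3, 1.5)
p_sig = sigmoid(1.5 - 0.3)
print(abs(p_soft - p_sig) < 1e-12)   # True
```

The two formulations are mathematically interchangeable for binary prediction; casting the output as an explicit two-class softmax matches the paper's framing of saliency detection as pixel-wise classification rather than regression.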