Blurred face recognition remains a challenging task with wide applications. Image blur can severely degrade recognition performance. Local phase quantization (LPQ) was proposed to extract blur-invariant texture information and, when applied to blurred face recognition, achieved good performance. However, LPQ captures only phase-based blur-invariant texture information, which is not sufficient. In addition, LPQ is extracted holistically, which cannot fully exploit its discriminative power over local spatial properties. In this paper, we propose a novel method for blurred face recognition. Texture and structure blur-invariant features are extracted and fused to form a more complete description of the blurred image. For the texture blur-invariant feature, LPQ is extracted in a densely sampled manner, and the vector of locally aggregated descriptors (VLAD) is employed to enhance its performance. For the structure blur-invariant feature, the histogram of oriented gradients (HOG) is used. To further enhance its blur invariance, we improve HOG by eliminating weak gradient magnitudes, which are more sensitive to image blur than strong gradients. The improved HOG is then fused with the original HOG by canonical correlation analysis (CCA). Finally, the texture and structure features are fused by CCA to form the final blur-invariant representation of the face image. Experiments on three face datasets demonstrate that both our improvements and the proposed method achieve good performance in blurred face recognition.
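To illustrate the idea of eliminating weak gradient magnitudes before building the orientation histogram, the following is a minimal numpy sketch of a single-patch descriptor. The relative threshold rule, its value, and the bin count are assumptions for illustration, not the paper's actual parameters; a full HOG would additionally use cell/block structure and block normalization.

```python
import numpy as np

def improved_hog(patch, mag_threshold=0.1, n_bins=9):
    """Orientation histogram that discards weak gradients.

    Gradients whose magnitude falls below `mag_threshold` times the
    maximum magnitude are zeroed before voting, since weak gradients
    are more sensitive to image blur than strong ones.
    `mag_threshold` is an assumed parameter, not taken from the paper.
    """
    img = patch.astype(np.float64)
    # Central-difference gradients (rows, then columns).
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees.
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    # Suppress weak gradient magnitudes (the proposed improvement).
    mag[mag < mag_threshold * mag.max()] = 0.0
    # Magnitude-weighted orientation histogram over the whole patch.
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0),
                           weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

In this sketch the unmodified histogram (threshold set to 0) plays the role of the original HOG, so the two variants could then be paired for CCA fusion as described above.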
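The VLAD aggregation step applied to the densely sampled texture features can be sketched as follows; the inputs (per-patch descriptor vectors and a k-means codebook) and the signed square-root normalization are standard VLAD choices assumed here, not details from the paper.

```python
import numpy as np

def vlad_encode(descriptors, centroids):
    """VLAD: sum residuals of local descriptors to their nearest
    codebook centroid, then power- and L2-normalize.

    descriptors: (n, d) local features (e.g. patch-level summaries of
    dense LPQ); centroids: (k, d) visual vocabulary. Both inputs are
    illustrative assumptions.
    """
    k, d = centroids.shape
    # Nearest-centroid assignment for every descriptor.
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :],
                           axis=2)
    assign = np.argmin(dists, axis=1)
    vlad = np.zeros((k, d))
    for i in range(k):
        members = descriptors[assign == i]
        if len(members):
            vlad[i] = (members - centroids[i]).sum(axis=0)
    v = vlad.ravel()
    # Power (signed square-root) normalization, then global L2.
    v = np.sign(v) * np.sqrt(np.abs(v))
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

The resulting k*d-dimensional vector is the enhanced texture representation that would be fused with the structure feature by CCA.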