The shearlet representation forms a tight frame that decomposes a function into scales and directions and is optimally sparse for representing images with edges. An image fusion method based on the shearlet transform is proposed. First, images A and B are decomposed by the shearlet transform. Second, a pulse-coupled neural network (PCNN) is applied to the frequency subbands, and the number of output pulses from the PCNN's neurons is used to select the fusion coefficients. Finally, the inverse shearlet transform is applied to the fused coefficients to reconstruct the fused image. Experiments are performed on multi-focus, multi-sensor, medical, and multispectral images, comparing the proposed algorithm with PCNN-based wavelet, contourlet, and nonsubsampled contourlet methods. The experimental results show that the proposed algorithm not only extracts more important visual information from the source images but also effectively avoids introducing artificial information. It significantly outperforms traditional multiscale-transform image fusion methods in terms of both visual quality and objective evaluation criteria such as MI and QAB/F.
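The PCNN-based coefficient selection at the heart of the method can be sketched as follows. This is a minimal illustration assuming a simplified PCNN model (exponentially decaying threshold, fixed 3x3 linking kernel) and illustrative parameter values; the function names and constants are not taken from the paper:

```python
import numpy as np

def _neighbour_sum(Y, kernel):
    """Weighted sum of each neuron's 3x3 neighbourhood of previous pulses."""
    P = np.pad(Y, 1)
    h, w = Y.shape
    out = np.zeros_like(Y)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * P[i:i + h, j:j + w]
    return out

def pcnn_fire_counts(S, iterations=50, beta=0.2, alpha_theta=0.2, V_theta=20.0):
    """Run a simplified PCNN on a stimulus matrix S (normalized subband
    magnitudes) and return the total number of pulses per neuron.

    beta: linking strength; alpha_theta: threshold decay rate;
    V_theta: threshold amplitude added after each firing (assumed values).
    """
    h, w = S.shape
    Y = np.zeros((h, w))            # pulse output at the previous step
    theta = np.ones((h, w))         # dynamic firing threshold
    counts = np.zeros((h, w))       # accumulated firing times
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    for _ in range(iterations):
        L = _neighbour_sum(Y, kernel)        # linking input from neighbours
        U = S * (1.0 + beta * L)             # internal activity
        Y = (U > theta).astype(float)        # fire where activity beats threshold
        theta = np.exp(-alpha_theta) * theta + V_theta * Y
        counts += Y
    return counts

def fuse_subbands(cA, cB):
    """Pick, per coefficient, whichever source subband fires more often."""
    nA = pcnn_fire_counts(np.abs(cA))
    nB = pcnn_fire_counts(np.abs(cB))
    return np.where(nA >= nB, cA, cB)
```

In this sketch the same rule would be applied to each shearlet subband pair, after which the fused subbands are passed to the inverse shearlet transform; stronger coefficients produce more pulses and therefore win the selection.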