Most image-processing algorithms treat an image homogeneously, whereas the human visual system processes the retinal image differently in the foveal and peripheral visual fields. It is well established that the contrast sensitivity, spatial resolution, and color discrimination of the visual system decline from the fovea to the periphery. Moreover, recent psychophysical results have shown that the spatial interactions between neighboring parts of a visual image are fundamentally different in the fovea and the periphery: the perception of a visual stimulus can be suppressed or enhanced by other stimuli in its surround region, and this spatial suppression is much stronger in the periphery than in the fovea. In this report, we built an image-processing model based on the neurophysiology of the human visual cortex to explore the possible impact of these spatial interactions on image perception. We first adjusted the model parameters so that the model matched the performance of human subjects in perceiving foveal and peripheral images, respectively. With those parameters, we simulated image processing by foveal and peripheral vision. We found that the strong spatial suppression in the periphery produced image-boundary segmentation and salient-target extraction: the response to a uniform image region was suppressed, while the responses to boundaries and salient regions remained. With this strategy, the visual information passed on to the human cognitive system is greatly reduced. Based on these findings, we propose a foveal-peripheral model for image compression and other possible applications.
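The key mechanism described above — strong surround suppression silencing responses inside uniform regions while leaving boundary responses intact — can be illustrated with a minimal center-surround sketch. This is not the paper's actual model: the function names, box-filter pooling, and window sizes below are illustrative assumptions standing in for the cortical interactions the report describes.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded (simple pooling stand-in)."""
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def center_surround_response(img, r_center=1, r_surround=4, w_surround=1.0):
    """Center response minus a weighted surround pool.

    Inside a uniform region the center and surround pools agree, so the
    response is suppressed; at a boundary they disagree, so the response
    survives. A large w_surround mimics the stronger peripheral suppression.
    """
    center = box_mean(img, r_center)
    surround = box_mean(img, r_surround)
    return np.abs(center - w_surround * surround)

# Toy image: a uniform bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

resp = center_surround_response(img)
# Deep inside the square, center == surround: response is suppressed to ~0.
interior = resp[12:20, 12:20].mean()
# Along the top edge of the square, center != surround: response remains.
edge = resp[8, 8:24].mean()
```

Running this, `interior` is essentially zero while `edge` stays clearly positive, which is the boundary-extraction behavior the report attributes to peripheral suppression.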