A video stabilization method based on a new concept inspired by the human visual system is presented. The human eye perceives a stable scene by continuously reorienting itself so that the focused target always remains at the center of one's view. Whereas most previous methods consider all objects in a video, the proposed algorithm, like the human eye, focuses on a single target object within the scene and stabilizes it on the two-dimensional image plane by rotating the camera in three-dimensional space. The rotational angles of the camera about the x and y axes are predicted directly from the translational motion vector of the target object on the image plane. Hence, the proposed algorithm produces a vivid video, as if the scene were seen through the eye. An efficient approximation of the human visual system is also introduced, yielding a practical method for real-time devices. Experimental results demonstrate that the visual impression of a compensated video varies with the selected target object and that the approximating method achieves reasonable performance.
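The mapping from the target's image-plane translation to camera rotation angles can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: it assumes a simple pinhole camera model with the focal length `focal_px` given in pixels, and the function and variable names are hypothetical.

```python
import math

def rotation_from_translation(dx_px, dy_px, focal_px):
    """Approximate the camera rotation (in radians) that recenters a
    target whose image-plane position shifted by (dx_px, dy_px),
    under a pinhole model with focal length in pixels.

    A rotation about the y axis (pan) cancels horizontal motion,
    and a rotation about the x axis (tilt) cancels vertical motion.
    """
    pan = math.atan2(dx_px, focal_px)   # rotation about the y axis
    tilt = math.atan2(dy_px, focal_px)  # rotation about the x axis
    return tilt, pan

# Example: the target drifted 30 px right and 10 px down between
# frames, with an assumed focal length of 800 px.
tilt, pan = rotation_from_translation(30.0, 10.0, 800.0)
```

For small displacements relative to the focal length, these angles reduce to the simple ratios dx/f and dy/f, which is consistent with the abstract's claim that the rotation can be "simply predicted" from the translational motion vector.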