This paper presents a new technique for generating multiview video from a two-view video sequence. For each stereo frame in the two-view sequence, our system first estimates the corresponding point of each pixel by template matching and then constructs the disparity maps required for view interpolation. To generate accurate disparity maps, we use adaptive template matching, where the template size depends on the local variation of image intensity and on knowledge of object boundaries. Both the disparity maps and the original stereo video are then compressed to reduce storage size and transfer time. Based on the disparity maps, our system can generate, in real time, a stereo video of the desired perspective by interpolating or extrapolating from the original views in response to the user's head movement. Compared with the traditional approach of directly capturing video from multiple perspectives, view interpolation eliminates the problems of synchronizing multiple video inputs and of storing and transferring a large amount of video data.
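The core idea of disparity-based view synthesis can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual algorithm: it forward-warps pixels of the left view by a fraction of their horizontal disparity to synthesize an intermediate viewpoint, with a crude hole-filling pass; the function name, parameters, and the single-view warping choice are all assumptions for illustration.

```python
import numpy as np

def interpolate_view(left, disparity, alpha):
    """Synthesize a view between the left camera (alpha = 0) and the
    right camera (alpha = 1) by shifting each left-image pixel along
    its horizontal disparity. Hypothetical sketch only; a full system
    would typically blend contributions from both source views."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # shift the pixel by a fraction alpha of its disparity
            xt = int(round(x - alpha * disparity[y, x]))
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
                filled[y, xt] = True
    # naive hole filling: propagate the nearest filled pixel rightward
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x] and filled[y, x - 1]:
                out[y, x] = out[y, x - 1]
    return out
```

Extrapolation beyond the original baseline corresponds to choosing alpha outside [0, 1], which is how a new perspective can track the user's head movement past either captured view.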