A new video inpainting algorithm is proposed for removing unwanted or erroneous objects from video data. The
proposed algorithm fills a mask region with source blocks taken from unmasked areas, while maintaining spatio-temporal
consistency. First, a 3-dimensional graph is constructed over consecutive frames; it defines the structure of nodes
onto which the source blocks are pasted. Then, we form temporal block bundles using motion information.
The best block bundles, which minimize an objective function, are arranged in the 3-dimensional graph. Extensive
simulation results demonstrate that the proposed algorithm can yield visually pleasing video inpainting results
even for dynamic sequences.
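The core matching step described above can be illustrated with a minimal sketch. This is not the paper's actual method: it assumes zero motion (so a temporal bundle is simply the same spatial block repeated across consecutive frames), uses a brute-force search instead of the paper's graph arrangement, and scores candidates by summed squared differences over the known pixels around the hole. The function name and all parameters are hypothetical.

```python
import numpy as np

def best_bundle(video, mask, block, t0, y0, x0, length=3):
    """Hypothetical sketch: find the unmasked source block whose temporal
    bundle (same position across `length` frames, zero motion assumed)
    best matches the known pixels around the hole at (t0, y0, x0).

    video: (T, H, W) float array; mask: (T, H, W) bool array, True = hole.
    Returns the top-left (y, x) of the best source block and its cost.
    """
    T, H, W = video.shape
    best_cost, best_pos = np.inf, None
    for y in range(0, H - block + 1):
        for x in range(0, W - block + 1):
            # Skip candidates that overlap the mask in any bundle frame;
            # source blocks must come entirely from unmasked areas.
            if mask[t0:t0 + length, y:y + block, x:x + block].any():
                continue
            cost = 0.0
            for dt in range(length):
                src = video[t0 + dt, y:y + block, x:x + block]
                tgt = video[t0 + dt, y0:y0 + block, x0:x0 + block]
                # Compare only against pixels that are known (unmasked)
                # in the target region.
                known = ~mask[t0 + dt, y0:y0 + block, x0:x0 + block]
                cost += float(((src - tgt)[known] ** 2).sum())
            if cost < best_cost:
                best_cost, best_pos = cost, (y, x)
    return best_pos, best_cost
```

In the full algorithm, each candidate bundle would instead follow the estimated motion trajectory from frame to frame, and the per-bundle cost would be one term of the global objective minimized over the 3-dimensional graph.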