In this paper, we present a novel system that simultaneously performs segmentation and 2D pose motion recovery for an articulated object in a video sequence. The system first groups pixels into superpixels to reduce the number of graph nodes, which largely determines the computational complexity of the subsequent optimizations. Starting from true pose estimates obtained with user assistance on each key frame, a parallel pose tracking procedure, whose energy function combines boundary, appearance, and pose prior information, is conducted forward and backward over the in-between frames. Using different search strategies, multiple pose candidates are inferred to help recover missed true poses. Finally, by minimizing the cost function of the pose motion recovery, which exploits the temporal coherence of object movement, the pose motion and the video object are produced at the same time. Because the pose is represented by a parameterized tree-based articulated model drawn by the user, our method is generic and can be applied to any articulated object.
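The final step above, selecting one pose candidate per frame so that per-candidate cost and temporal coherence are jointly minimized, can be sketched as a Viterbi-style dynamic program. This is an illustrative assumption, not the paper's actual formulation: the function name, the `(pose_vector, unary_cost)` candidate format, and the squared-distance coherence term are all made up for the sketch.

```python
# Viterbi-style dynamic program picking one pose per frame.
# Illustrative sketch only: `candidates[t]` is a list of (pose_vector,
# unary_cost) pairs from the forward/backward tracking passes, and the
# temporal-coherence term is an assumed squared-distance penalty.

def recover_pose_motion(candidates, smoothness=1.0):
    """Return, per frame, the index of the selected pose candidate."""
    def coherence(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    cost = [unary for _, unary in candidates[0]]
    back = []  # per-frame back-pointers to the chosen predecessor
    for t in range(1, len(candidates)):
        new_cost, ptr = [], []
        for pose, unary in candidates[t]:
            # Cheapest predecessor under accumulated cost + coherence.
            trans = [cost[j] + smoothness * coherence(prev_pose, pose)
                     for j, (prev_pose, _) in enumerate(candidates[t - 1])]
            best = min(range(len(trans)), key=trans.__getitem__)
            ptr.append(best)
            new_cost.append(trans[best] + unary)
        back.append(ptr)
        cost = new_cost

    # Trace back from the cheapest final candidate.
    path = [min(range(len(cost)), key=cost.__getitem__)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

Because every frame keeps several candidates, a frame whose locally best pose is wrong can still be corrected when a neighboring frame's candidate chain is temporally cheaper.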
In this paper, we present a new method for interactive image segmentation that is based on dynamic graph cut and captures both the shape and the natural information of the image. While traditional interactive graph cut approaches to image segmentation are often successful, they may fail on camouflaged objects. Prior shape knowledge can largely mitigate this problem. In this paper, two kinds of shape priors are taken into account to obtain more accurate results. To use the information from user input more effectively, a weight function is introduced to control the relative importance of the shape knowledge. A one-shot, fully dynamic graph cut algorithm is then introduced to minimize the energy function; during this procedure only a subset of the image's pixels is considered, which greatly reduces the complexity of the dynamic graph cut algorithm. Extensive experiments, including comparisons with state-of-the-art methods, demonstrate the effectiveness of our method in improving segmentation performance and reducing processing time.
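As a rough illustration of the energy the graph cut step minimizes, the sketch below solves a standard binary unary-plus-pairwise segmentation energy via max-flow/min-cut (plain Edmonds-Karp). It is not the paper's one-shot dynamic algorithm, which additionally reuses flow across updates, and the two-pixel example and all cost values are made up:

```python
from collections import deque

# Minimal max-flow/min-cut on an adjacency-matrix graph (Edmonds-Karp).
# Illustrative only: a static cut showing the binary segmentation energy;
# the paper's one-shot *dynamic* graph cut reuses flow between updates.

def max_flow_labels(cap, s, t):
    """Push BFS augmenting paths; return the nodes on the source side of the cut."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # Find the bottleneck along the augmenting path, then push flow.
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
    # Min-cut: nodes still reachable from s in the residual graph.
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in seen and cap[u][v] - flow[u][v] > 0:
                seen.add(v)
                q.append(v)
    return seen

# Toy energy: 2 pixels (nodes 1 and 2), source 0 = "object", sink 3 = "background".
# Unary costs (made-up): pixel 1 prefers object, pixel 2 prefers background,
# and a weak pairwise smoothness term links the two pixels.
N = 4
cap = [[0] * N for _ in range(N)]
cap[0][1], cap[1][3] = 9, 1   # pixel 1: cost 1 as object, 9 as background
cap[0][2], cap[2][3] = 2, 8   # pixel 2: cost 8 as object, 2 as background
cap[1][2] = cap[2][1] = 1     # pairwise smoothness weight

source_side = max_flow_labels(cap, 0, 3)
labels = ["object" if i in source_side else "background" for i in (1, 2)]
```

Restricting the computation to a subset of pixels, as the method does, shrinks `cap` and therefore the number of BFS passes, which is where the claimed speedup over recomputing the full cut comes from.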