Leveraging the power of deep neural networks, single-person pose estimation has made substantial progress in recent years. More recently, multi-person pose estimation has also gained importance, driven mainly by the high demand for reliable video surveillance systems in public security. To meet these demands, efforts have been made to improve the performance of such systems, which remains limited by the insufficient amount of available training data. This work addresses this lack of labeled data: by mitigating the frequently encountered domain shift between synthetic images from computer game graphics engines and real-world data, annotated training data can be provided at zero labeling cost. To this end, generative adversarial networks are applied as a domain adaptation framework, adapting the data of a novel synthetic pose estimation dataset to several real-world target domains. State-of-the-art domain adaptation methods are extended to meet the important requirement of exact content preservation between synthetic and adapted images. Subsequent experiments indicate the improved suitability of the adapted data, as human pose estimators trained on it outperform those trained on purely synthetic images.
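The content-preservation requirement mentioned above is commonly enforced by adding a reconstruction term to the generator objective, so that an adapted image stays pixel-wise close to its synthetic source and the original pose annotations remain valid. The following is a minimal NumPy sketch of such a combined objective; the function name, the L1 content term, and the weighting factor `lam` are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def generator_loss(disc_score_fake, synthetic, adapted, lam=10.0):
    """Illustrative generator objective for GAN-based domain adaptation.

    adversarial term: pushes the discriminator's score on adapted images
    toward "real", moving them into the target domain.
    content term: L1 distance keeps each adapted image close to its
    synthetic source, so pose labels transfer unchanged (lam is an
    assumed trade-off weight, as in typical image-to-image GAN losses).
    """
    adversarial = -np.mean(np.log(disc_score_fake + 1e-8))
    content = np.mean(np.abs(adapted - synthetic))
    return adversarial + lam * content

# Toy example: an unchanged image incurs no content penalty,
# while a shifted image is penalized by the L1 term.
rng = np.random.default_rng(0)
img = rng.random((4, 64, 64, 3))
score = np.full(4, 0.9)          # discriminator scores for adapted images
loss_same = generator_loss(score, img, img)
loss_diff = generator_loss(score, img, img + 0.5)
```

In practice the content term trades off against the adversarial term: a large `lam` preserves annotations at the cost of weaker domain adaptation, which is why exact content preservation requires careful extension of standard methods.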