In-bed pose estimation is of great value in current health-monitoring systems. In this paper, we solve a cross-domain pose estimation problem, in which a fully annotated, uncovered training set is used to learn pose estimation, and a large-scale unlabelled dataset of covered images is employed for unsupervised domain adaptation. To tackle this challenging problem, we propose a multi-level domain adaptation framework, which learns a generalizable pose estimation network based on three levels of adaptation. We evaluate the proposed framework on a public in-bed pose estimation benchmark. The results demonstrate that our framework can effectively generalize the knowledge learned from the uncovered source domain to the covered target domain for privacy-protected in-bed pose estimation.
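The abstract does not spell out the three adaptation levels, so the following is only an illustrative sketch (in PyTorch) of one standard building block of unsupervised domain adaptation: adversarial feature alignment through a gradient-reversal layer. The names `GradientReversal`, `domain_adversarial_loss`, and `discriminator` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; scales and negates gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient pushes the feature extractor to confuse the
        # domain discriminator, encouraging domain-invariant features.
        return -ctx.lambd * grad_output, None

def domain_adversarial_loss(feat_src, feat_tgt, discriminator, lambd=1.0):
    """Domain classification loss on source (uncovered) vs. target (covered)
    features, applied through gradient reversal. Illustrative only; the
    paper's actual three levels of adaptation are not specified here."""
    bce = nn.BCEWithLogitsLoss()
    pred_src = discriminator(GradientReversal.apply(feat_src, lambd))
    pred_tgt = discriminator(GradientReversal.apply(feat_tgt, lambd))
    return bce(pred_src, torch.ones_like(pred_src)) + \
           bce(pred_tgt, torch.zeros_like(pred_tgt))
```

In such a setup, this alignment term would be added to the supervised pose loss computed on the labelled uncovered images, while the unlabelled covered images contribute gradients only through the alignment term.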
Multispectral pedestrian detection has attracted extensive attention, as paired RGB-thermal images provide complementary patterns for handling illumination changes in realistic scenarios. However, most existing deep-learning-based multispectral detectors extract features from the RGB and thermal inputs separately and fuse them by simple concatenation. This fusion strategy is suboptimal: undifferentiated concatenation across regions and feature channels may hamper the optimal selection of complementary features from the two modalities. To address this limitation, we propose an attention-based cross-modality interaction (ACI) module, which adaptively highlights and aggregates the discriminative regions and channels of the feature maps from the RGB and thermal images. The proposed ACI module is deployed at multiple layers of a two-branch deep architecture to capture cross-modal interactions at diverse semantic levels for illumination-invariant pedestrian detection. Experimental results on the public KAIST multispectral pedestrian benchmark show that the proposed method achieves state-of-the-art detection performance.
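The abstract describes the ACI module only at a high level, so the sketch below (PyTorch) shows one plausible realization, assuming channel-wise and spatial gating over the concatenated RGB/thermal feature maps; the class name `ACIFusion` and all hyperparameters are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ACIFusion(nn.Module):
    """Sketch of attention-based cross-modality fusion (assumed design).

    Learns channel and spatial gates from the concatenated RGB/thermal
    features, re-weights both modalities, then aggregates them, rather
    than relying on plain undifferentiated concatenation.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel gate: global average pool -> bottleneck MLP -> sigmoid.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // reduction, 2 * channels, 1),
            nn.Sigmoid(),
        )
        # Spatial gate: a single conv produces a per-location weight map.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.channels = channels

    def forward(self, rgb_feat, thermal_feat):
        x = torch.cat([rgb_feat, thermal_feat], dim=1)  # (B, 2C, H, W)
        x = x * self.channel_gate(x)    # highlight discriminative channels
        x = x * self.spatial_gate(x)    # highlight discriminative regions
        # Sum the two re-weighted halves to keep C output channels.
        return x[:, :self.channels] + x[:, self.channels:]

# Example: fuse 256-channel feature maps from the two branches.
fusion = ACIFusion(channels=256)
fused = fusion(torch.randn(2, 256, 40, 32), torch.randn(2, 256, 40, 32))
```

Deployed at several backbone depths, one such module per level would capture cross-modal interactions at different semantic granularities, as the abstract describes.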