Presentation + Paper
7 June 2024
Utilizing grounded SAM for self-supervised frugal camouflaged human detection
Abstract
Visually detecting camouflaged objects is a hard problem for both humans and computer vision algorithms. Strong similarities between object and background appearance make the task significantly more challenging than traditional object detection or segmentation tasks. Current state-of-the-art models use either convolutional neural networks or vision transformers as feature extractors. They are trained in a fully supervised manner and thus require a large amount of labeled training data. In this paper, both self-supervised and frugal learning methods are introduced to the task of Camouflaged Object Detection (COD). The overall goal is to fine-tune two COD reference methods, SINet-V2 and HitNet, pre-trained for camouflaged animal detection, to the task of camouflaged human detection. To this end, we use the public dataset CPD1K, which contains camouflaged humans in a forest environment. We create a strong baseline using supervised frugal transfer learning for the fine-tuning task. Then, we analyze three pseudo-labeling approaches to perform the fine-tuning task in a self-supervised manner. Our experiments show that pure self-supervision achieves performance similar to fully supervised frugal learning.
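The core of the self-supervised setup is turning text-prompted segmentation output into training targets. As a minimal illustration (the helper below is hypothetical and not the paper's actual Grounded SAM pipeline, in which text-prompted boxes from a grounding detector are passed to SAM to produce per-object masks), confident masks might be merged into one binary pseudo-label map like this:

```python
import numpy as np

def masks_to_pseudo_label(masks, scores, score_thresh=0.5, min_area=4):
    """Merge per-object segmentation masks into a single binary
    pseudo-label map, keeping only masks that are both confident
    (score >= score_thresh) and large enough (area >= min_area).
    Hypothetical helper for illustration only."""
    kept = [m for m, s in zip(masks, scores)
            if s >= score_thresh and m.sum() >= min_area]
    if not kept:
        # No reliable detection: emit an all-background label.
        return np.zeros_like(masks[0], dtype=np.uint8)
    # Union of the surviving masks forms the foreground pseudo-label.
    return np.any(np.stack(kept), axis=0).astype(np.uint8)

# Toy example: two 8x8 candidate masks; only the confident one survives.
good = np.zeros((8, 8), dtype=bool); good[2:6, 2:6] = True  # score 0.9
weak = np.zeros((8, 8), dtype=bool); weak[0:2, 0:2] = True  # score 0.3
label = masks_to_pseudo_label([good, weak], [0.9, 0.3])
```

The resulting `label` map could then serve as a segmentation target when fine-tuning a COD model such as SINet-V2 or HitNet in place of human annotations; the thresholds shown are illustrative assumptions, not values from the paper.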
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Matthias Pijarowski, Alexander Wolpert, Martin Heckmann, and Michael Teutsch "Utilizing grounded SAM for self-supervised frugal camouflaged human detection", Proc. SPIE 13039, Automatic Target Recognition XXXIV, 1303909 (7 June 2024); https://doi.org/10.1117/12.3021694
KEYWORDS
Machine learning, Image segmentation, Data modeling, Object detection, Performance modeling, Animals, Statistical modeling