Presentation + Paper
13 June 2023 A few shots at few shot learning
Donald Waagen, Don Hulsey, David Gray
Abstract
Deep learning models are currently the models of choice for image classification tasks, but large-scale models require large quantities of data, and for many tasks acquiring a sufficient quantity of training data is not feasible. Consequently, few-shot learning (FSL), also called few-sample learning, is an active area of machine learning research, with architectures that attempt to build effective models in the low-sample regime. In this paper, we focus on the established few-shot learning algorithm developed by Snell et al.1 We propose an FSL model in which the encoder is produced via traditional training, with the backend output layer replaced by the prototypical clustering classifier of Snell et al. We hypothesize that the encoding structure produced by this training may be equivalent to that of models produced by traditional cross-entropy deep learning optimization. We compare few-shot classification performance on unseen classes between models trained using the FSL training paradigm and our hybrid models, trained traditionally with softmax but modified for FSL use. Our empirical results indicate that traditionally trained models can be effectively re-used for few-sample classification.
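For readers unfamiliar with the prototypical classifier of Snell et al. referenced above, the core mechanism can be sketched as follows: each class prototype is the mean of the embedded support examples for that class, and a query is assigned to the class with the nearest prototype. This is a minimal NumPy sketch under that description; the function and variable names are illustrative, and in practice the embeddings would come from the trained encoder discussed in the paper.

```python
import numpy as np

def prototypical_classify(support, support_labels, queries):
    """Nearest-prototype classification over embedded examples.

    support:        (n_support, d) array of support-set embeddings
    support_labels: (n_support,) array of class labels
    queries:        (n_query, d) array of query embeddings
    Returns the predicted label for each query.
    """
    classes = np.unique(support_labels)
    # Prototype for each class = mean of its support embeddings
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in classes]
    )
    # Squared Euclidean distance from every query to every prototype
    dists = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Assign each query to the class of its nearest prototype
    return classes[dists.argmin(axis=1)]
```

In the hybrid setup the paper describes, this classifier simply replaces the softmax output layer at inference time, so no episodic retraining of the encoder is required.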
Conference Presentation
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Donald Waagen, Don Hulsey, and David Gray "A few shots at few shot learning", Proc. SPIE 12521, Automatic Target Recognition XXXIII, 125210H (13 June 2023); https://doi.org/10.1117/12.2661719
KEYWORDS
Education and training, Machine learning, Statistical analysis, Deep learning, Prototyping, Data modeling, Network architectures
