Paper
25 March 2024
Adversarial feature calibration network for few-shot learning
Zihao Jia, Jin Deng, Ying Huang, Yanyan Chen, and Wei Luo
Proceedings Volume 13089, Fifteenth International Conference on Graphics and Image Processing (ICGIP 2023); 130891K (2024) https://doi.org/10.1117/12.3021502
Event: Fifteenth International Conference on Graphics and Image Processing (ICGIP 2023), 2023, Suzhou, China
Abstract
The need for big data is a bottleneck for deep learning: data collection is expensive, and in some domains sufficient data cannot be gathered at all. How to achieve good learning performance with an insufficient number of samples has therefore attracted increasing attention. The practical value of few-shot learning is self-evident, as the technique aims to learn concepts of new classes from only a few labeled samples. Data augmentation is the most intuitive way to address few-shot learning, and recent works have demonstrated its feasibility with a variety of data-synthesis models. However, data augmentation during model training has a significant drawback: because it relies on the biased distribution formed by only a few training examples, it easily leads to over-fitting. In this paper, we propose a method that generates high-quality pseudo-samples by computing a regularization factor, derived from the statistical distribution information of a large number of base classes, that constrains the model's generator.
© 2024 SPIE. Downloading of the abstract is permitted for personal use only.
Zihao Jia, Jin Deng, Ying Huang, Yanyan Chen, and Wei Luo "Adversarial feature calibration network for few-shot learning", Proc. SPIE 13089, Fifteenth International Conference on Graphics and Image Processing (ICGIP 2023), 130891K (25 March 2024); https://doi.org/10.1117/12.3021502
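The abstract does not spell out the model, so the sketch below only illustrates the general idea it describes: calibrating the feature distribution of a novel class with statistics pooled from a large number of base classes, then sampling pseudo-features from the calibrated distribution. Every name here (calibrate_stats, sample_pseudo_features, the alpha regularization term) is hypothetical rather than the authors' API, and the Gaussian-calibration form is one common realization of this idea, not the paper's exact method.

# A minimal sketch of the idea described in the abstract, not the authors'
# implementation: a novel class's feature distribution is calibrated with
# statistics pooled from many base classes, and pseudo-features are then
# sampled from it. All names and the alpha regularization term are assumptions.
import numpy as np

def calibrate_stats(support_feats, base_means, base_covs, k=2, alpha=0.2):
    """Estimate a calibrated Gaussian for a novel (few-shot) class.

    support_feats: (n_shot, d) features of the few labeled samples.
    base_means:    (n_base, d) per-class feature means from base classes.
    base_covs:     (n_base, d, d) per-class feature covariances.
    k:             number of nearest base classes used for calibration.
    alpha:         regularization factor that broadens the covariance.
    """
    proto = support_feats.mean(axis=0)                  # novel-class prototype
    dists = np.linalg.norm(base_means - proto, axis=1)  # distance to each base class
    nearest = np.argsort(dists)[:k]                     # k most similar base classes
    # Pool base-class statistics with the few-shot prototype.
    mean = (base_means[nearest].sum(axis=0) + proto) / (k + 1)
    cov = base_covs[nearest].mean(axis=0) + alpha * np.eye(proto.shape[0])
    return mean, cov

def sample_pseudo_features(mean, cov, n=100, rng=None):
    """Draw pseudo-features from the calibrated distribution."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.multivariate_normal(mean, cov, size=n)

# Example: a 5-shot episode with 64-dimensional features and 100 base classes.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 64))
means = rng.normal(size=(100, 64))
covs = np.stack([np.eye(64)] * 100)
mu, sigma = calibrate_stats(support, means, covs)
pseudo = sample_pseudo_features(mu, sigma, n=200, rng=rng)  # (200, 64) array

Pseudo-features drawn this way could then be used alongside the real support features to train the few-shot classifier, which is the augmentation setting the abstract targets.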
KEYWORDS
Education and training, Generative adversarial networks, Data modeling, Calibration, Statistical modeling, Feature extraction, Neural networks