ESPT: A Self-Supervised Episodic Spatial Pretext Task for Improving Few-Shot Learning
Abstract
Self-supervised learning (SSL) techniques have recently been integrated into the few-shot learning (FSL) framework and have shown promising results in improving few-shot image classification performance. However, existing SSL approaches used in FSL typically seek supervision signals from the global embedding of every single image. Therefore, during the episodic training of FSL, these methods cannot capture and fully utilize the local visual information in image samples and the data structure information of the whole episode, both of which are beneficial to FSL. To this end, we propose to augment the few-shot learning objective with a novel self-supervised Episodic Spatial Pretext Task (ESPT). Specifically, for each episode we generate its corresponding transformed episode by applying a random geometric transformation to all the images in it. Based on these, our ESPT objective is defined as maximizing the spatial relationship consistency between the original episode and the transformed one. With this definition, the ESPT-augmented objective promotes learning more transferable feature representations that capture the local spatial features of different images and their inter-relational structural information in each input episode, thus enabling the model to generalize better to new categories with only a few samples. Extensive experiments indicate that our method achieves state-of-the-art performance on three mainstay benchmark datasets. The source code will be available at: https://github.com/Whut-YiRong/ESPT.
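The core idea above can be illustrated with a minimal sketch: transform every image in an episode with the same geometric transformation (a 90-degree rotation is one simple choice), and penalize inconsistency between the spatial-relation structure of the original and transformed features. This is not the authors' implementation; the function names and the cosine-similarity relation matrix are illustrative stand-ins for the paper's actual feature extractor and consistency objective.

```python
import numpy as np

def transform_episode(episode, k):
    """Apply the same geometric transformation (here a k*90-degree rotation,
    one simple choice of transform) to every image in an episode.

    episode: (N, H, W, C) array of N images.
    """
    return np.stack([np.rot90(img, k) for img in episode])

def spatial_relation(features):
    """Pairwise cosine similarity between local descriptors.

    features: (P, D) array of P local spatial descriptors of dimension D.
    Returns a (P, P) relation matrix.
    """
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    return f @ f.T

def espt_consistency_loss(feats_orig, feats_trans):
    """Mean squared difference between the spatial-relation matrices of the
    original and transformed episodes (a sketch of the consistency objective)."""
    r_orig = spatial_relation(feats_orig)
    r_trans = spatial_relation(feats_trans)
    return float(np.mean((r_orig - r_trans) ** 2))
```

In this sketch the loss is zero when the two feature sets induce identical spatial relations, which is the consistency the pretext task rewards.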
Similar Resources
Meta-Learning for Semi-Supervised Few-Shot Classification
In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corres...
Semi-Supervised Few-Shot Learning with Prototypical Networks
We consider the problem of semi-supervised few-shot classification (when the few labeled samples are accompanied with unlabeled data) and show how to adapt the Prototypical Networks [10] to this problem. We first show that using larger and better regularized prototypical networks can improve the classification accuracy. We then show further improvements by making use of unlabeled data.
Few-shot Learning
Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a classifier has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity classifiers requires many iterative steps over many examples to perform well. Here, we propose ...
Self-Supervised Learning for Stereo Matching with Self-Improving Ability
Existing deep-learning-based dense stereo matching methods often rely on ground-truth disparity maps as the training signals, which are however not always available in many situations. In this paper, we design a simple convolutional neural network architecture that is able to learn to compute dense disparity maps directly from the stereo inputs. Training is performed in an end-to-end fashion wit...
Prototypical Networks for Few-shot Learning
A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics the test-time setting. This approach relies on an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space...
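The streamlining the abstract above describes can be sketched briefly: instead of attending over every support point, each class is summarized by a single prototype (the mean of its support embeddings), and queries are assigned to the nearest prototype. This is a minimal numpy sketch of that idea, not the paper's code; the embeddings here are assumed to come from some upstream encoder.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Class prototype = mean of the support embeddings for each class.

    support_emb: (N, D) embeddings, support_labels: (N,) integer labels.
    Returns an (n_classes, D) array of prototypes.
    """
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    # (Q, 1, D) - (1, C, D) broadcasts to (Q, C) distances.
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

Because each class is reduced to one prototype, classification cost grows with the number of classes rather than the number of support points, which is the scaling advantage over the attention scheme mentioned above.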
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i8.26148