Distilling Localization for Self-Supervised Representation Learning
Authors
Abstract
Recent progress in contrastive learning has revolutionized unsupervised representation learning. Concretely, multiple views (augmentations) from the same image are encouraged to map to close embeddings, while views from different images are pulled apart. In this paper, through visualizing and diagnosing classification errors, we observe that current models are ineffective at localizing the foreground object, limiting their ability to extract discriminative high-level features. This is due to the fact that the view generation process considers the pixels of an image uniformly. To address this problem, we propose a data-driven approach for learning invariance to backgrounds. It first estimates foreground saliency and then creates augmentations by copy-and-pasting the foreground onto a variety of backgrounds. Learning still follows the instance discrimination approach, so the representation is trained to disregard background content and focus on the foreground. We study a variety of saliency estimation methods and find that most methods lead to improvements. Significant performance gains are achieved for self-supervised ImageNet classification, as well as for object detection on PASCAL VOC and MSCOCO.
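The copy-and-paste view generation described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `copy_paste_augment`, the hard thresholding of the saliency map, and the array shapes are all illustrative choices.

```python
import numpy as np

def copy_paste_augment(image, saliency_mask, background, threshold=0.5):
    """Composite the salient foreground of `image` onto `background`.

    image, background: HxWx3 float arrays; saliency_mask: HxW in [0, 1].
    Pixels whose saliency exceeds the threshold are treated as foreground
    and kept; all other pixels are replaced by the new background.
    """
    # Binarize the saliency map and broadcast it over the channel axis.
    alpha = (saliency_mask >= threshold).astype(np.float32)[..., None]
    return alpha * image + (1.0 - alpha) * background

# Toy example: a white "foreground" patch pasted onto a black background.
img = np.ones((4, 4, 3), dtype=np.float32)    # all-white image
bg = np.zeros((4, 4, 3), dtype=np.float32)    # all-black background
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0                          # salient central region
out = copy_paste_augment(img, mask, bg)
```

The resulting composite would then be fed into the usual contrastive pipeline as one of the views, so that embeddings become invariant to whatever background the foreground is pasted onto.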
Similar resources
Self-Transfer Learning for Fully Weakly Supervised Object Localization
Recent advances in deep learning have achieved remarkable performance on various challenging computer vision tasks. Especially in object localization, deep convolutional neural networks outperform traditional approaches based on extraction of data/task-driven features instead of handcrafted features. Although location information of regions-of-interest (ROIs) gives a good prior for object localiz...
Progressive Representation Adaptation for Weakly Supervised Object Localization
We address the problem of weakly supervised object localization where only image-level annotations are available for training object detectors. Numerous methods have been proposed to tackle this problem through mining object proposals. However, a substantial amount of noise in object proposals causes ambiguities for learning discriminative object models. Such approaches are sensitive to model i...
Supervised Learning for Self-Generating Neural Networks
In this paper, supervised learning for the Self-Generating Neural Networks (SGNN) method, which was originally developed for the purpose of unsupervised learning, is discussed. An information-analytical method is proposed to assign weights to attributes in the training examples if class information is available. This significantly improves the learning speed and the accuracy of the SGNN classifier. ...
Self-supervised Learning for Spinal MRIs
A significant proportion of patients scanned in a clinical setting have follow-up scans. We show in this work that such longitudinal scans alone can be used as a form of “free” self-supervision for training a deep network. We demonstrate this self-supervised learning for the case of T2-weighted sagittal lumbar Magnetic Resonance Images (MRIs). A Siamese convolutional neural network (CNN) is tra...
Dynamic guard zone for self-supervised learning
Abstract: The dimension of the guard zone along with its bounds for the Generalised Learning Algorithm (Pathak and Pal, 1986) is determined for optimum learning. The dimension is found to be dynamic, depending on the input sequence and the current estimates of classification parameters. Incorporation of this higher-order knowledge in a supervisory program improves the system performance. The pe...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i12.17312