Learning to rectify for robust learning with noisy labels
Authors
Abstract
Label noise significantly degrades the generalization ability of deep models in real-world applications. Effective strategies and approaches (e.g., re-weighting or loss correction) have been designed to alleviate the negative impact of noisy labels when training a neural network. However, those existing works usually rely on a pre-specified architecture and manually tuned additional hyper-parameters. In this paper, we propose warped probabilistic inference (WarPI) to achieve an adaptively rectifying training procedure for the classification network within the meta-learning scenario. In contrast to deterministic models, WarPI is formulated as a hierarchical probabilistic model by learning an amortized meta-network, which can resolve sample ambiguity and is therefore more robust to serious label noise. Unlike approximated weighting functions that directly generate weight values from losses, our meta-network is learned to estimate a rectifying vector from the input of logits and labels, and thus has the capability of leveraging the sufficient information lying in them. This provides an effective way to rectify the learning procedure of the classification network, demonstrating a significant improvement of the generalization ability. Besides, modeling the rectifying vector as a latent variable allows it to be seamlessly integrated into the SGD optimization of the classification network. We evaluate WarPI on four benchmarks of robust learning with noisy labels and achieve the new state-of-the-art under variant noise types. Extensive study and analysis also demonstrate the effectiveness of our model.
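The abstract gives no implementation details, but the core idea of a rectifying meta-network can be sketched in a few lines. The snippet below is a minimal toy illustration, not the authors' method: `meta_rectify` is a hypothetical one-layer amortized network that maps concatenated logits and one-hot labels to a positive per-class rectifying vector, which then rescales the logits before the cross-entropy loss. The architecture, the softplus activation, and all names here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def meta_rectify(logits, one_hot, W):
    """Toy amortized meta-network (hypothetical): maps (logits, label)
    to a positive per-class rectifying vector via one linear layer
    followed by a softplus."""
    x = np.concatenate([logits, one_hot], axis=-1)  # (batch, 2C)
    return np.log1p(np.exp(x @ W))                  # softplus -> (batch, C), all > 0

C = 3                                      # number of classes
logits = rng.normal(size=(4, C))           # classifier outputs for a mini-batch
labels = np.array([0, 1, 2, 0])            # possibly noisy labels
one_hot = np.eye(C)[labels]
W = rng.normal(scale=0.1, size=(2 * C, C)) # meta-network parameters

r = meta_rectify(logits, one_hot, W)       # rectifying vector per sample
probs = softmax(logits * r)                # rectified prediction
loss = -np.log(probs[np.arange(4), labels]).mean()
```

In the paper's setting, the meta-network parameters (here `W`) would be updated on a small clean meta set via bi-level optimization, while the classification network trains on the rectified loss; that outer loop is omitted here.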
Similar resources
Learning with Noisy Labels
In this paper, we theoretically study the problem of binary classification in the presence of random classification noise — the learner, instead of seeing the true labels, sees labels that have independently been flipped with some small probability. Moreover, random label noise is class-conditional — the flip probability depends on the class. We provide two approaches to suitably modify any giv...
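The class-conditional noise model described above is easy to simulate: each label is flipped independently with a probability that depends on its class. The sketch below (names and rates are illustrative, not from the paper) generates such noise for binary labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_labels(y, rho_pos, rho_neg, rng):
    """Class-conditional random label noise on binary labels {0, 1}:
    a 1 flips to 0 with probability rho_pos, a 0 flips to 1 with
    probability rho_neg, each label independently."""
    u = rng.random(len(y))
    flip = np.where(y == 1, u < rho_pos, u < rho_neg)
    return np.where(flip, 1 - y, y)

y = np.array([1] * 500 + [0] * 500)        # clean labels
y_noisy = flip_labels(y, rho_pos=0.3, rho_neg=0.1, rng=rng)
```

With 500 samples per class, roughly 30% of the positives and 10% of the negatives end up flipped, matching the chosen rates.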
Learning to Tag using Noisy Labels
In order to organize and retrieve the ever growing collection of multimedia objects on the Web, many algorithms have been developed to automatically tag images, music and videos. One source of labeled data for training these algorithms are tags collected from the Web, via collaborative tagging websites (e.g., Flickr, Last.FM and YouTube) or crowdsourcing applications (e.g., human computation ga...
Cost-Sensitive Learning with Noisy Labels
We study binary classification in the presence of class-conditional random noise, where the learner gets to see labels that are flipped independently with some probability, and where the flip probability depends on the class. Our goal is to devise learning algorithms that are efficient and statistically consistent with respect to commonly used utility measures. In particular, we look at a famil...
Iterative Learning with Open-set Noisy Labels
Large-scale datasets possessing clean label annotations are crucial for training Convolutional Neural Networks (CNNs). However, labeling large-scale data can be very costly and error-prone, and even high-quality datasets are likely to contain noisy (incorrect) labels. Existing works usually employ a closed-set assumption, whereby the samples associated with noisy labels possess a true class con...
Agreeing to disagree: active learning with noisy labels without crowdsourcing
We propose a new active learning method for classification, which handles label noise without relying on multiple oracles (i.e., crowdsourcing). We propose a strategy that selects (for labeling) instances with a high influence on the learned model. An instance x is said to have a high influence on the model h, if training h on x (with label y = h(x)) would result in a model that greatly disagre...
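The influence criterion described above can be illustrated with any cheap classifier: retrain with the candidate instance pseudo-labeled by the current model, and measure how often predictions on a pool change. The sketch below uses a nearest-centroid classifier purely for illustration; the classifier choice and all names are assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    """Nearest-centroid 'model': one mean vector per binary class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(cent, X):
    d0 = np.linalg.norm(X - cent[0], axis=1)
    d1 = np.linalg.norm(X - cent[1], axis=1)
    return (d1 < d0).astype(int)

def influence(X_lab, y_lab, x, X_pool):
    """Influence of candidate x: train with x pseudo-labeled by the
    current model, then return the fraction of pool points whose
    prediction changes (disagreement between old and new model)."""
    cent = fit_centroids(X_lab, y_lab)
    y_hat = predict(cent, x[None])[0]                  # y = h(x)
    cent2 = fit_centroids(np.vstack([X_lab, x]),
                          np.append(y_lab, y_hat))
    return np.mean(predict(cent, X_pool) != predict(cent2, X_pool))

X_lab = np.vstack([rng.normal(loc=-1.0, size=(10, 2)),
                   rng.normal(loc=+1.0, size=(10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_pool = rng.normal(size=(200, 2))
inf = influence(X_lab, y_lab, np.array([0.1, 0.0]), X_pool)
```

Instances near the decision boundary tend to shift the centroids most and therefore score higher influence, which is the intuition behind selecting them for labeling.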
Journal
Journal title: Pattern Recognition
Year: 2022
ISSN: ['1873-5142', '0031-3203']
DOI: https://doi.org/10.1016/j.patcog.2021.108467