Selective Transfer Classification Learning With Classification-Error-Based Consensus Regularization

Authors

Abstract

Transfer learning methods are conventionally conducted by utilizing abundant labeled data in the source domain to build an accurate classifier for a target domain with scarce data. However, most current transfer learning methods assume that all source domains are relevant to the target domain, which may induce negative transfer when this assumption becomes invalid, as in many practical scenarios. To tackle this issue, the key is to accurately and quickly select the correlated source domains and their corresponding weights. In this paper, we make use of the least squares support vector machine (LS-SVM) framework for identifying the correlated source domains and their weights. By keeping consistency between the distributions of the classification errors of both domains, we first propose a classification-error-based consensus regularization (CCR), which can guarantee performance improvement of the target classifier. Based on this approach, a novel CCR-based selective transfer classification learning method (CSTL) is then developed that autonomously chooses and exploits the transferred knowledge by solving an LS-SVM-based objective function. This minimizes the leave-one-out cross-validation error despite scarce training data. The advantages of CSTL are demonstrated by evaluating it on public image and text datasets and comparing it with state-of-the-art methods.
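The LS-SVM framework the abstract builds on can be sketched as follows. This is a minimal generic binary LS-SVM trained by solving its KKT linear system, not the authors' CSTL objective or the CCR regularizer; the RBF kernel width `sigma` and regularization `gamma` are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian RBF kernel matrix between two sets of row vectors."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train a binary LS-SVM (labels in {-1, +1}).

    Unlike the standard SVM, the LS-SVM replaces hinge-loss inequality
    constraints with equality constraints and squared errors, so training
    reduces to one (n+1)x(n+1) linear system:
        [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    where Omega_ij = y_i * y_j * K(x_i, x_j).
    """
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, y_train, b, alpha, X_test, sigma=1.0):
    """Decision: sign( sum_i alpha_i y_i K(x, x_i) + b )."""
    K = rbf_kernel(X_test, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)
```

Because training is a single linear solve, LS-SVMs admit closed-form leave-one-out error estimates, which is what makes the LOOCV-minimizing objective mentioned in the abstract tractable.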


Related articles

Transductive Classification via Local Learning Regularization

The idea of local learning, classifying a particular point based on its neighbors, has been successfully applied to supervised learning problems. In this paper, we adapt it for Transductive Classification (TC) problems. Specifically, we formulate a Local Learning Regularizer (LL-Reg) which leads to a solution with the property that the label of each data point can be well predicted based on its...


Transfer learning for text classification

Linear text classification algorithms work by computing an inner product between a test document vector and a parameter vector. In many such algorithms, including naive Bayes and most TFIDF variants, the parameters are determined by some simple, closed-form function of training set statistics; we call this mapping from statistics to parameters the parameter function. Much research in ...


Rate-optimal Meta Learning of Classification Error

Meta learning of optimal classifier error rates allows an experimenter to empirically estimate the intrinsic ability of any estimator to discriminate between two populations, circumventing the difficult problem of estimating the optimal Bayes classifier. To this end we propose a weighted nearest neighbor (WNN) graph estimator for a tight bound on the Bayes classification error; the Henze-Penros...


Discriminative Learning for Minimum Error Classification

Recently, due to the advent of artificial neural networks and learning vector quantizers, there is a resurgent interest in reexamining the classical techniques of discriminant analysis to suit the new classifier structures. One of the particular problems of interest is minimum error classification in which the misclassification probability is to be minimized based on a given set of training sam...


Classification based on 3-similarity

The concept of similarity, that is, finding the resemblance among or classifying groups of objects and studying their common properties, has been of interest to many researchers. Basically, these studies have discussed the similarity between two objects or phenomena, 2-similarity in our words. In this paper, we consider the case where the resemblance or similarity among three objects or phenomena of a set, 3-...



Journal

Journal title: IEEE Transactions on Emerging Topics in Computational Intelligence

Year: 2021

ISSN: 2471-285X

DOI: https://doi.org/10.1109/tetci.2019.2892762