Search results for: Weak Learner

Number of results: 155822

2008
Maya Hristakeva

Boosting is an ensemble-based method that attempts to boost the accuracy of any given learning algorithm by applying it several times to slightly modified training data and then combining the results in a suitable manner. The boosting algorithms that we covered in class were AdaBoost, LPBoost, TotalBoost, SoftBoost, and Entropy Regularized LPBoost. The basic idea behind these boosting algorithm...
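The mechanism this abstract describes (repeatedly training a base learner on re-weighted data and combining the weighted votes) can be sketched with a minimal AdaBoost over decision stumps. The toy 1-D data set and the 0.05 threshold grid below are illustrative assumptions, not taken from any of the papers listed:

```python
import math

# Hypothetical toy 1-D dataset: feature values and +/-1 labels.
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [1, 1, 1, -1, -1, 1, -1, -1]

def stump(threshold, polarity):
    """Weak learner: a threshold test on the single feature."""
    return lambda x: polarity if x < threshold else -polarity

def best_stump(weights):
    """Return the stump with the lowest weighted training error."""
    best, best_err = None, float("inf")
    for t in [x + 0.05 for x in X]:      # candidate thresholds between points
        for p in (1, -1):
            h = stump(t, p)
            err = sum(w for w, xi, yi in zip(weights, X, y) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
    return best, best_err

def adaboost(rounds=5):
    n = len(X)
    weights = [1.0 / n] * n
    ensemble = []                        # (alpha, hypothesis) pairs
    for _ in range(rounds):
        h, err = best_stump(weights)
        err = max(err, 1e-10)            # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Re-weight: misclassified points gain weight, correct ones lose it.
        weights = [w * math.exp(-alpha * yi * h(xi))
                   for w, xi, yi in zip(weights, X, y)]
        z = sum(weights)
        weights = [w / z for w in weights]
    # Strong learner: sign of the alpha-weighted vote of all stumps.
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

H = adaboost()
train_acc = sum(H(xi) == yi for xi, yi in zip(X, y)) / len(X)
```

No single stump separates this labeling, but after a few rounds the weighted vote fits it; the re-weighting step is what forces later stumps to concentrate on the points earlier ones got wrong.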

1998
Matthew P. Carter Javed A. Aslam

A weak PAC learner is one which takes labeled training examples and produces a classifier which can label test examples more accurately than random guessing. A strong learner (also known as a PAC learner), on the other hand, is one which takes labeled training examples and produces a classifier which can label test examples arbitrarily accurately. Schapire has constructively proved that a stron...

2001
Samuel Kutin Partha Niyogi

We provide an analysis of AdaBoost within the framework of algorithmic stability. In particular, we show that AdaBoost is a stability-preserving operation: if the “input” (the weak learner) to AdaBoost is stable, then the “output” (the strong learner) is almost-everywhere stable. Because classifier combination schemes such as AdaBoost have the greatest effect when the weak learner is weak, we discus...

2006
Debasis Chakraborty

Recent work on ensemble methods like Adaptive Boosting has been applied successfully to many problems. The AdaBoost algorithm runs a given weak learner several times on slightly altered data and combines the hypotheses in order to achieve higher accuracy than the weak learner. This paper presents an expert system that boosts the performance of an ensemble of classifiers. In Boosting, a s...

Journal: :CoRR 2009
Boris Yangel

An approach to the acceleration of parametric weak classifier boosting is proposed. A weak classifier is called parametric if it has a fixed number of parameters and can therefore be represented as a point in a multidimensional space. A genetic algorithm is used to learn the parameters of such a classifier. The proposed approach also covers cases when an effective algorithm for learning some of the classifier para...

Journal: :I. J. Artificial Intelligence in Education 2007
Yanghee Kim

This study investigated the desirable characteristics of anthropomorphized learning-companion agents for college students. First, interviews with six undergraduates explored their concepts of desirable learning companions. The interviews yielded agent competency, agent personality, and interaction control. Next, a controlled experiment examined whether learner competency (strong vs. weak) would...

Journal: :Journal of Machine Learning Research 2007
Ofer Melnik Yehuda Vardi Cun-Hui Zhang

Rankboost has been shown to be an effective algorithm for combining ranks. However, its ability to generalize well and not overfit is directly related to the choice of weak learner, in the sense that regularization of the rank function is due to the regularization properties of its weak learners. We present a regularization property called consistency in preference and confidence that mathemati...

2006
Hugh A. Chipman Edward I. George Robert E. McCulloch

We develop a Bayesian “sum-of-trees” model, named BART, where each tree is constrained by a prior to be a weak learner. Fitting and inference are accomplished via an iterative backfitting MCMC algorithm. This model is motivated by ensemble methods in general, and boosting algorithms in particular. Like boosting, each weak learner (i.e., each weak tree) contributes a small amount to the overall ...

2004
Nuanwan Soonthornphisaj Boonserm Kijsirikul

This paper presents a semi-supervised learning algorithm called Iterative-Cross Training (ICT) to solve Web page classification problems. We apply Inductive logic programming (ILP) as the strong learner in ICT. The objective of this research is to evaluate the potential of the strong learner to boost the performance of the weak learner of ICT. We compare the result with the supervis...

2012
Chi Zhang

Boosting is a method for improving the accuracy of a given learning algorithm by combining multiple weak learners to “boost” them into a strong learner. The gist of AdaBoost is the assumption that even though a weak learner cannot perform well on all classifications, each one is good at some subset of the given data with a certain bias, so that by assembling many weak learners together, ...
