Search results for: overfitting

Number of results: 4333

2011
Frederic T. Stahl Max Bramer

Ensemble learning techniques generate multiple classifiers, so-called base classifiers, whose combined classification results are used to increase the overall classification accuracy. In most ensemble classifiers the base classifiers are based on the Top-Down Induction of Decision Trees (TDIDT) approach. However, an alternative approach for the induction of rule-based classifiers is th...
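
The combination step described here can be illustrated with a minimal sketch of majority voting over base-classifier outputs (plain NumPy; not the TDIDT or rule-induction method from the paper):

```python
import numpy as np

def majority_vote(predictions):
    """Combine base-classifier outputs (n_classifiers x n_samples) by majority vote."""
    predictions = np.asarray(predictions)
    # For each sample, return the label predicted by most base classifiers.
    return np.array([np.bincount(col).argmax() for col in predictions.T])

# Example: three base classifiers voting on four samples.
preds = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [1, 1, 1, 0]]
print(majority_vote(preds))  # -> [0 1 1 0]
```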

Journal: JCP 2008
Masaki Murata Tomohiro Mitsumori Kouichi Doi

We first analyzed protein names using various dictionaries and databases and found five problems with protein names: the treatment of special characters, the treatment of homonyms, cases where a protein-name string may be a substring of a different protein-name string, cases where one protein exists in different organisms, and the treatment of modifiers. We confirmed that we could use a...

2013
Nitish Srivastava

Improving Neural Networks with Dropout. Master of Science thesis, Graduate Department of Computer Science, University of Toronto, 2013. Deep neural nets with a huge number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining many d...
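
As a rough illustration of the dropout idea, here is a minimal NumPy sketch of the commonly used "inverted dropout" variant (scaling at training time); the thesis describes dropout within full network training, which this does not reproduce:

```python
import numpy as np

def dropout(activations, p_drop=0.5, train=True, rng=np.random.default_rng(0)):
    """Inverted dropout: randomly zero units during training and rescale the
    survivors so the expected activation matches test-time behaviour."""
    if not train or p_drop == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p_drop  # keep each unit with prob 1 - p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones((2, 4))
print(dropout(h, p_drop=0.5))  # roughly half the entries zeroed, the rest scaled to 2.0
```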

2016
Haoqi Fan Yu Zhang Kris M. Kitani

Training a classifier with only a few examples remains a significant barrier when using neural networks with a large number of parameters. Though various specialized network architectures have been proposed for these k-shot learning tasks to avoid overfitting, a question remains: is there a generalizable framework for the k-shot learning problem that can leverage existing deep models as well as...

Journal: CoRR 2013
Muzhou Hou Moon Ho Lee

In this paper, after analyzing the reasons for poor generalization and overfitting in neural networks, we treat some noisy data as singular values at jump discontinuity points of a continuous function. The continuous part can be approximated by the simplest neural networks, which have good generalization performance and an optimal network architecture, using traditional algorithms such as constructi...

Journal: SIAM Journal on Mathematics of Data Science 2022

Deep neural networks generalize well despite being exceedingly overparameterized and trained without explicit regularization. This curious phenomenon has inspired extensive research activity in establishing its statistical principles: Under what conditions is it observed? How do these depend on the data and the training algorithm? When does regularization benefit generalization? While such questions re...

2007
Sam Reid

Several variants of Bayesian Model Averaging (BMA) are described and evaluated on a model library of heterogeneous classifiers, and compared to other classifier combination methods. In particular, embedded cross-validation is investigated as a technique for reducing overfitting in BMA.
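
A minimal sketch of the general flavour, weighting heterogeneous classifiers by cross-validated accuracy (a crude stand-in for posterior model probabilities; not Reid's exact BMA procedure) and averaging their predicted class probabilities:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=3), GaussianNB()]

# Weight each model by its cross-validated accuracy, then normalize the weights.
weights = np.array([cross_val_score(m, X, y, cv=5).mean() for m in models])
weights /= weights.sum()

# Fit each model on the full data and average predicted probabilities with those weights.
for m in models:
    m.fit(X, y)
avg_proba = sum(w * m.predict_proba(X) for w, m in zip(weights, models))
print(avg_proba[:3].round(3))
```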

2016
Marten Postma Filip Ilievski Marieke van Erp

Entities and events in the world have no frequency, but our communication about them and the expressions we use to refer to them do have a strong frequency profile. Language expressions and their meanings follow a Zipfian distribution, featuring a small number of very frequent observations and a very long tail of low-frequency observations. Since our NLP datasets sample texts but do not sample t...
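
The frequency profile mentioned here is easy to see empirically; a small illustrative sketch (unrelated to the paper's data) draws samples from a Zipf distribution and inspects the head and tail:

```python
import numpy as np
from collections import Counter

# Sample 10,000 "mentions" from a Zipf distribution and inspect the profile:
# a handful of very frequent items plus a long tail of items seen only once.
rng = np.random.default_rng(0)
counts = Counter(rng.zipf(a=1.5, size=10_000))
print("most frequent items:", counts.most_common(5))
print("items observed exactly once:", sum(1 for c in counts.values() if c == 1))
```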

2018
T. Poggio K. Kawaguchi Q. Liao B. Miranda L. Rosasco

A main puzzle of deep networks revolves around the apparent absence of overfitting, understood as robustness of the expected error to overparametrization, despite the large capacity demonstrated by zero training error on randomly labeled data. In this note, we show that the dynamics associated with gradient descent minimization of nonlinear networks is topologically equivalent, near the asympto...

Journal: IEEE Transactions on Neural Networks 2000
George N. Karystinos Dimitris A. Pados

An algorithmic procedure is developed for the random expansion of a given training set to combat overfitting and improve the generalization ability of backpropagation trained multilayer perceptrons (MLPs). The training set is K-means clustered and locally most entropic colored Gaussian joint input-output probability density function (pdf) estimates are formed per cluster. The number of clusters...
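
A loose sketch of the general idea, clustering the joint input-output space, fitting a Gaussian per cluster, and sampling synthetic training points from those estimates (plain K-means plus sample covariances; not the paper's entropic coloring procedure):

```python
import numpy as np
from sklearn.cluster import KMeans

def expand_training_set(X, y, n_clusters=3, n_new=120, seed=0):
    """Sample synthetic (x, y) pairs from per-cluster Gaussian estimates
    of the joint input-output density."""
    rng = np.random.default_rng(seed)
    Z = np.hstack([X, y.reshape(-1, 1)])  # joint input-output vectors
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Z)
    new_points = []
    for k in range(n_clusters):
        Zk = Z[labels == k]
        mean = Zk.mean(axis=0)
        cov = np.cov(Zk, rowvar=False) + 1e-6 * np.eye(Z.shape[1])  # regularize for stability
        n_k = max(1, int(round(n_new * len(Zk) / len(Z))))
        new_points.append(rng.multivariate_normal(mean, cov, size=n_k))
    Z_new = np.vstack(new_points)
    return Z_new[:, :-1], Z_new[:, -1]

X = np.random.default_rng(1).normal(size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
X_aug, y_aug = expand_training_set(X, y)
print(X_aug.shape, y_aug.shape)
```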

[Chart: number of search results per year]