Search results for: training iteration

Number of results: 358779

Journal: IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2001
Chi-Hsu Wang, Han-Leih Liu, Chin-Teng Lin

The stability analysis of the learning rate for a two-layer neural network (NN) is discussed first by minimizing the total squared error between the actual and desired outputs for a set of training vectors. The stable and optimal learning rate, in the sense of maximum error reduction, for each iteration in the training (back propagation) process can therefore be found for this two-layer NN. It ...
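
The snippet points to a per-iteration optimal step size. Below is a minimal sketch of that idea for a generic two-layer network: at each back-propagation iteration, the learning rate is chosen to give the largest reduction in total squared error. The grid search is a crude stand-in for the paper's analytic derivation, and all names and sizes are illustrative assumptions.

```python
# Sketch: two-layer NN trained with a per-iteration "optimal" learning
# rate chosen by 1-D search along the negative gradient.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # training vectors
Y = rng.normal(size=(100, 1))            # desired outputs
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def sse(W1, W2):
    H = np.tanh(X @ W1)                  # hidden layer
    return 0.5 * np.sum((H @ W2 - Y) ** 2)

def grads(W1, W2):
    H = np.tanh(X @ W1)
    E = H @ W2 - Y                       # output error
    gW2 = H.T @ E
    gW1 = X.T @ ((E @ W2.T) * (1 - H ** 2))
    return gW1, gW2

for it in range(50):
    gW1, gW2 = grads(W1, W2)
    # pick the candidate rate with maximum error reduction this iteration
    etas = np.logspace(-5, 0, 20)
    losses = [sse(W1 - e * gW1, W2 - e * gW2) for e in etas]
    eta = etas[int(np.argmin(losses))]
    W1, W2 = W1 - eta * gW1, W2 - eta * gW2
```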

2014
Jendrik Seipp, Silvan Sievers, Frank Hutter

Cedalion is our algorithm for automatically configuring sequential planning portfolios. Given a parametrized planner and a set of training instances, it iteratively selects the pair of planner configuration and time slice that improves the current portfolio the most per time spent. At the end of each iteration all instances for which the current portfolio finds the best solution are removed fro...
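
A hypothetical sketch of the greedy loop the abstract describes, with `quality` as an assumed stand-in for evaluating a planner configuration on an instance within a time slice; the scoring is simplified to improvement per second and is not Cedalion's exact criterion.

```python
# Greedy portfolio construction: repeatedly add the (configuration,
# time slice) pair with the best gain per time spent, then drop
# instances the portfolio already solves best.
def build_portfolio(configs, time_slices, instances, quality, budget):
    """quality(config, t, inst) -> solution quality within time t,
    normalized so 1.0 means "best known solution" (an assumption)."""
    portfolio, remaining, used = [], set(instances), 0.0
    while remaining and used < budget:
        def gain(pair):
            c, t = pair
            return sum(quality(c, t, i) for i in remaining) / t
        best = max(((c, t) for c in configs for t in time_slices), key=gain)
        if gain(best) <= 0:
            break
        portfolio.append(best)
        used += best[1]
        # remove instances for which the portfolio finds the best solution
        remaining = {i for i in remaining
                     if not any(quality(c, t, i) >= 1.0 for c, t in portfolio)}
    return portfolio
```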

A. Armand, Z. Gouyandeh

This paper presents a comparison between the variational iteration method (VIM) and the modified variational iteration method (MVIM) for the approximate solution of a system of Volterra integral equations of the first kind. We convert the system of Volterra integral equations to a system of Volterra integro-differential equations and then use VIM and MVIM to approximate the solution of this system, and hence obtain an appr...
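
For context, here is the generic form of the VIM correction functional for an integro-differential equation; this is a textbook sketch, not the paper's specific system:

```latex
% For an integro-differential equation
%   u'(t) = f(t, u(t)) + \int_0^t K(t,s)\, u(s)\, ds,
% VIM builds successive approximations via the correction functional
\[
  u_{n+1}(t) = u_n(t) + \int_0^t \lambda(s)\Big( u_n'(s)
      - f(s, u_n(s)) - \int_0^s K(s,r)\, u_n(r)\, dr \Big)\, ds ,
\]
% where the Lagrange multiplier \lambda is identified via variational theory.
```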

Sh. S. Behzadi, T. Allahviranloo

In this paper, the Kadomtsev-Petviashvili equation is solved by using the Adomian decomposition method, modified Adomian decomposition method, variational iteration method, modified variational iteration method, homotopy perturbation method, modified homotopy perturbation method and homotopy analysis method. The existence and uniqueness of the solution and convergence of the proposed...
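
As background for the first of the listed methods, a generic sketch of the Adomian decomposition recursion in its standard form (not the paper's exact scheme):

```latex
% Write the equation as Lu + Ru + Nu = g, expand u = \sum_{n\ge 0} u_n
% and the nonlinearity Nu = \sum_{n\ge 0} A_n with the Adomian polynomials
\[
  A_n = \frac{1}{n!} \frac{d^n}{d\lambda^n}
        N\!\Big( \sum_{k=0}^{\infty} \lambda^k u_k \Big) \Big|_{\lambda=0},
\]
% and iterate
\[
  u_0 = \Phi + L^{-1} g, \qquad
  u_{n+1} = -L^{-1}\big( R\,u_n + A_n \big),
\]
% where \Phi collects the initial/boundary terms annihilated by L.
```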

2013
Roi Livni, Shai Shalev-Shwartz, Ohad Shamir

We consider deep neural networks, in which the output of each node is a quadratic function of its inputs. Similar to other deep architectures, these networks can compactly represent any function on a finite training set. The main goal of this paper is the derivation of an efficient layer-by-layer algorithm for training such networks, which we denote as the Basis Learner. The algorithm is a univ...
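
A minimal sketch of the architecture the abstract describes, where each node's output is a quadratic function of its inputs (here, the square of an affine map); this illustrates the forward pass only, not the Basis Learner training algorithm, and all sizes are illustrative assumptions.

```python
# Forward pass through a network of quadratic nodes: depth multiplies
# the polynomial degree of the represented function.
import numpy as np

def quadratic_net(X, weights, biases):
    """Each layer applies z -> z**2 to an affine transform of its input."""
    H = X
    for W, b in zip(weights, biases):
        H = (H @ W + b) ** 2          # quadratic node output
    return H

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
weights = [rng.normal(size=(3, 4)), rng.normal(size=(4, 1))]
biases = [np.zeros(4), np.zeros(1)]
print(quadratic_net(X, weights, biases).shape)   # (5, 1)
```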

2010
Rita Singh, Benjamin Lambert, Bhiksha Raj

In unsupervised training of ASR systems, no annotated data are assumed to exist. Word-level annotations for training audio are generated iteratively using an ASR system. At each iteration a subset of data judged as having the most reliable transcriptions is selected to train the next set of acoustic models. Data selection however remains a difficult problem, particularly when the error rate of ...
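
A hypothetical sketch of the iterative loop in the snippet: decode all audio, keep the subset judged most reliable (here, by a confidence score), and retrain on it. `decode`, `confidence`, and `train` are assumed stand-ins passed in by the caller, not a real ASR API.

```python
# Iterative unsupervised training: each round trains the next set of
# acoustic models on the most reliably transcribed subset of the data.
def unsupervised_training(audio, model, decode, confidence, train,
                          iterations=5, keep_frac=0.3):
    for _ in range(iterations):
        hyps = [(utt, decode(model, utt)) for utt in audio]
        # rank hypotheses by estimated transcription reliability
        hyps.sort(key=lambda h: confidence(model, h[0], h[1]), reverse=True)
        reliable = hyps[: int(keep_frac * len(hyps))]
        model = train(reliable)       # next acoustic models
    return model
```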

2016
Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, Tom Goldstein

With the growing importance of large network models and enormous training datasets, GPUs have become increasingly necessary to train neural networks. This is largely because conventional optimization algorithms rely on stochastic gradient methods that don’t scale well to large numbers of cores in a cluster setting. Furthermore, the convergence of all gradient methods, including batch methods, s...
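
An illustrative sketch of the scaling bottleneck the abstract alludes to: synchronous data-parallel SGD needs a full gradient exchange at every step, so each iteration is gated on the slowest worker no matter how many cores are added. This is a toy least-squares example of the baseline being criticized, not the authors' method.

```python
# Synchronous data-parallel SGD: every worker's gradient must arrive
# before any parameter update can happen (the communication barrier).
import numpy as np

def sync_sgd_step(w, shards, grad_fn, lr=0.1):
    grads = [grad_fn(w, shard) for shard in shards]   # one per worker
    return w - lr * np.mean(grads, axis=0)            # all-reduce, then update

rng = np.random.default_rng(0)
shards = [(rng.normal(size=(16, 3)), rng.normal(size=16)) for _ in range(8)]
grad_fn = lambda w, s: s[0].T @ (s[0] @ w - s[1]) / len(s[1])
w = np.zeros(3)
for _ in range(100):
    w = sync_sgd_step(w, shards, grad_fn)
```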

Journal: CoRR, 2016
Anna Khoreva, Rodrigo Benenson, Jan Hendrik Hosang, Matthias Hein, Bernt Schiele

Semantic labelling and instance segmentation are two tasks that require particularly costly annotations. Starting from weak supervision in the form of bounding box detection annotations, we propose to recursively train a convnet such that outputs are improved after each iteration. We explore which aspects affect the recursive training, and which is the most suitable box-guided segmentation to u...
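
A hypothetical sketch of the recursive training loop: start from box-derived segmentation labels, train, then use the network's own refined predictions (constrained to the boxes) as the next round's labels. `train_convnet`, `predict_masks`, and `box_to_seed` are assumed stand-ins, not the authors' code.

```python
# Recursive training from weak box supervision: outputs improve after
# each iteration because predictions feed the next round's labels.
def recursive_training(images, boxes, train_convnet, predict_masks,
                       box_to_seed, rounds=3):
    labels = [box_to_seed(b) for b in boxes]      # weak initial masks
    net = None
    for _ in range(rounds):
        net = train_convnet(images, labels)
        preds = predict_masks(net, images)
        # keep predictions only inside the annotated boxes so the weak
        # supervision keeps constraining each iteration
        labels = [p * box_to_seed(b) for p, b in zip(preds, boxes)]
    return net
```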

2012
Hartmut Neven, Vasil S. Denchev, Geordie Rose, William G. Macready

We introduce a novel discrete optimization method for training in the context of a boosting framework for large scale binary classifiers. The motivation is to cast the training problem into the format required by existing adiabatic quantum hardware. First we provide theoretical arguments concerning the transformation of an originally continuous optimization problem into one with discrete variab...
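
A hedged sketch of the kind of discrete reformulation the snippet points to: choose binary weights w in {0,1}^K over weak classifiers so their vote matches the labels, written as a QUBO (the quadratic-binary format adiabatic quantum hardware accepts). The regularizer and scaling are illustrative assumptions; the brute-force solver is a classical stand-in for the annealer.

```python
# Cast weak-learner selection as a QUBO: minimize ||Hw - y||^2 / n
# plus a sparsity penalty over binary w.
import numpy as np

def boosting_qubo(H, y, lam=0.1):
    """H: (n, K) weak-classifier outputs in {-1,+1}; y: (n,) labels in
    {-1,+1}. Returns Q so the objective is w^T Q w (up to a constant)."""
    n, K = H.shape
    Q = (H.T @ H) / n                        # pairwise correlations
    # linear terms fold into the diagonal since w_k^2 = w_k for binary w
    Q += np.diag(lam - 2.0 * (H.T @ y) / n)
    return Q

def brute_force_solve(Q):
    """Exhaustive minimizer over {0,1}^K, feasible only for small K."""
    K = Q.shape[0]
    best = min((np.array([(i >> k) & 1 for k in range(K)])
                for i in range(2 ** K)),
               key=lambda w: w @ Q @ w)
    return best
```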

Journal: CoRR, 2017
Zhiming Zhou, Shu Rong, Han Cai, Weinan Zhang, Yong Yu, Jun Wang

In this paper, we study the impact and role of multi-class labels on adversarial training for generative adversarial nets (GANs). Our derivation of the gradient shows that the current GAN model with labeled data still results in undesirable properties due to the overlay of the gradients from multiple classes. We thus argue that a better gradient should follow the intensity and direction that ma...
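
A toy numeric illustration of the "overlay" issue the snippet describes: with multiple classes, the gradient on a generated sample is a sum of per-class components, and components pulling toward different class modes can partially cancel. This is purely illustrative, not the paper's derivation.

```python
# Per-class gradient components and their overlaid sum on a toy
# 2-D example with three class modes.
import numpy as np

class_modes = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
x = np.array([2.0, 2.0])                    # a generated sample

# per-class pull toward each mode, weighted by softmax-like class scores
scores = np.exp(-np.linalg.norm(class_modes - x, axis=1))
weights = scores / scores.sum()
per_class_grads = weights[:, None] * (class_modes - x)

overlay = per_class_grads.sum(axis=0)       # the summed (overlaid) gradient
print(per_class_grads)                      # individual directions differ
print(overlay)                              # the overlay can nearly cancel
```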
