Search results for: training iteration

Number of results: 358,779

Journal: J. Electrical and Computer Engineering, 2008
Imed Hadj-Kacem, Noura Sellami, Inbar Fijalkow, Aline Roumy

We consider the problem of optimizing the training sequence length when a turbo-detector composed of a maximum a posteriori (MAP) equalizer and a MAP decoder is used. At each iteration of the receiver, the channel is estimated using the hard decisions on the transmitted symbols at the output of the decoder. The optimal length of the training sequence is found by maximizing an effective sig...

2011
Todd W. Neller, Steven Hnath

Using the bluffing dice game Dudo as a challenge domain, we abstract information sets using imperfect recall of actions. Even with such abstraction, the standard Counterfactual Regret Minimization (CFR) algorithm proves impractical for Dudo, with the number of recursive visits to the same abstracted information sets increasing exponentially with the depth of the game graph. By holding strategie...
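The CFR algorithm the snippet refers to is built on regret matching: each information set plays actions in proportion to their positive cumulative regret. A minimal sketch of that core step, assuming a far simpler setting than Dudo (rock-paper-scissors against a fixed opponent; all names and parameters here are illustrative):

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    # +1 if action a beats b, -1 if it loses, 0 on a tie
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def strategy_from_regrets(regrets):
    # Regret matching: mix actions in proportion to positive cumulative regret
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total <= 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in pos]

def train(iterations, seed=0):
    rng = random.Random(seed)
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    opp_strategy = [0.4, 0.3, 0.3]  # fixed, slightly rock-heavy opponent
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
        my_a = rng.choices(range(ACTIONS), weights=strat)[0]
        opp_a = rng.choices(range(ACTIONS), weights=opp_strategy)[0]
        # Accumulate the regret of not having played each alternative action
        for a in range(ACTIONS):
            regrets[a] += payoff(a, opp_a) - payoff(my_a, opp_a)
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = train(20000)  # average strategy shifts toward paper (beats rock)
```

Against a rock-heavy opponent the average strategy concentrates on paper; full CFR applies this same update at every (possibly abstracted) information set of the game tree.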

2011
Kazuo SATO, Makoto TADENUMA

Sensory test data for sake were analyzed with a back-propagation neural network, where the input data were trained to output sake categories. To optimize the training conditions, the effects of the number of training iterations and the number of hidden-layer units were examined. The accuracy of discrimination by the neural network was better than that of discriminant analysis under th...
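The experiment shape described here, training a one-hidden-layer back-propagation network while varying the iteration count, can be sketched minimally. This uses XOR as a stand-in for the sake sensory data; the network size, learning rate, and iteration count are illustrative, not the paper's:

```python
import math
import random

def train_mlp(n_hidden, n_iter, seed=0, lr=0.5):
    # Toy data set (XOR) standing in for the sensory measurements
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    Y = [0, 1, 1, 0]
    rng = random.Random(seed)
    # One hidden layer of n_hidden sigmoid units, one sigmoid output
    W1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    W2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    losses = []
    for _ in range(n_iter):
        total = 0.0
        for x, y in zip(X, Y):
            # Forward pass
            h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
                 for j in range(n_hidden)]
            o = sig(sum(w * hj for w, hj in zip(W2, h)) + b2)
            total += (o - y) ** 2
            # Backward pass: squared error with sigmoid units
            do = 2 * (o - y) * o * (1 - o)
            for j in range(n_hidden):
                dh = do * W2[j] * h[j] * (1 - h[j])
                W2[j] -= lr * do * h[j]
                for i in range(2):
                    W1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
            b2 -= lr * do
        losses.append(total)
    return losses

losses = train_mlp(n_hidden=4, n_iter=2000)  # loss decreases over iterations
```

Scanning `n_hidden` and `n_iter` and comparing the resulting training/validation error is exactly the tuning the abstract describes.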

2007
Tong Srikhacha, Phayung Meesad

* Department of Information Technology, Faculty of Information Technology, KMITNB ** Department of Teacher Training in Electrical Engineering, Faculty of Technical Education, KMITNB ABSTRACT In the general case, the stock pricing pattern resembles a noisy pattern with a slowly changing curve. Global prediction techniques such as support vectors (SV) show good enveloped prediction patterns, but they...

2006
HONGMEI SHAO, WEI WU, LIJUN LIU

The online gradient algorithm has been widely used for training feedforward neural networks. A penalty term is a common and popular method for improving the generalization performance of networks. In this paper, a convergence theorem is proved for the online gradient learning algorithm with a penalty, a term proportional to the magnitude of the weights. The monotonicity of the error ...
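The online gradient algorithm with a weight penalty can be sketched compactly. This is a minimal illustration on a linear model rather than a multilayer network, with an L2 penalty `lam * w**2` added to the per-sample squared error; all hyperparameters are illustrative:

```python
import random

def online_gradient_penalty(data, epochs=50, lr=0.05, lam=0.01, seed=0):
    # Online (sample-by-sample) gradient descent on squared error plus an
    # L2 penalty term lam * w^2, which keeps the weight magnitude bounded.
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            err = (w * x + b) - y
            # gradient of err^2 + lam * w^2 with respect to w and b
            w -= lr * (2 * err * x + 2 * lam * w)
            b -= lr * (2 * err)
    return w, b

# Fit y = 2x + 1 from noiseless samples; the penalty slightly shrinks w
data = [(k / 10, 2 * (k / 10) + 1) for k in range(-10, 11)]
w, b = online_gradient_penalty(data[:])
```

With a small `lam` the learned weight sits just below the true slope of 2, which is the bias/generalization trade-off the penalty introduces.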

2010
Jason T. Rolfe, Matthew Cook

Factor graphs allow large probability distributions to be stored efficiently and facilitate fast computation of marginal probabilities, but the difficulty of training them has limited their use. Given a large set of data points, the training process should yield factors for which the observed data has a high likelihood. We present a factor graph learning algorithm which on each iteration merges...

Journal: CoRR, 2015
Sebastian Urban, Patrick van der Smagt

Existing approaches to combine both additive and multiplicative neural units either use a fixed assignment of operations or require discrete optimization to determine what function a neuron should perform. This leads either to an inefficient distribution of computational resources or an extensive increase in the computational complexity of the training procedure. We present a novel, parameteriz...

2007
Vikas C. Raykar, Ramani Duraiswami, Balaji Krishnapuram

We consider the problem of learning the ranking function that maximizes a generalization of the Wilcoxon-Mann-Whitney statistic on training data. Relying on an ε-exact approximation for the error function, we reduce the computational complexity of each iteration of a conjugate gradient algorithm for learning ranking functions from O(m²) to O(m), where m is the size of the training data. Experi...
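The quadratic cost the abstract attacks comes from the pairwise definition of the Wilcoxon-Mann-Whitney statistic. A naive sketch of that computation (illustrative only; the paper's contribution is precisely an approximation that avoids this double loop):

```python
def wmw_statistic(pos_scores, neg_scores):
    # Naive Wilcoxon-Mann-Whitney statistic: the fraction of
    # (positive, negative) pairs the scores rank correctly, with ties
    # counted as half. The double loop is O(m_pos * m_neg) -- the O(m^2)
    # per-iteration bottleneck the abstract reduces to O(m).
    correct = 0.0
    for sp in pos_scores:
        for sn in neg_scores:
            if sp > sn:
                correct += 1.0
            elif sp == sn:
                correct += 0.5
    return correct / (len(pos_scores) * len(neg_scores))

auc = wmw_statistic([0.9, 0.8, 0.4], [0.7, 0.3])  # 5 of 6 pairs correct
```

This statistic equals the area under the ROC curve, which is why maximizing it directly is attractive for learning ranking functions.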

2006
Wei Wu, Hongmei Shao, Zhengxue Li

Penalty methods have been commonly used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. Weight boundedness and convergence results are presented for the batch BP algorithm with penalty for training feedforward neural networks with a hidden layer. A key point of the proofs is the monotonicity of the error function with...

2008
Jack Breese

In modern computing, there are several approaches to pattern recognition and object classification. As computational power has increased, artificial neural networks have become ever more popular and prevalent in this regard. Over the course of the year, I implemented a general-purpose library for artificial neural networks in the C programming language. It includes a variety of functions and da...

Chart: number of search results per year
