Search results for: minimal learning parameters algorithm

Number of results: 1881512

Journal: Soft Comput., 2012
Jun-Hai Zhai, Hong-Yu Xu, Xizhao Wang

Extreme learning machine (ELM) has been proposed as a new learning algorithm for single-hidden-layer feed-forward neural networks. By randomly selecting the input weights and hidden-layer biases, ELM can overcome many drawbacks of traditional gradient-based learning algorithms, such as local minima, an improperly chosen learning rate, and slow learning speed. However, ELM suffers from instability and over-fitting ...
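
As a point of reference for the ELM training step sketched in this abstract (random input weights and biases, output weights obtained in closed form), a minimal NumPy sketch might look like the following; the activation function, variable names, and regression setting are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    """Fit a single-hidden-layer ELM: random hidden layer, analytic output weights."""
    rng = rng or np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (not trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases (not trained)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via the Moore-Penrose pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```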

Journal: Evolving Systems, 2010
José de Jesús Rubio, Diana M. Vázquez, Jaime Pacheco

In this paper, a stable backpropagation algorithm is used to train an online evolving radial basis function neural network. Structure learning and parameter learning are performed at the same time in our algorithm; we do not distinguish between structure learning and parameter learning. The algorithm generates groups with an online clustering. The centers are updated so that each center stays close to the incoming data ...
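
The abstract's idea of doing structure and parameter learning in one pass can be illustrated with a rough online-clustering sketch: a novel sample adds a new center, otherwise the nearest center is nudged toward the incoming data. The threshold and learning rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def online_rbf_update(centers, x, add_threshold=1.0, lr=0.1):
    """One online step: add a center for novel data, otherwise move the nearest center toward x."""
    if len(centers) == 0:
        return [x.copy()]
    dists = [np.linalg.norm(x - c) for c in centers]
    i = int(np.argmin(dists))
    if dists[i] > add_threshold:                      # structure learning: grow the network
        centers.append(x.copy())
    else:                                             # parameter learning: keep the center near the data
        centers[i] = centers[i] + lr * (x - centers[i])
    return centers
```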

Journal: Journal of Automata, Languages and Combinatorics, 2007
Frank Drewes, Heiko Vogler

We devise a learning algorithm for deterministically recognizable tree series where the weights are taken from a commutative group. For this, we use an adaptation of the minimal adequate teacher model that was originally introduced by Angluin. The algorithm runs in polynomial time and constructs the unique minimal deterministic bottom-up finite state weighted tree automaton that recognizes the ...

1994
Takeshi Shinohara, Setsuko Otsuki, Hiroki Ishizaka

In this chapter, we present a polynomial time algorithm, called a k-minimal multiple generalization (k-mmg) algorithm, where k ≥ 1, and its application to inductive learning problems. The algorithm is a natural extension of the least general generalization algorithm developed by Plotkin and Reynolds. Given a finite set of ground first-order terms, the k-mmg algorithm generalizes the examples by at most ...
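
The k-mmg procedure itself is not reproduced here, but the Plotkin/Reynolds least general generalization it extends can be sketched as follows, under an assumed tuple encoding of ground first-order terms; mismatching subterm pairs are replaced by variables, reusing one variable per pair.

```python
from itertools import count

def lgg(t1, t2, table=None, fresh=None):
    """Least general generalization of two ground first-order terms.

    Terms are tuples such as ('f', ('a',), ('g', ('b',))); a term with no
    arguments is a constant.  Mismatching subterm pairs are replaced by
    variables, reusing the same variable for the same pair.
    """
    if table is None:
        table, fresh = {}, count()
    if t1[0] == t2[0] and len(t1) == len(t2):          # same function symbol and arity: recurse
        return (t1[0],) + tuple(lgg(a, b, table, fresh) for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in table:                          # otherwise generalize to a shared variable
        table[(t1, t2)] = ('X%d' % next(fresh),)
    return table[(t1, t2)]

# Example: lgg(f(a, g(a)), f(b, g(b))) yields f(X0, g(X0))
print(lgg(('f', ('a',), ('g', ('a',))), ('f', ('b',), ('g', ('b',)))))
```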

In this paper, a new algorithm, which is the result of combining cellular learning automata and the shuffled frog leaping algorithm (SFLA), is proposed for optimization in continuous, static environments. In the proposed algorithm, each memeplex of frogs is placed in a cell of the cellular learning automata. The learning automaton in each cell acts as the brain of the memeplex and determines the strategy of motion ...
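
The hybrid with cellular learning automata is not shown here, but the basic SFLA memeplex step it builds on can be roughly sketched as below: the worst frog leaps toward the memeplex best, then toward the global best, and is finally repositioned at random. The search bounds and the minimization convention are assumptions.

```python
import numpy as np

def sfla_memeplex_step(frogs, fitness, global_best, rng=None):
    """One basic SFLA local step on a memeplex (lower fitness assumed better)."""
    rng = rng or np.random.default_rng(0)
    scores = np.array([fitness(f) for f in frogs])
    worst, best = int(np.argmax(scores)), int(np.argmin(scores))
    for target in (frogs[best], global_best):          # leap toward memeplex best, then global best
        candidate = frogs[worst] + rng.random() * (target - frogs[worst])
        if fitness(candidate) < scores[worst]:
            frogs[worst] = candidate
            return frogs
    frogs[worst] = rng.uniform(-5.0, 5.0, size=frogs[worst].shape)  # random reset within assumed bounds
    return frogs
```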

Journal: CoRR, 2013
Kyongche Kang, Jack Michalak

Machine Learning focuses on the construction and study of systems that can learn from data. This is connected with the classification problem, which is usually what machine learning algorithms are designed to solve. When a machine learning method is used by people with no special expertise in machine learning, it is important that the method be ‘robust’ in classification, in the sense that reasonable ...

Journal: Iranian Journal of Numerical Analysis and Optimization
Maryam Mojarrab, Faezeh Toutounian

LSMR (least squares minimal residual) is an iterative method for the solution of linear systems of equations and least-squares problems. This paper presents a block version of the LSMR algorithm for solving linear systems with multiple right-hand sides. The new algorithm is based on the block bidiagonalization and is derived by minimizing the Frobenius norm of the residual matrix of the normal equations ...
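
The block algorithm itself is not reproduced here; as a baseline for comparison, solving a multiple right-hand-side problem column by column with SciPy's standard lsmr (which the block version is designed to improve upon) looks like this:

```python
import numpy as np
from scipy.sparse.linalg import lsmr

def lsmr_columnwise(A, B):
    """Baseline: minimize ||A X - B||_F one right-hand side at a time with standard LSMR.
    (The block LSMR of the paper processes all columns of B together.)"""
    X = np.zeros((A.shape[1], B.shape[1]))
    for j in range(B.shape[1]):
        X[:, j] = lsmr(A, B[:, j])[0]
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
B = rng.standard_normal((100, 5))
X = lsmr_columnwise(A, B)
print(np.linalg.norm(A.T @ (A @ X - B)))   # residual of the normal equations
```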

2015
Francesco Kriegel

Formal Concept Analysis and its methods for computing minimal implicational bases have been successfully applied to axiomatise minimal EL TBoxes from models, the so-called bases of GCIs. However, no technique for adjusting an existing EL TBox w.r.t. a new model is available, i.e., on a model change the complete TBox has to be recomputed. This document proposes a method for the computation of ...

Rapid prototyping (RP) methods are used to produce a scale model of a physical part or assembly easily and quickly. Gas metal arc welding (GMAW) is a widespread process used for rapid prototyping of metallic parts. In this process, in order to obtain the desired weld geometry, it is very important to predict the weld bead geometry from the input process parameters, which are voltage...
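
A hedged illustration of this prediction task: a small scikit-learn regressor mapping assumed process parameters (voltage, wire feed rate, travel speed) to assumed bead geometry outputs (width, height). The parameter set, model choice, and numbers below are placeholders, not the paper's data or method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Inputs: voltage, wire feed rate, travel speed.  Outputs: bead width, bead height.
# All values are placeholders for illustration, not measured data.
X = np.array([[20.0, 5.0, 30.0], [22.0, 6.0, 28.0], [24.0, 7.0, 26.0], [26.0, 8.0, 24.0]])
y = np.array([[5.1, 2.0], [5.6, 2.2], [6.2, 2.5], [6.9, 2.8]])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), solver='lbfgs',
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[23.0, 6.5, 27.0]]))   # predicted bead width and height
```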

2010
Hannes Schulz, Andreas Müller, Sven Behnke

Restricted Boltzmann Machines are increasingly popular tools for unsupervised learning. They are very general, can cope with missing data, and are used to pretrain deep learning machines. RBMs learn a generative model of the data distribution. As exact gradient ascent on the data likelihood is infeasible, typically Markov Chain Monte Carlo approximations to the gradient such as Contrastive Divergence ...
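
A minimal NumPy sketch of one CD-1 update for a binary RBM, assuming mini-batch inputs in rows; the learning rate and parameter shapes are illustrative, and this is the generic Contrastive Divergence rule rather than anything specific to this paper.

```python
import numpy as np

def cd1_step(v0, W, b_v, b_h, lr=0.01, rng=None):
    """One Contrastive Divergence (CD-1) update for a binary RBM on a batch v0."""
    rng = rng or np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    ph0 = sigmoid(v0 @ W + b_h)                       # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b_v)                     # reconstruct visible units
    ph1 = sigmoid(pv1 @ W + b_h)                      # hidden probabilities of the reconstruction
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n          # approximate log-likelihood gradient
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
    return W, b_v, b_h
```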
