Back Propagation Learning Using a Super Stable Motion

Authors
Abstract


Related articles

Learning sets of filters using back-propagation

A learning procedure, called back-propagation, for layered networks of deterministic, neuron-like units has been described previously. The ability of the procedure automatically to discover useful internal representations makes it a powerful tool for attacking difficult problems like speech recognition. This paper describes further research on the learning procedure and presents an example in w...
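The layered-network learning procedure described above can be sketched in a few lines. This is a minimal illustration trained on XOR as a toy task; the network size, learning rate, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, the classic problem needing a hidden layer.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer of sigmoid units (sizes are illustrative).
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 1.0, []
for _ in range(5000):
    # Forward pass through the two layers of deterministic, neuron-like units.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the error derivative through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

After training, the squared error is far below its initial value, showing the hidden units have discovered a useful internal representation of the inputs.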


Using PHiPAC to speed error back-propagation learning

Signal processing algorithms such as neural network learning, convolution, cross-correlation, IIR filtering, etc., can be computationally time-consuming and are often used in time-critical applications. This makes it desirable to achieve high efficiency on these routines. Such algorithms are often coded in assembly language to achieve optimal speed, but it is then difficult to make a full exploration ...


A super-stable motion with infinite mean branching

A class of finite measure-valued càdlàg superprocesses X with Neveu’s (1992) continuous-state branching mechanism is constructed. To this end, we start from certain supercritical (α, d, β)-superprocesses X(β) with symmetric α-stable motion and (1+β)-branching and prove convergence on path space as β ↓ 0. The log-Laplace equation related to X has the locally non-Lipschitz function u log u as non-l...
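The limit behind the convergence mentioned in the abstract can be sketched as follows (a standard expansion, not quoted from the paper): since $u^{1+\beta} = u\,e^{\beta \log u} = u\,(1 + \beta \log u + O(\beta^2))$ for $u > 0$, the rescaled (1+β)-branching mechanism satisfies

$$\lim_{\beta \downarrow 0} \frac{u^{1+\beta} - u}{\beta} = u \log u,$$

which is how the u log u nonlinearity of Neveu’s branching mechanism emerges in the β ↓ 0 limit.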


Scaling Relationships in Back-propagation Learning

Abstract. We present an empirical study of the required training time for neural networks to learn to compute the parity function using the back-propagation learning algorithm, as a function of the number of inputs. The parity function is a Boolean predicate whose order is equal to the number of inputs. We find that the training time behaves roughly as 4^n, where n is the num ...
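For concreteness, the parity predicate the abstract studies can be written as a one-liner (an illustration, not code from the paper):

```python
from functools import reduce
from operator import xor

def parity(bits):
    """Parity of a bit vector: 1 iff an odd number of the inputs are 1."""
    return reduce(xor, bits, 0)

result = parity([1, 0, 1, 1])
```

Because flipping any single input flips the output, parity depends on all n inputs at once, which is what makes it a hard, high-order target for gradient-based learning.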


Experiments on Learning by Back Propagation

Rumelhart, Hinton and Williams [Rumelhart et al. 86] describe a learning procedure for layered networks of deterministic, neuron-like units. This paper describes further research on the learning procedure. We start by describing the units, the way they are connected, the learning procedure, and the extension to iterative nets. We then give an example in which a network learns a set of filters t...



Journal

Journal title: Progress of Theoretical Physics

Year: 1995

ISSN: 0033-068X

DOI: 10.1143/ptp.93.845