Search results for: recurrent neural network

Number of results: 942,527

2007
Anton Maximilian Schäfer Steffen Udluft Hans-Georg Zimmermann

This paper presents our Recurrent Control Neural Network (RCNN), a model-based approach for data-efficient modelling and control of reinforcement learning problems in discrete time. Its architecture is based on a recurrent neural network (RNN), which is extended by an additional control network. The latter has the particular task of learning the optimal policy. This method has the advan...

Journal: Neural Parallel & Scientific Comp. 1999
Gürsel Serpen

A procedure that defines values of constraint weight parameters of single-layer relaxation-type recurrent neural networks for establishing stability of all solutions for an optimization problem is introduced. Application to the Traveling Salesman optimization problem, using the discrete dynamics Hopfield network as the recurrent neural network algorithm, is shown to illustrate the procedure. Si...

2016
Yuan Gao Dorota Glowacka

This paper explores the possibility of using multiplicative gates to build two recurrent neural network structures. These two structures are called Deep Simple Gated Unit (DSGU) and Simple Gated Unit (SGU), which are structures for learning long-term dependencies. Compared to the traditional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), both structures require fewer parameters and le...
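
The core idea behind such gated units can be illustrated with a minimal scalar sketch. The equations below are a generic multiplicative-gate recurrence for illustration, not the paper's exact SGU/DSGU formulation; the weight names `w_g`, `u_g`, `w_c`, `u_c` are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gated_step(x, h_prev, w_g, u_g, w_c, u_c):
    """One step of a generic multiplicative-gate recurrence (scalar sketch):
    a sigmoid gate g interpolates between the previous hidden state and a
    tanh candidate state, which is what lets the unit retain information
    over long time spans."""
    g = sigmoid(w_g * x + u_g * h_prev)   # gate, in (0, 1)
    c = math.tanh(w_c * x + u_c * h_prev) # candidate new state
    return g * c + (1.0 - g) * h_prev     # multiplicative mixing

h = gated_step(1.0, 0.0, w_g=1.0, u_g=1.0, w_c=1.0, u_c=1.0)
```

When the gate saturates near 0 the unit simply copies its previous state forward, which is the mechanism behind learning long-term dependencies with fewer parameters than an LSTM.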

2017
Viacheslav Khomenko Oleg Shyshkov Olga Radyvonenko Kostiantyn Bokhan

An efficient algorithm for recurrent neural network training is presented. The approach increases the training speed for tasks where the length of the input sequence may vary significantly. The proposed approach is based on optimal batch bucketing by input sequence length and data parallelization on multiple graphical processing units. The baseline training performance without sequence bucket...
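
Sequence bucketing itself is straightforward; a minimal sketch (not the authors' code) groups sequences of equal length together so that batches need little or no padding:

```python
from collections import defaultdict

def bucket_by_length(sequences, batch_size):
    """Group sequences into buckets of equal length, then cut each bucket
    into batches. Because every sequence in a batch has the same length,
    no computation is wasted on padding tokens."""
    buckets = defaultdict(list)
    for seq in sequences:
        buckets[len(seq)].append(seq)
    batches = []
    for length in sorted(buckets):
        bucket = buckets[length]
        for i in range(0, len(bucket), batch_size):
            batches.append(bucket[i:i + batch_size])
    return batches

seqs = [[1], [2, 3], [4, 5], [6], [7, 8, 9]]
batches = bucket_by_length(seqs, batch_size=2)
# every batch now contains sequences of a single length
```

In practice the bucket boundaries are chosen to balance padding waste against batch-size uniformity; the sketch above uses exact-length buckets for clarity.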

2014
Martin Sundermeyer Tamer Alkhouli Joern Wuebker Hermann Ney

This work presents two different translation models using recurrent neural networks. The first one is a word-based approach using word alignments. Second, we present phrase-based translation models that are more consistent with phrase-based decoding. Moreover, we introduce bidirectional recurrent neural models to the problem of machine translation, allowing us to use the full source sentence in ...
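
The bidirectional idea is simply to run a recurrent step over the sequence in both directions and pair the states, so each position conditions on the full source sentence. The sketch below is an illustration of that idea under a toy step function, not the paper's model:

```python
def bidirectional_states(xs, step, h0=0.0):
    """Run a recurrent step function over a sequence forwards and backwards
    and pair the resulting states, so each position sees both its left and
    right context."""
    fwd, h = [], h0
    for x in xs:                 # left-to-right pass
        h = step(x, h)
        fwd.append(h)
    bwd, h = [], h0
    for x in reversed(xs):       # right-to-left pass
        h = step(x, h)
        bwd.append(h)
    bwd.reverse()                # realign with sequence order
    return list(zip(fwd, bwd))

step = lambda x, h: 0.5 * h + x  # toy linear recurrence for illustration
states = bidirectional_states([1.0, 2.0, 3.0], step)
```

In a real model the forward and backward states are vectors produced by trained RNN cells and are typically concatenated before being fed to the decoder.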

2011
Hans-Georg Zimmermann Alexey Minin Victoria Kusherbaeva

Recurrent neural networks have been in the scope of the machine learning community for many years. In the current paper we discuss the Historical Consistent Recurrent Neural Network and its extension to the complex-valued case. We give some insights into complex-valued backpropagation and its application to complex-valued recurrent neural network training. Finally we present the results for the ...

2018
Yuanhang Su Yuzhong Huang C.-C. Jay Kuo

In this work, we investigate the memory capability of recurrent neural networks (RNNs), where this capability is defined as a function that maps an element in a sequence to the current output. We first analyze the system function of a recurrent neural network (RNN) cell, and provide analytical results for three RNNs. They are the simple recurrent neural network (SRN), the long short-term memory...
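
The simplest of the three cells, the SRN (Elman) cell, makes the notion of "memory as a function of sequence position" easy to see: feed an impulse followed by zeros and watch the hidden state decay. The scalar sketch below is illustrative (real cells use weight matrices), and the weight values are arbitrary:

```python
import math

def srn_step(x, h_prev, w_in, w_rec, b):
    """One step of a simple (Elman) recurrent cell, scalar form:
    h_t = tanh(w_in * x_t + w_rec * h_{t-1} + b)."""
    return math.tanh(w_in * x + w_rec * h_prev + b)

# Impulse response: a single nonzero input, then zeros. The trace shows
# the cell's fading memory of that input at each later position.
h, trace = 0.0, []
for x in [1.0, 0.0, 0.0, 0.0]:
    h = srn_step(x, h, w_in=1.0, w_rec=0.5, b=0.0)
    trace.append(h)
```

With a recurrent weight below 1 in magnitude the response shrinks geometrically, which is the analytical reason an SRN's memory of an element fades with distance, in contrast to gated cells that can hold state.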

2016
Markus Nußbaum-Thom Jia Cui Bhuvana Ramabhadran Vaibhava Goel

Convolutional and bidirectional recurrent neural networks have achieved considerable performance gains as acoustic models in automatic speech recognition in recent years. The latest architectures unify long short-term memory, gated recurrent unit, and convolutional neural networks by stacking these different neural network types on each other, and providing short and long-term features to different ...

Journal: Iranian Journal of Chemistry and Chemical Engineering (IJCCE) 2009
Rabeheh Bahreini Ramin Bozorgmehry Boozarjomehry

An adaptive input-output linearization method for general nonlinear systems is developed without using the states of the system. Another key feature of this structure is that it does not need a model of the system. In this scheme, the neurolinearizer has few weights, so it is practical in adaptive situations. Online training of the neurolinearizer is compared to model-predictive recurrent training...

2000
Yi Zhang Pheng-Ann Heng Ping-Fu Fung

This paper proposes a discrete recurrent neural network model to implement the winner-take-all function. This network model has a simple organization and clear dynamic behaviour. The dynamic properties of the proposed winner-take-all networks are studied in detail. Simulation results are given to show network performance. Since the network model is formulated as a discrete-time system, it has advan...
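
A winner-take-all dynamic of this kind can be sketched with a synchronous discrete-time update in which each unit inhibits the others; the update rule and the inhibition constant below are hypothetical illustrations of the general mechanism, not the paper's model:

```python
def wta_step(a, inhibition=0.2):
    """One synchronous update of a discrete-time winner-take-all sketch:
    each unit keeps its own activation but is inhibited in proportion to
    the total activation of the other units, clipped at zero."""
    total = sum(a)
    return [max(0.0, x - inhibition * (total - x)) for x in a]

a = [0.9, 0.5, 0.3]
for _ in range(30):
    a = wta_step(a)
# after iterating, only the unit with the largest initial
# activation remains positive
```

The weaker units receive more inhibition than they can sustain and are driven to zero, after which the surviving unit's update becomes a fixed point, which is the stability property such analyses aim to establish.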

Chart of the number of search results per year

Click on the chart to filter the results by publication year