Search results for: recurrent neural net

Number of results: 508769

Journal: IEEE Transactions on Computational Imaging, 2022

Existing deep compressive sensing (CS) methods either ignore adaptive online optimization or depend on a costly iterative optimizer during reconstruction. This work explores a novel image CS framework with a recurrent-residual structural constraint, termed $\mathrm{R}^{2}$CS-NET. The $\mathrm{R}^{2}$...
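
As background for this snippet, here is a minimal block-based compressive sensing setup in Python: random Gaussian measurements of an image block and a least-squares initial reconstruction. This is generic CS machinery, not the paper's $\mathrm{R}^{2}$CS-NET; the block size and sampling ratio are assumptions.

```python
# Minimal block-based compressive sensing: y = Phi x, then a least-squares
# initial reconstruction. Generic background, not the paper's R^2CS-NET.
import numpy as np

rng = np.random.default_rng(0)
block = rng.random((16, 16))            # a 16x16 image block (hypothetical input)
x = block.reshape(-1)                   # flatten to a 256-dim signal

ratio = 0.25                            # sampling ratio (assumed)
m = int(ratio * x.size)                 # number of measurements
phi = rng.standard_normal((m, x.size)) / np.sqrt(m)  # random Gaussian sensing matrix

y = phi @ x                             # compressed measurements
x0 = np.linalg.pinv(phi) @ y            # min-norm least-squares reconstruction
print(x0.shape, np.linalg.norm(phi @ x0 - y))  # residual is near zero
```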

2010
Martin Wöllmer, Yang Sun, Florian Eyben, Björn W. Schuller

In this paper we introduce a novel hybrid model architecture for speech recognition and investigate its noise robustness on the Aurora 2 database. Our model is composed of a bidirectional Long Short-Term Memory (BLSTM) recurrent neural net exploiting long-range context information for phoneme prediction and a Dynamic Bayesian Network (DBN) for decoding. The DBN is able to learn pronunciation va...
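
A minimal sketch of the BLSTM half of this hybrid, producing framewise phoneme posteriors in PyTorch; the feature dimension, hidden size, and phoneme inventory are placeholder assumptions, not the paper's Aurora 2 configuration.

```python
# Bidirectional LSTM for framewise phoneme prediction (sketch, assumed sizes).
import torch
import torch.nn as nn

class FramewiseBLSTM(nn.Module):
    def __init__(self, n_features=39, n_hidden=128, n_phonemes=41):
        super().__init__()
        self.blstm = nn.LSTM(n_features, n_hidden, batch_first=True,
                             bidirectional=True)
        self.out = nn.Linear(2 * n_hidden, n_phonemes)  # forward + backward states

    def forward(self, x):               # x: (batch, frames, features)
        h, _ = self.blstm(x)            # h: (batch, frames, 2 * n_hidden)
        return self.out(h)              # per-frame phoneme logits

model = FramewiseBLSTM()
logits = model(torch.randn(2, 100, 39))  # two utterances of 100 frames
print(logits.shape)                      # torch.Size([2, 100, 41])
```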

Journal: CoRR, 2016
Gang Chen

We describe recurrent neural networks (RNNs), which have attracted great attention on sequential tasks such as handwriting recognition, speech recognition, and image-to-text. However, compared to general feedforward neural networks, RNNs have feedback loops, which make the backpropagation step somewhat harder to understand. Thus, we focus on the basics, especially the error backpropagation to com...
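
To make the feedback-loop bookkeeping concrete, here is a minimal vanilla RNN with backpropagation through time (BPTT) in NumPy; the dimensions and the squared-error loss on the final hidden state are illustrative choices.

```python
# Vanilla RNN forward pass plus BPTT: the gradient flows through both the input
# edge (Wx x_t) and the recurrent edge (Wh h_{t-1}) at every step.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, T = 3, 5, 4
Wx = rng.standard_normal((n_hid, n_in)) * 0.1   # input weights
Wh = rng.standard_normal((n_hid, n_hid)) * 0.1  # recurrent weights
xs = rng.standard_normal((T, n_in))
target = rng.standard_normal(n_hid)

# Forward: h_t = tanh(Wx x_t + Wh h_{t-1})
hs = [np.zeros(n_hid)]
for t in range(T):
    hs.append(np.tanh(Wx @ xs[t] + Wh @ hs[-1]))
loss = 0.5 * np.sum((hs[-1] - target) ** 2)

# Backward through time
dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
dh = hs[-1] - target                    # dL/dh_T
for t in reversed(range(T)):
    dz = dh * (1 - hs[t + 1] ** 2)      # through tanh: dL/dz_t
    dWx += np.outer(dz, xs[t])
    dWh += np.outer(dz, hs[t])          # uses the previous hidden state
    dh = Wh.T @ dz                      # pass gradient back to h_{t-1}
print(loss, np.linalg.norm(dWh))
```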

2017
Prasad Kawthekar, Raunaq Rewari, Suvrat Bhooshan

Generating human-quality text is a challenging problem because of the ambiguity of meaning and the difficulty of modeling long-term semantic connections. Recurrent Neural Networks (RNNs) have shown promising results in this problem domain, with the most common training approach being to maximize the log predictive likelihood of each true token in the training sequence given the previously observ...
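
A minimal sketch of the training objective this snippet describes: teacher-forced maximum likelihood, where the loss is the negative log-probability of each true token given its prefix. The GRU-based model and all sizes are placeholder assumptions.

```python
# Teacher-forced maximum-likelihood training of an RNN language model (sketch).
import torch
import torch.nn as nn

vocab, emb, hid = 1000, 64, 128
embed = nn.Embedding(vocab, emb)
rnn = nn.GRU(emb, hid, batch_first=True)
head = nn.Linear(hid, vocab)

tokens = torch.randint(0, vocab, (8, 20))        # a batch of token sequences
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict each next token
h, _ = rnn(embed(inputs))
logits = head(h)                                 # (batch, steps, vocab)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab),
                                   targets.reshape(-1))
loss.backward()                                  # gradient ascent on log-likelihood
print(float(loss))
```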

2004
Alex Graves, Nicole Beringer, Jürgen Schmidhuber

In this paper we demonstrate that Long Short-Term Memory (LSTM) is a differentiable recurrent neural net (RNN) capable of robustly categorizing timewarped speech data. We measure its performance on a spoken digit identification task, where the data was spike-encoded in such a way that classifying the utterances became a difficult challenge in non-linear timewarping. We find that LSTM gives grea...
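
A minimal LSTM sequence classifier of the kind evaluated here: the final hidden state summarizes an utterance and is mapped to one of ten digit classes. The feature and hidden sizes are assumptions, and the spike encoding is omitted.

```python
# LSTM utterance classifier: final hidden state -> digit logits (sketch).
import torch
import torch.nn as nn

class DigitLSTM(nn.Module):
    def __init__(self, n_features=26, n_hidden=64, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_classes)

    def forward(self, x):                 # x: (batch, frames, features)
        _, (h, _) = self.lstm(x)          # h: (1, batch, n_hidden), final state
        return self.out(h.squeeze(0))     # per-utterance digit logits

model = DigitLSTM()
print(model(torch.randn(4, 80, 26)).shape)  # torch.Size([4, 10])
```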

2010
André Frank Krause, Volker Dürr, Bettina Bläsing, Thomas Schack

Echo State Networks are a special class of recurrent neural networks that are well suited for attractor-based learning of motor patterns. Using structural multiobjective optimization, the trade-off between network size and accuracy can be identified. This makes it possible to choose a feasible model capacity for a follow-up full-weight optimization. It is shown to produce small and efficient networks, tha...
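
A minimal echo state network sketch: a fixed random reservoir driven by the input, with only the linear readout trained, here by ridge regression. The reservoir size, spectral radius, and toy task are illustrative, not the paper's optimized values.

```python
# Echo state network: fixed random reservoir, trained linear readout (sketch).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 100, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

u = np.sin(np.linspace(0, 20 * np.pi, T))[:, None]  # toy input signal
y = np.roll(u, -5)                                  # task: predict 5 steps ahead

X = np.zeros((T, n_res))                # collect reservoir states
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)    # reservoir weights stay fixed
    X[t] = x

# Ridge-regression readout: W_out = (X^T X + lam I)^{-1} X^T y
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
print(np.mean((X @ W_out - y) ** 2))    # training MSE of the readout
```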

2016
Oğuz H. Elibol, Milad Gholami

We investigate learning language models of individual movie characters. We train a recurrent-neural-net-based model on a large dataset of movie scripts, with no character specificity, to learn a general dialogue model first. Then, we transfer the parameters from this pretrained model to initialize another model and learn a character-specific model from a single show. We measure the performance by...
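
A minimal sketch of the transfer step described here: copy parameters from a generically pretrained language model into a fresh one, then fine-tune on character-specific lines. The GRU model, sizes, and learning rate are placeholder assumptions.

```python
# Pretrain-then-transfer: initialize from a general model, fine-tune (sketch).
import torch
import torch.nn as nn

def make_lm(vocab=1000, emb=64, hid=128):
    return nn.ModuleDict({
        "embed": nn.Embedding(vocab, emb),
        "rnn": nn.GRU(emb, hid, batch_first=True),
        "head": nn.Linear(hid, vocab),
    })

general = make_lm()              # stands in for the model pretrained on all scripts
character = make_lm()
character.load_state_dict(general.state_dict())  # initialize from the general model

# Fine-tune only on the character-specific lines (toy batch shown)
opt = torch.optim.Adam(character.parameters(), lr=1e-4)  # small lr for fine-tuning
tokens = torch.randint(0, 1000, (4, 16))
h, _ = character["rnn"](character["embed"](tokens[:, :-1]))
loss = nn.functional.cross_entropy(
    character["head"](h).reshape(-1, 1000), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()
print(float(loss))
```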

Journal: CoRR, 2017
Chun-Hao Chang, Ladislav Rampásek, Anna Goldenberg

Deep neural networks are a promising technology achieving state-of-the-art results in biological and healthcare domains. Unfortunately, DNNs are notorious for their non-interpretability. Clinicians are averse to black boxes and thus interpretability is paramount to broadly adopting this technology. We aim to close this gap by proposing a new general feature ranking method for deep learning. We ...
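
The snippet does not reveal the proposed ranking method, so as generic background here is permutation importance, a standard model-agnostic feature ranking: shuffle one feature at a time and measure the drop in accuracy. The toy model and data are synthetic, for illustration only.

```python
# Permutation importance: a generic feature ranking, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
y = (X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)   # feature 0 matters most by construction

def model(X):                                    # stand-in for a trained DNN
    return (X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)

base = np.mean(model(X) == y)                    # baseline accuracy
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])         # destroy feature j's information
    drop = base - np.mean(model(Xp) == y)
    print(f"feature {j}: importance {drop:.3f}") # larger drop = more important
```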

2012
Hans-Georg Zimmermann, Christoph Tietz, Ralph Grothmann

Recurrent neural networks (RNNs) are typically considered as relatively simple architectures, which come along with complicated learning algorithms. This paper has a different view: We start from the fact that RNNs can model any high dimensional, nonlinear dynamical system. Rather than focusing on learning algorithms, we concentrate on the design of network architectures. Unfolding in time is a...
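
"Unfolding in time" rewrites the recurrent system $s_t = \tanh(A s_{t-1} + B x_t)$, $y_t = C s_t$ as a deep feedforward network with shared weights, one layer per time step; a minimal sketch with illustrative dimensions follows.

```python
# Unfolding in time: the same (A, B, C) are reused at every unrolled layer.
import numpy as np

rng = np.random.default_rng(0)
n_s, n_x, n_y, T = 4, 2, 1, 6
A = rng.standard_normal((n_s, n_s)) * 0.3   # state transition (shared every step)
B = rng.standard_normal((n_s, n_x)) * 0.3   # input weights (shared every step)
C = rng.standard_normal((n_y, n_s))         # output weights (shared every step)

s = np.zeros(n_s)
xs = rng.standard_normal((T, n_x))
for t in range(T):                          # the unfolded network, layer by layer
    s = np.tanh(A @ s + B @ xs[t])          # s_t = tanh(A s_{t-1} + B x_t)
    y = C @ s                               # y_t = C s_t
    print(t, y)
```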

2014
Caspar Addyman, Denis Mareschal

The neural network version of the Gaussian Activation Model of Interval Timing (GAMIT-Net) is a simple recurrent network that unifies retrospective and prospective timing in a single framework. It has two parts. Firstly, a time-dependent signal is generated by a spreading Gaussian activation. Next, a simple recurrent network (SRN) combines information from the Gaussian and its own internal stat...
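
A sketch of the first component described here, under assumed parameters: a Gaussian activation whose spread grows with elapsed time yields a slowly decaying, time-dependent signal that the SRN can read.

```python
# Spreading-Gaussian time signal (sketch; growth rate is an assumption).
import numpy as np

sigma0, growth = 1.0, 0.05
for t in range(0, 50, 10):
    sigma = sigma0 + growth * t                  # activation spreads over time
    peak = 1.0 / (sigma * np.sqrt(2 * np.pi))    # height of the spreading Gaussian
    print(f"t={t:2d}  sigma={sigma:.2f}  activation at center={peak:.3f}")
```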
