Search results for: lstm

Number of results: 6907

2017
Lahiru Samarakoon, Brian Kan-Wing Mak, Khe Chai Sim

Factorized Hidden Layer (FHL) adaptation has been proposed for speaker adaptation of deep neural network (DNN) based acoustic models. In FHL adaptation, a speaker-dependent (SD) transformation matrix and an SD bias are included in addition to the standard affine transformation. The SD transformation is a linear combination of rank-1 matrices whereas the SD bias is a linear combination of vector...
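
A minimal numpy sketch of the construction this abstract describes, with the SD transformation built as a linear combination of rank-1 matrices added to the standard affine layer; all dimensions, variable names, and the tanh nonlinearity are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, k = 64, 64, 8                 # hypothetical layer and basis sizes

W = rng.standard_normal((d_out, d_in))     # speaker-independent weight
b = rng.standard_normal(d_out)             # speaker-independent bias
U = rng.standard_normal((k, d_out))        # left vectors of the rank-1 basis
V = rng.standard_normal((k, d_in))         # right vectors of the rank-1 basis
d = rng.standard_normal(k)                 # SD combination weights
b_s = rng.standard_normal(d_out)           # SD bias

# SD transformation: sum_k d_k * u_k v_k^T, a linear combination of rank-1 matrices
W_sd = np.einsum("k,ko,ki->oi", d, U, V)

x = rng.standard_normal(d_in)
h = np.tanh((W + W_sd) @ x + b + b_s)      # adapted affine layer output
```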

2017
Ling Gan, Houyu Gong

Tree-structured Long Short-Term Memory (Tree-LSTM) has proved to be an effective method for the sentiment analysis task. It extracts structural information from text and uses the Long Short-Term Memory (LSTM) cell to prevent gradients from vanishing. However, even with the LSTM cell, it remains a model that extracts structural information while capturing almost no serialization informati...
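
For concreteness, here is a hedged sketch of a child-sum Tree-LSTM node in the style the abstract refers to: each node combines its children's hidden and cell states, with one forget gate per child. The child-sum variant and parameter handling are assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_lstm_node(x, child_h, child_c, Wi, Ui, bi, Wf, Uf, bf,
                   Wo, Uo, bo, Wu, Uu, bu):
    """x: node input; child_h, child_c: lists of child hidden/cell states."""
    h_sum = sum(child_h) if child_h else np.zeros_like(bi)
    i = sigmoid(Wi @ x + Ui @ h_sum + bi)        # input gate
    o = sigmoid(Wo @ x + Uo @ h_sum + bo)        # output gate
    u = np.tanh(Wu @ x + Uu @ h_sum + bu)        # candidate update
    # one forget gate per child lets the cell keep or drop each subtree
    f = [sigmoid(Wf @ x + Uf @ hk + bf) for hk in child_h]
    c = i * u + sum(fk * ck for fk, ck in zip(f, child_c))
    h = o * np.tanh(c)                           # gated cell state eases gradient flow
    return h, c
```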

Journal: The Journal of the Acoustical Society of America, 2016
Jitong Chen, DeLiang Wang

Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. T...
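
The ideal ratio mask below is one common instantiation of this mask-estimation formulation, shown only to make the setup concrete; the paper's actual features and training targets are not given in this snippet.

```python
import numpy as np

def ideal_ratio_mask(speech_mag, noise_mag, eps=1e-8):
    """Training target: per time-frequency unit, ratio of speech to total energy."""
    s2, n2 = speech_mag ** 2, noise_mag ** 2
    return np.sqrt(s2 / (s2 + n2 + eps))

def separate(noisy_stft, estimated_mask):
    """At test time, the learned mask is applied to the noisy spectrogram."""
    return estimated_mask * noisy_stft
```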

2017
Hang Yuan, You Zhang, Jin Wang, Xuejie Zhang

This shared task is a typical question answering task that tests how accurately participants can answer exam questions. Typically, each question has four candidate answers, of which only one is correct. Existing methods for such a task usually implement a recurrent neural network (RNN) or long short-term memory (LSTM). However, both RNN and LSTM are bias...
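
A hedged sketch of the standard scoring setup for such four-way questions: encode the question and each candidate, score every pair, and take a softmax over the four scores. The dot-product scoring and the encode() placeholder (any RNN/LSTM sentence encoder) are assumptions, not the paper's model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def choose_answer(q_vec, answer_vecs):
    """q_vec: encoded question; answer_vecs: four encoded candidate answers."""
    scores = np.array([q_vec @ a for a in answer_vecs])
    probs = softmax(scores)          # distribution over the four candidates
    return int(np.argmax(probs)), probs
```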

2016
Peter Potash, William Boag, Alexey Romanov, Vasili Ramanishka, Anna Rumshisky

This paper describes the SimiHawk system submission from UMass Lowell for the core Semantic Textual Similarity task at SemEval-2016. We built four systems: a small feature-based system that leverages word alignment and machine translation quality evaluation metrics, two end-to-end LSTM-based systems, and an ensemble system. The LSTM-based systems used either a simple LSTM architecture or a Tree-LS...
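
As a sketch of the end-to-end setup, the LSTM-based systems can be read as siamese encoders whose sentence vectors are compared directly; the cosine comparison below is an assumption for illustration, since the submission's exact similarity layer is not described in this snippet.

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-8):
    """Compare two sentence encodings on a [-1, 1] similarity scale."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

# usage: cosine_similarity(encode(sent_a), encode(sent_b)), where encode()
# stands in for either the simple LSTM or the Tree-LSTM sentence encoder
```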

2010
Martin Wöllmer, Florian Eyben, Björn W. Schuller, Gerhard Rigoll

We present a novel continuous speech recognition framework designed to unite the principles of triphone and Long Short-Term Memory (LSTM) modeling. The LSTM principle allows a recurrent neural network to store and to retrieve information over long time periods, which was shown to be well-suited for the modeling of co-articulation effects in human speech. Our system uses a bidirectional LSTM netw...
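
A minimal sketch of the bidirectional idea mentioned here: run one recurrent pass forward and one backward over the frame sequence, then concatenate the hidden states so every frame sees both past and future context. lstm_step is a placeholder for any LSTM cell (x, h, c) -> (h, c), not the paper's recognizer.

```python
import numpy as np

def run_direction(frames, lstm_step, h0, c0):
    h, c, outputs = h0, c0, []
    for x in frames:
        h, c = lstm_step(x, h, c)
        outputs.append(h)
    return outputs

def bidirectional_lstm(frames, step_fw, step_bw, h0, c0):
    fw = run_direction(frames, step_fw, h0, c0)               # past context
    bw = run_direction(frames[::-1], step_bw, h0, c0)[::-1]   # future context
    return [np.concatenate([f, b]) for f, b in zip(fw, bw)]
```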

2015
Rafal Józefowicz, Wojciech Zaremba, Ilya Sutskever

The Recurrent Neural Network (RNN) is an extremely powerful sequence model that is often difficult to train. The Long Short-Term Memory (LSTM) is a specific RNN architecture whose design makes it much easier to train. While wildly successful in practice, the LSTM’s architecture appears to be ad-hoc so it is not clear if it is optimal, and the significance of its individual components is unclear...
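
For reference, the individual components in question are the gates of the standard LSTM cell; the numpy rendering below follows the usual equations and is not the paper's experimental code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step; W, U, b stack the parameters of all four components."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    g = np.tanh(g)                                 # candidate cell update
    c = f * c_prev + i * g                         # additive cell-state path
    h = o * np.tanh(c)
    return h, c
```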

Journal: Journal of Machine Learning Research, 2015
Felix Weninger, Johannes Bergmann, Björn W. Schuller

In this article, we introduce CURRENNT, an open-source parallel implementation of deep recurrent neural networks (RNNs) supporting graphics processing units (GPUs) through NVIDIA’s Compute Unified Device Architecture (CUDA). CURRENNT supports uni- and bidirectional RNNs with Long Short-Term Memory (LSTM) memory cells, which overcome the vanishing gradient problem. To our knowledge, CURRENNT is th...

2002
Felix A. Gers, Juan Antonio Pérez-Ortiz, Douglas Eck, Jürgen Schmidhuber

2016
Chihiro Shibata, Jeffrey Heinz

Recurrent neural networks such as Long Short-Term Memory (LSTM) are often used to learn from various kinds of time-series data, especially those that involve long-distance dependencies. We introduce a vector representation for the Strictly 2-Piecewise (SP-2) formal languages, which encode certain kinds of long-distance dependencies using subsequences. These vectors are added to the LSTM archit...
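
One natural reading of such a representation, shown here as a hedged sketch: a binary vector over ordered symbol pairs (a, b) recording whether "a ... b" occurs as a subsequence, which is exactly the long-distance information SP-2 languages constrain. The paper's exact encoding may differ.

```python
import numpy as np

def sp2_vector(string, alphabet):
    """Indicator vector of the length-2 subsequences present in the string."""
    idx = {s: i for i, s in enumerate(alphabet)}
    seen = np.zeros(len(alphabet), dtype=bool)       # symbols observed so far
    pairs = np.zeros((len(alphabet), len(alphabet)), dtype=np.float32)
    for ch in string:
        pairs[seen, idx[ch]] = 1.0                   # every seen symbol precedes ch
        seen[idx[ch]] = True
    return pairs.ravel()                             # |alphabet|^2 features

# sp2_vector("abcb", "abc") marks the subsequences ab, ac, bc, bb, cb
```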

[Chart: number of search results per year]