Search results for: lstm

Number of results: 6907

2017
Zhiyong Cui Ruimin Ke Yinhai Wang

Short-term traffic forecasting based on deep learning methods, especially long short-term memory (LSTM) neural networks, has received much attention in recent years. However, the potential of deep learning methods is far from fully exploited in terms of the depth of the architecture, the spatial scale of the prediction area, and the predictive power of spatial-temporal data. In this paper, a ...
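
For context on what such forecasting models look like, here is a minimal sketch of an LSTM regressor over a window of past sensor readings. The hidden size, window length, and single-feature input are illustrative assumptions, not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of an LSTM regressor for short-term traffic forecasting.
# Input: a window of past readings per sensor; output: the next-step value.
# All sizes here are illustrative choices, not the paper's architecture.
class TrafficLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict from the last time step

model = TrafficLSTM()
window = torch.randn(32, 12, 1)           # 32 sensors, 12 past readings each
print(model(window).shape)                # torch.Size([32, 1])
```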

Journal: CoRR 2017
Franyell Silfa Gem Dot Jose-Maria Arnau Antonio González

Recurrent Neural Networks (RNNs) are a key technology for emerging applications such as automatic speech recognition, machine translation or image description. Long Short-Term Memory (LSTM) networks are the most successful RNN implementation, as they can learn long-term dependencies to achieve high accuracy. Unfortunately, the recurrent nature of LSTM networks significantly constrains the amoun...
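
The constraint referred to here is the step-to-step data dependency: h_t cannot be computed before h_{t-1}, so the time dimension resists parallelization. A NumPy sketch of the standard LSTM recurrence (the generic formulation, nothing specific to this paper) makes the sequential loop explicit.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, W, U, b, hidden):
    """Standard LSTM forward pass. The time loop is the point: every step
    needs h and c from the previous step, so the steps cannot be computed
    in parallel across time."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in x_seq:                       # inherently sequential
        z = W @ x + U @ h + b             # all four gates in one matmul
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)        # cell state carries long-term memory
        h = o * np.tanh(c)
    return h

hidden, n_in, T = 8, 4, 20
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * hidden, n_in))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
print(lstm_forward(rng.normal(size=(T, n_in)), W, U, b, hidden).shape)
```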

Journal: CoRR 2017
Xiaofeng Xie Di Wu Siping Liu Renfa Li

Deep learning is a popular machine learning approach that has achieved substantial progress across traditional machine learning areas. Internet of Things (IoT) and Smart City deployments are generating large amounts of time-series sensor data in need of analysis. Applying deep learning to these domains has been an important topic of research. The...

Journal: CoRR 2017
Xu Tian Jun Zhang Zejun Ma Yi He Juan Wei Peihao Wu Wenchang Situ Shuai Li Yang Zhang

Recurrent neural networks (RNNs), especially long short-term memory (LSTM) RNNs, are effective networks for sequential tasks like speech recognition. Deeper LSTM models perform well on large vocabulary continuous speech recognition because of their impressive learning ability. However, it is more difficult to train a deeper network. We introduce a training framework with layer-wise training and e...
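
The snippet is cut off before the framework's details, so the following is only a hypothetical sketch of the simplest layer-wise scheme: train a shallow LSTM stack, freeze it, then add and train a new layer on top.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of layer-wise deepening: train one LSTM layer, freeze
# it, stack a new layer on top, and train only the new parameters. The
# paper's exact schedule is truncated in the abstract; this shows only the
# general idea.
def add_frozen_layer(layers, input_size, hidden):
    for layer in layers:                  # freeze everything trained so far
        for p in layer.parameters():
            p.requires_grad = False
    layers.append(nn.LSTM(input_size, hidden, batch_first=True))
    return layers

hidden = 32
layers = [nn.LSTM(16, hidden, batch_first=True)]
# ... train layers[0] on the task here ...
layers = add_frozen_layer(layers, hidden, hidden)

x = torch.randn(4, 10, 16)
for lstm in layers:                       # run the growing stack
    x, _ = lstm(x)
trainable = [p for l in layers for p in l.parameters() if p.requires_grad]
print(x.shape, len(trainable))            # only the new layer is trainable
```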

2016
Yan Huang Yongqiang Wang Yifan Gong

We studied semi-supervised training in a fully connected deep neural network (DNN), unfolded recurrent neural network (RNN), and long short-term memory recurrent neural network (LSTM-RNN) with respect to the transcription quality, the importance of data sampling, and the training data amount. We found that DNN, unfolded RNN, and LSTM-RNN are increasingly more sensitive to labeling errors. For ...

2017
Zengjian Liu Ming Yang Xiaolong Wang Qingcai Chen Buzhou Tang Zhe Wang Hua Xu

BACKGROUND Entity recognition is one of the most fundamental steps for text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a ...
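
The snippet ends before naming the paper's model, so as a generic illustration of the architecture family commonly used for such entity recognition work, here is a BiLSTM sequence tagger producing per-token BIO label scores; all sizes are made up.

```python
import torch
import torch.nn as nn

# Illustrative BiLSTM sequence tagger for entity recognition (BIO labels).
# The abstract is truncated before describing the paper's model, so this is
# a generic sketch of the architecture family, not the authors' system.
class BiLSTMTagger(nn.Module):
    def __init__(self, vocab=5000, emb=100, hidden=128, n_tags=7):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)  # per-token BIO scores

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))
        return self.out(h)                        # (batch, seq_len, n_tags)

tagger = BiLSTMTagger()
sent = torch.randint(0, 5000, (2, 15))            # two 15-token sentences
print(tagger(sent).shape)                         # torch.Size([2, 15, 7])
```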

2001
Bram Bakker

This paper presents reinforcement learning with a Long Short-Term Memory recurrent neural network: RL-LSTM. Model-free RL-LSTM using Advantage(λ) learning and directed exploration can solve non-Markovian tasks with long-term dependencies between relevant events. This is demonstrated in a T-maze task, as well as in a difficult variation of the pole balancing task.
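
A minimal sketch of the recurrent-agent idea (not Bakker's training procedure): the LSTM's hidden state persists across an episode, which is what lets a policy remember a cue observed at the start of a T-maze. The Advantage(λ) update itself is omitted, and the observations below are random stand-ins.

```python
import torch
import torch.nn as nn

# Sketch of the recurrent-agent idea behind RL-LSTM: the LSTM hidden state
# persists across the episode, letting the policy remember an early cue in a
# non-Markovian task. The Advantage(lambda) update is omitted; sizes and the
# random observations are illustrative stand-ins for a T-maze environment.
class RecurrentAgent(nn.Module):
    def __init__(self, n_obs=3, hidden=16, n_actions=4):
        super().__init__()
        self.cell = nn.LSTMCell(n_obs, hidden)
        self.policy = nn.Linear(hidden, n_actions)

    def act(self, obs, state):
        h, c = self.cell(obs, state)      # state carries memory across steps
        return self.policy(h).argmax(dim=-1), (h, c)

agent = RecurrentAgent()
state = None
for t in range(10):                       # one episode; hidden state = memory
    obs = torch.randn(1, 3)               # stand-in for T-maze observations
    action, state = agent.act(obs, state)
    print(t, action.item())
```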

2016
Maohua Zhu Yuan Xie Minsoo Rhu Jason Clemons Stephen W. Keckler

Prior work has demonstrated that exploiting sparsity can dramatically improve the energy efficiency and reduce the memory footprint of Convolutional Neural Networks (CNNs). However, these sparsity-centric optimization techniques might be less effective for Long Short-Term Memory (LSTM) based Recurrent Neural Networks (RNNs), especially for the training phase, because of the significant stru...
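
One structural reason, illustrated below on synthetic activations rather than the paper's own analysis: ReLU feature maps in CNNs contain many exact zeros that hardware can skip, whereas LSTM gate outputs pass through sigmoids and are almost never exactly zero.

```python
import numpy as np

# Illustrative only: why zero-skipping optimizations favor CNNs over LSTMs.
# ReLU outputs are exactly zero roughly half the time on centered inputs,
# while sigmoid gate outputs are almost never exactly zero.
rng = np.random.default_rng(0)
pre_act = rng.normal(size=100_000)

relu_out = np.maximum(pre_act, 0.0)          # typical CNN activation
gate_out = 1.0 / (1.0 + np.exp(-pre_act))    # typical LSTM gate activation

print("ReLU zeros:", np.mean(relu_out == 0.0))   # ~0.5: lots to skip
print("gate zeros:", np.mean(gate_out == 0.0))   # 0.0: nothing to skip
```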

2017
Yue Wu Tianxing He Zhehuai Chen Yanmin Qian Kai Yu

Recently the long short-term memory language model (LSTM LM) has received tremendous interest from both the language and speech communities, due to its superiority in modelling long-term dependencies. Moreover, integrating auxiliary information, such as context features, into the LSTM LM has shown improved performance in perplexity (PPL). However, improper feeding of auxiliary information won't give consiste...
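
A common baseline for feeding auxiliary information is concatenating a context vector onto every word embedding before the LSTM. Since the abstract is truncated before the proposed feed scheme, the sketch below shows only that standard input-concatenation variant, with made-up sizes.

```python
import torch
import torch.nn as nn

# Baseline way to feed auxiliary context into an LSTM LM: concatenate a
# context vector to every word embedding. The abstract argues that the feed
# point matters and is cut off before the proposed scheme, so this sketch
# shows only the standard input-concatenation variant.
class ContextLSTMLM(nn.Module):
    def __init__(self, vocab=10000, emb=128, ctx=32, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb + ctx, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, words, context):            # context: (batch, ctx)
        e = self.emb(words)                       # (batch, seq, emb)
        c = context.unsqueeze(1).expand(-1, e.size(1), -1)
        h, _ = self.lstm(torch.cat([e, c], dim=-1))
        return self.out(h)                        # next-word logits

lm = ContextLSTMLM()
logits = lm(torch.randint(0, 10000, (8, 20)), torch.randn(8, 32))
print(logits.shape)                               # torch.Size([8, 20, 10000])
```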

Journal: CoRR 2017
Brendan Maginnis Pierre H. Richemond

Recurrent Neural Network architectures excel at processing sequences by modelling dependencies over different timescales. The recently introduced Recurrent Weighted Average (RWA) unit captures long-term dependencies far better than an LSTM on several challenging tasks. The RWA achieves this by applying attention to each input and computing a weighted average over the full history of its comput...
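
The core computation can be kept online with running numerator and denominator accumulators, as in the simplified sketch below; the published RWA unit additionally conditions the attention on the hidden state and adds gating, which this sketch omits.

```python
import numpy as np

# Simplified sketch of the Recurrent Weighted Average idea: the state is an
# attention-weighted average over the ENTIRE history, maintained online with
# running numerator/denominator accumulators. The published RWA unit also
# conditions attention on the hidden state and adds gating, omitted here.
def rwa(x_seq, w_feat, w_attn):
    num = np.zeros(w_feat.shape[0])      # running sum of weight * feature
    den = np.zeros(w_feat.shape[0])      # running sum of weights
    for x in x_seq:
        z = w_feat @ x                   # feature for this step
        a = np.exp(w_attn @ x)           # positive attention weight
        num += a * z
        den += a
        h = np.tanh(num / den)           # average over all steps so far
    return h

rng = np.random.default_rng(1)
x_seq = rng.normal(size=(50, 6))
print(rwa(x_seq, rng.normal(size=(4, 6)), rng.normal(size=(4, 6))))
```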

[Chart: number of search results per year]
