Search results for: deep seq2seq network

Number of results: 847,003

2017
Zhirui Zhang Shujie Liu Mu Li Ming Zhou Enhong Chen

Although the sequence-to-sequence (seq2seq) network has achieved significant success in many NLP tasks such as machine translation and text summarization, simply applying this approach to transition-based dependency parsing does not yield performance gains comparable to other state-of-the-art methods, such as stack-LSTM and head selection. In this paper, we propose a stack-based multi-layer attent...
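The seq2seq idea recurring throughout these results — encode a source sequence into a state, then decode a target sequence from it — can be sketched in a few lines. This is a minimal, untrained toy with plain tanh-RNN cells in numpy; all dimensions and parameter names are hypothetical, not taken from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): vocabulary of 10 tokens, hidden size 8.
V, H = 10, 8

E   = rng.normal(0, 0.1, (V, H))   # token embeddings
W_e = rng.normal(0, 0.1, (H, H))   # encoder recurrence
W_d = rng.normal(0, 0.1, (H, H))   # decoder recurrence
W_o = rng.normal(0, 0.1, (H, V))   # decoder output projection

def encode(src):
    """Run a plain tanh-RNN over the source tokens; return the final state."""
    h = np.zeros(H)
    for tok in src:
        h = np.tanh(E[tok] + h @ W_e)
    return h

def decode(h, max_len=5, bos=0):
    """Greedily decode tokens conditioned on the encoder's final state."""
    out, tok = [], bos
    for _ in range(max_len):
        h = np.tanh(E[tok] + h @ W_d)
        tok = int(np.argmax(h @ W_o))
        out.append(tok)
    return out

src = [3, 1, 4]
print(decode(encode(src)))  # a length-5 sequence of token ids
```

Real systems replace the tanh cells with LSTMs, add attention over all encoder states rather than a single final vector, and train the whole pipeline end to end — the papers listed here build on exactly that skeleton.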

Journal: :IEEE Transactions on Parallel and Distributed Systems 2021

Multi-access edge computing (MEC) aims to extend cloud services to the network edge to reduce traffic and latency. A fundamental problem in MEC is how to efficiently offload heterogeneous tasks of mobile applications from user equipment (UE) to MEC hosts. Recently, many deep reinforcement learning (DRL) based methods have been proposed to learn offloading policies through interacting with an environment that consists of the UE, ...

Journal: :CoRR 2017
Sajal Choudhary Prerna Srivastava Lyle H. Ungar João Sedoc

We investigate the task of building a domain-aware chat system which generates intelligent responses in a conversation comprising different domains. The domain in this case is the topic or theme of the conversation. To achieve this, we present DOM-Seq2Seq, a domain-aware neural network model based on the novel technique of using domain-targeted sequence-to-sequence models (Sutskever et al., ...

Journal: :CoRR 2016
Adam James Summerville James Owen Ryan Michael Mateas Noah Wardrip-Fruin

In this paper, we present a novel approach to natural language understanding that utilizes context-free grammars (CFGs) in conjunction with sequence-to-sequence (seq2seq) deep learning. Specifically, we take a CFG authored to generate dialogue for our target application, a videogame, and train a long short-term memory (LSTM) recurrent neural network (RNN) to map the surface utterances t...

Journal: :CoRR 2017
Dan Lim

Traditional approaches in artificial intelligence (AI) have solved problems that are difficult for humans but relatively easy for computers, provided they can be formulated as mathematical rules or formal languages. However, this symbolic, rule-based approach fails on problems that humans solve intuitively, such as image recognition, natural language understanding, and speech recognition. The...

2018
Minhao Cheng Jinfeng Yi Huan Zhang Pin-Yu Chen Cho-Jui Hsieh

Crafting adversarial examples has become an important technique for evaluating the robustness of deep neural networks (DNNs). However, most existing works focus on attacking the image classification problem, since its input space is continuous and its output space is finite. In this paper, we study the much more challenging problem of crafting adversarial examples for sequence-to-sequence (seq2seq) mod...
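The contrast this abstract draws — continuous image inputs admit direct gradient-based perturbations, while discrete sequences do not — can be illustrated with a single fast-gradient-sign (FGSM-style) step on a toy linear classifier. This is a hedged sketch of the "easy" continuous case only; the model, dimensions, and `eps` value are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear classifier on a continuous 4-dim input (the image-like case).
W = rng.normal(0, 1, (4, 3))   # 4 features -> 3 classes

def logits(x):
    return x @ W

def fgsm(x, true_class, eps=0.5):
    """One fast-gradient-sign step: perturb x to reduce the true-class score.

    For a linear model the gradient of the true-class logit w.r.t. x is
    just the corresponding weight column, so no autodiff is needed here.
    """
    grad = W[:, true_class]          # d(logit_true)/dx
    return x - eps * np.sign(grad)   # step against the true class

x = rng.normal(0, 1, 4)
y = int(np.argmax(logits(x)))        # model's current prediction
x_adv = fgsm(x, y)                   # the true-class logit is now strictly lower
```

For seq2seq models the input is a sequence of discrete token ids, so this gradient step has no direct analogue — which is precisely the difficulty the paper above addresses.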

2017
Neha Nayak Dilek Z. Hakkani-Tür Marilyn A. Walker Larry P. Heck

Natural language generation for task-oriented dialogue systems aims to effectively realize system dialogue actions. All natural language generators (NLGs) must produce grammatical, natural, and appropriate output, but in addition, generators for task-oriented dialogue must faithfully perform a specific dialogue act that conveys specific semantic information, as dictated by the dialogue policy of ...

2017
Zhen Xu Bingquan Liu Baoxun Wang Chengjie Sun Xiaolong Wang Zhuoran Wang Chao Qi

This paper presents a Generative Adversarial Network (GAN) to model single-turn short-text conversations, which trains a sequence-to-sequence (Seq2Seq) network for response generation simultaneously with a discriminative classifier that measures the differences between human-produced responses and machine-generated ones. In addition, the proposed method introduces an approximate embedding layer t...

2016
James Owen Ryan Adam James Summerville Michael Mateas Noah Wardrip-Fruin

In this paper, we present a novel approach to natural language understanding that utilizes context-free grammars (CFGs) in conjunction with sequence-to-sequence (seq2seq) deep learning. Specifically, we take a CFG authored to generate dialogue for our target application, a videogame, and train a long short-term memory (LSTM) recurrent neural network (RNN) to translate the surface utterances tha...

Journal: :CoRR 2016
Iulian Serban Ryan Lowe Laurent Charlin Joelle Pineau

Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and ...
