Sentence-Wise Smooth Regularization for Sequence to Sequence Learning
Similar Articles
A Constrained Sequence-to-Sequence Neural Model for Sentence Simplification
Sentence simplification reduces semantic complexity to benefit people with language impairments. Previous simplification studies at the sentence level and word level have achieved promising results but also face great challenges. In sentence-level studies, sentences after simplification are fluent but sometimes are not really simplified. In word-level studies, words are simplified but also hav...
Smooth Imitation Learning for Online Sequence Prediction
We study the problem of smooth imitation learning for online sequence prediction, where the goal is to train a policy that can smoothly imitate demonstrated behavior in a dynamic and continuous environment in response to online, sequential context input. Since the mapping from context to behavior is often complex, we take a learning reduction approach to reduce smooth imitation learning to a re...
Smooth Imitation Learning for Online Sequence Prediction
Lemma Statement (Lemma 5.2). Given any starting state $s_0$, sequentially execute $\pi_{\mathrm{det}}$ and $\pi_{\mathrm{sto}}$ to obtain two separate trajectories $A = \{a_t\}_{t=1}^{T}$ and $\tilde{A} = \{\tilde{a}_t\}_{t=1}^{T}$ such that $a_t = \pi_{\mathrm{det}}(s_t)$ and $\tilde{a}_t = \pi_{\mathrm{sto}}(\tilde{s}_t)$, where $s_t = [x_t, a_{t-1}]$ and $\tilde{s}_t = [x_t, \tilde{a}_{t-1}]$. Assuming the policies are stable as per Condition 1, we have $\mathbb{E}_{\tilde{A}}[\tilde{a}_t] = a_t \;\; \forall t = 1, \dots, T$, where the expectation is taken over all random roll-outs of $\pi$...
Sequence to Sequence Learning for Event Prediction
This paper presents an approach to the task of predicting an event description from a preceding sentence in a text. Our approach explores sequence-to-sequence learning using a bidirectional multi-layer recurrent neural network. Our approach substantially outperforms previous work in terms of the BLEU score on two datasets derived from WIKIHOW and DESCRIPT respectively. Since the BLEU score is n...
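The bidirectional recurrent encoding the abstract above describes can be illustrated with a minimal sketch: a forward RNN pass and a backward RNN pass over the same sequence, with the two hidden states concatenated at each timestep. This is a generic illustration in NumPy, not the authors' implementation; all function names and dimensions here are hypothetical.

```python
import numpy as np

def rnn_pass(xs, W, U):
    # Simple tanh RNN: scans the sequence and returns the hidden state
    # at every timestep.
    h = np.zeros(U.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W @ x + U @ h)
        states.append(h)
    return states

def bidirectional_encode(xs, W_f, U_f, W_b, U_b):
    # Forward pass over xs, backward pass over reversed xs (re-reversed so
    # index t aligns with timestep t), then concatenate per timestep.
    fwd = rnn_pass(xs, W_f, U_f)
    bwd = rnn_pass(xs[::-1], W_b, U_b)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
d, h, T = 4, 3, 5                      # toy input dim, hidden dim, length
xs = [rng.normal(size=d) for _ in range(T)]
W_f, U_f = rng.normal(size=(h, d)), rng.normal(size=(h, h))
W_b, U_b = rng.normal(size=(h, d)), rng.normal(size=(h, h))
enc = bidirectional_encode(xs, W_f, U_f, W_b, U_b)  # T vectors of size 2h
```

Each output vector thus summarizes both left and right context of its timestep, which is what a decoder conditions on in such a model.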
Unsupervised Pretraining for Sequence to Sequence Learning
This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summariz...
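The initialization scheme that abstract describes can be sketched in a few lines: the encoder takes the weights of a language model pretrained on the source side, the decoder takes the weights of one pretrained on the target side, and fine-tuning then proceeds from that starting point. The sketch below only illustrates the weight transfer step; the weight names, shapes, and helper functions are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def make_lm_weights(vocab=100, hidden=8, seed=0):
    # Stand-in for a pretrained language model: just a dict of arrays.
    rng = np.random.default_rng(seed)
    return {
        "embedding": rng.normal(size=(vocab, hidden)),
        "recurrent": rng.normal(size=(hidden, hidden)),
    }

def init_seq2seq_from_lms(src_lm, tgt_lm):
    # Encoder is initialized from the source-side LM, decoder from the
    # target-side LM; copies are taken so fine-tuning does not mutate
    # the pretrained weights.
    return {
        "encoder": {k: v.copy() for k, v in src_lm.items()},
        "decoder": {k: v.copy() for k, v in tgt_lm.items()},
    }

src_lm = make_lm_weights(seed=0)
tgt_lm = make_lm_weights(seed=1)
model = init_seq2seq_from_lms(src_lm, tgt_lm)
# Fine-tuning with labeled pairs would start from `model` here.
```

In practice the copied weights serve as a starting point for supervised fine-tuning rather than being frozen.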
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2019
ISSN: 2374-3468,2159-5399
DOI: 10.1609/aaai.v33i01.33016449