Recursive tree grammar autoencoders
Abstract
Machine learning on trees has mostly focused on trees as input. Much less research has investigated trees as output, which has many applications, such as molecule optimization for drug discovery or hint generation for intelligent tutoring systems. In this work, we propose a novel autoencoder approach, called recursive tree grammar autoencoder (RTG-AE), which encodes trees via a bottom-up parser and decodes trees via a tree grammar, both learned by recursive neural networks that minimize the variational autoencoder loss. The resulting encoder and decoder can then be utilized in subsequent tasks, such as time series prediction. RTG-AEs are the first model to combine three features: recursive processing, grammatical knowledge, and deep learning. Our key message is that this unique combination of all three features outperforms models that combine any two of the three. Experimentally, we show that RTG-AE improves the autoencoding error, training time, and optimization score on synthetic as well as real datasets compared to four baselines. We further prove that RTG-AEs parse and generate trees in linear time and are expressive enough to handle all regular tree grammars.
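To make the encoder/decoder split concrete, the following Python sketch illustrates the general idea under stated assumptions: a toy regular tree grammar over arithmetic expressions, a bottom-up encoder with one small network per grammar rule, and a top-down decoder that only emits trees conforming to the grammar. All names (GRAMMAR, RuleEncoder, GrammarDecoder, DIM) are illustrative, the variational term and the training loop are omitted, and this is not the authors' implementation.

import torch
import torch.nn as nn

# Toy regular tree grammar: rule name -> (node label, child nonterminals).
GRAMMAR = {
    "plus":  ("+", ["Expr", "Expr"]),
    "times": ("*", ["Expr", "Expr"]),
    "x":     ("x", []),
    "one":   ("1", []),
}
RULES = list(GRAMMAR.keys())
DIM = 16  # latent dimensionality (illustrative)

class RuleEncoder(nn.Module):
    """Bottom-up encoder: combine the child codes according to the rule used."""
    def __init__(self):
        super().__init__()
        self.nets = nn.ModuleDict({
            rule: nn.Linear(DIM * max(1, len(children)), DIM)
            for rule, (_, children) in GRAMMAR.items()
        })

    def encode(self, tree):
        rule, subtrees = tree  # a tree is (rule name, list of subtrees)
        if subtrees:
            child_codes = torch.cat([self.encode(t) for t in subtrees], dim=-1)
        else:
            child_codes = torch.zeros(DIM)
        return torch.tanh(self.nets[rule](child_codes))

class GrammarDecoder(nn.Module):
    """Top-down decoder: pick a grammar rule, then recurse into its children."""
    def __init__(self):
        super().__init__()
        self.rule_scorer = nn.Linear(DIM, len(RULES))
        self.child_nets = nn.ModuleList([nn.Linear(DIM, DIM) for _ in range(2)])

    def decode(self, code, depth=0, max_depth=5):
        scores = self.rule_scorer(code).clone()
        if depth >= max_depth:  # force a terminal rule near the depth limit
            for i, rule in enumerate(RULES):
                if GRAMMAR[rule][1]:
                    scores[i] = -1e9
        rule = RULES[int(torch.argmax(scores))]
        children = [
            self.decode(torch.tanh(net(code)), depth + 1, max_depth)
            for net, _ in zip(self.child_nets, GRAMMAR[rule][1])
        ]
        return (rule, children)

encoder, decoder = RuleEncoder(), GrammarDecoder()
tree = ("plus", [("x", []), ("times", [("x", []), ("one", [])])])
code = encoder.encode(tree)   # bottom-up parse into a latent code
print(decoder.decode(code))   # grammar-constrained reconstruction (untrained)

A full RTG-AE as described in the abstract would additionally train the encoder and decoder jointly under a variational objective; the sketch only shows the recursive, grammar-constrained structure of the two networks.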
Similar resources
Learning Meanings for Sentences with Recursive Autoencoders
In this report, we learn a model to predict sentiments for sentences with Semi-Supervised Recursive Autoencoders (RAE) and reproduce the result in [1]. We use a greedy algorithm to construct the tree structure in the neural networks and forward and backward propagation to compute the gradients of the weights. Using meaning vectors of length 20, we achieve 75.4% accuracy on the movie reviews (MR) dataset, wh...
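As a rough illustration of the greedy tree construction described above, the sketch below repeatedly merges the adjacent pair of vectors with the lowest autoencoder reconstruction error. The weights are random placeholders and all names are hypothetical, so this is a sketch of the idea rather than the report's code.

import numpy as np

rng = np.random.default_rng(0)
D = 20                                             # meaning-vector length, as in the report
W_enc = rng.normal(scale=0.1, size=(D, 2 * D))     # encoder weights (untrained placeholder)
W_dec = rng.normal(scale=0.1, size=(2 * D, D))     # decoder weights (untrained placeholder)

def merge(a, b):
    """Encode a pair of vectors into one parent vector and score the merge."""
    pair = np.concatenate([a, b])
    parent = np.tanh(W_enc @ pair)
    recon = np.tanh(W_dec @ parent)
    error = np.sum((recon - pair) ** 2)            # reconstruction error of this merge
    return parent, error

def greedy_tree(vectors):
    """Greedily build a binary tree over the word vectors of a sentence."""
    nodes = list(vectors)
    tree = list(range(len(vectors)))               # leaves are word positions
    while len(nodes) > 1:
        # score every adjacent pair and perform the cheapest merge
        scored = [(merge(nodes[i], nodes[i + 1]), i) for i in range(len(nodes) - 1)]
        (parent, _), i = min(scored, key=lambda s: s[0][1])
        nodes[i:i + 2] = [parent]
        tree[i:i + 2] = [(tree[i], tree[i + 1])]
    return nodes[0], tree[0]

sentence = [rng.normal(size=D) for _ in range(5)]  # five random "word vectors"
root_vector, structure = greedy_tree(sentence)
print(structure)                                   # the induced binary tree, e.g. (((0, 1), 2), (3, 4))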
Recursive Autoencoders for ITG-Based Translation
While inversion transduction grammar (ITG) is well suited for modeling ordering shifts between languages, how to make the application of the two reordering rules (i.e., straight and inverted) dependent on the actual blocks being merged remains a challenge. Unlike previous work that only uses boundary words, we propose to use recursive autoencoders to make full use of the entire merging blocks instead. ...
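A minimal sketch of the core idea, conditioning the reordering decision on the representations of the merging blocks themselves; the linear scorer, dimensions, and random vectors here are hypothetical stand-ins, not the paper's model.

import numpy as np

rng = np.random.default_rng(1)
D = 32                                          # block-vector dimensionality (illustrative)
W = rng.normal(scale=0.1, size=(2, 2 * D))      # scores for [straight, inverted]

def reorder(block_left, block_right):
    """Choose the merge order from the representations of the two blocks."""
    features = np.concatenate([block_left, block_right])
    straight_score, inverted_score = W @ features
    return "straight" if straight_score >= inverted_score else "inverted"

left, right = rng.normal(size=D), rng.normal(size=D)   # stand-ins for RAE block vectors
print(reorder(left, right))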
Learning Meanings for Sentences with Recursive Autoencoders
The objective of this project is to implement the recursive autoencoder (RAE) method to learn a model that predicts sentiments for sentences and to reproduce the result in [1]. To learn the weights of the recursive functions, we implement forward and backward propagation algorithms. We validate the gradient computed by the forward and backward algorithms by comparing it to the gradient computed from numer...
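Gradient validation of this kind is typically done by comparing the backpropagated gradient with a centered finite-difference estimate. The sketch below shows that comparison for a placeholder one-layer tanh network, not the report's recursive model; all names and sizes are illustrative.

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=6)                          # input vector
y = rng.normal(size=3)                          # target vector
W = rng.normal(scale=0.1, size=(3, 6))          # weights of a one-layer tanh network

def loss_and_grad(W):
    """Squared error of a one-layer tanh network and its backpropagated gradient."""
    h = np.tanh(W @ x)
    diff = h - y
    loss = 0.5 * np.sum(diff ** 2)
    grad = np.outer(diff * (1 - h ** 2), x)     # dL/dW via the chain rule
    return loss, grad

def numerical_grad(W, eps=1e-6):
    """Centered finite-difference estimate of dL/dW, one weight at a time."""
    grad = np.zeros_like(W)
    for idx in np.ndindex(*W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += eps
        Wm[idx] -= eps
        grad[idx] = (loss_and_grad(Wp)[0] - loss_and_grad(Wm)[0]) / (2 * eps)
    return grad

_, analytic = loss_and_grad(W)
print(np.max(np.abs(analytic - numerical_grad(W))))   # should be close to zero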
Sentence Alignment using Unfolding Recursive Autoencoders
In this paper, we propose a novel two-step algorithm for sentence alignment in monolingual corpora using Unfolding Recursive Autoencoders. First, we use unfolding recursive autoencoders (RAE) to learn feature vectors for the phrases in the syntactic tree of each sentence. To compare two sentences, we use a similarity matrix whose dimensions are proportional to the sizes of the two sentences. Since the ...
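The similarity-matrix step can be illustrated as follows, with random vectors standing in for the unfolding-RAE phrase features; the use of cosine similarity here is an assumption for illustration, and the matrix dimensions simply follow the numbers of phrases in the two sentences.

import numpy as np

rng = np.random.default_rng(3)
D = 20
phrases_a = rng.normal(size=(7, D))   # stand-ins for the phrase vectors of sentence A
phrases_b = rng.normal(size=(5, D))   # stand-ins for the phrase vectors of sentence B

def similarity_matrix(A, B):
    """Cosine similarity between every phrase of A and every phrase of B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

S = similarity_matrix(phrases_a, phrases_b)
print(S.shape)   # (7, 5): proportional to the sizes of the two sentences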
Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions
We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined senti...
Journal
Journal title: Machine Learning
Year: 2022
ISSN: 0885-6125, 1573-0565
DOI: https://doi.org/10.1007/s10994-022-06223-7