Efficient continual learning in neural networks with embedding regularization
Authors
Abstract
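As the title indicates, the method mitigates catastrophic forgetting by regularizing the network's internal embeddings across tasks. A minimal sketch of that general idea, assuming a PyTorch feature extractor and a small buffer of embeddings snapshotted from earlier tasks (all names, shapes, and the penalty weight are illustrative, not the paper's API):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative model: any feature extractor plus a task head works here.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
head = nn.Linear(64, 10)
opt = torch.optim.SGD(list(net.parameters()) + list(head.parameters()), lr=0.1)

memory = []  # (input, embedding) pairs recorded at the end of earlier tasks

def train_step(x, y, reg_strength=1.0):
    opt.zero_grad()
    loss = F.cross_entropy(head(net(x)), y)  # loss on the current task
    if memory:
        x_old = torch.stack([m[0] for m in memory])
        e_old = torch.stack([m[1] for m in memory])
        # Embedding regularizer: stored samples must keep their old embeddings.
        loss = loss + reg_strength * F.mse_loss(net(x_old), e_old)
    loss.backward()
    opt.step()

def remember(x_batch):
    # Call after finishing a task to snapshot a few of its embeddings.
    with torch.no_grad():
        for x, e in zip(x_batch, net(x_batch)):
            memory.append((x, e))

Penalizing embeddings rather than weights leaves the network free to move in weight space, as long as the stored samples still map to roughly the same representations.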
Similar resources
Continual Robot Learning with Constructive Neural Networks
In this paper, we present an approach for combining reinforcement learning, learning by imitation, and incremental hierarchical development. We apply this approach to a realistic simulated mobile robot that learns to perform a navigation task by imitating the movements of a teacher and then continues to learn by receiving reinforcement. The behaviours of the robot are represented as sensation-...
Learning Compact Neural Networks with Regularization
We study the impact of regularization for learning neural networks. Our goals are to speed up training, improve generalization performance, and train compact, cost-efficient models. Our results apply to weight-sharing (e.g. convolutional), sparsity (i.e. pruning), and low-rank constraints, among others. We first introduce the covering dimension of the constraint set and provide a Rademach...
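As a concrete instance of one of the constraint classes named above, a low-rank constraint can be imposed by factorizing each weight matrix. A minimal PyTorch sketch (the class name and sizes are illustrative, not from the paper):

import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """A linear layer whose weight is constrained to rank r via W = U V."""
    def __init__(self, d_in, d_out, r):
        super().__init__()
        self.V = nn.Linear(d_in, r, bias=False)  # r x d_in factor
        self.U = nn.Linear(r, d_out)             # d_out x r factor, plus bias

    def forward(self, x):
        return self.U(self.V(x))

# Rank-16 stand-in for a 512 -> 512 layer: about 2 * 512 * 16 weights
# instead of 512 * 512.
layer = LowRankLinear(512, 512, 16)
out = layer(torch.randn(8, 512))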
Continual Lifelong Learning with Neural Networks: A Review
Humans and animals have the ability to continually acquire and fine-tune knowledge throughout their lifespan. This ability is mediated by a rich set of neurocognitive functions that together contribute to the early development and experience-driven specialization of our sensorimotor skills. Consequently, the ability to learn from continuous streams of information is crucial for computational lea...
Regularization Learning of Neural Networks for Generalization
In this paper, we propose a learning method for neural networks based on regularization and analyze its generalization capability. In learning from examples, training samples are drawn independently from some unknown probability distribution. The goal of learning is to minimize the expected risk on future test samples, which are drawn from the same distribution. The problem can b...
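In standard notation (a common formulation consistent with this description, not quoted from the paper): for a loss \ell and samples drawn i.i.d. from an unknown distribution p(x, y), the expected risk and its regularized empirical surrogate are

R(f) = \mathbb{E}_{(x,y) \sim p}\big[\ell(f(x), y)\big],
\qquad
\hat{R}_\lambda(f) = \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i) + \lambda\, \Omega(f),

where \Omega is the regularization functional and \lambda > 0 controls the trade-off; learning minimizes \hat{R}_\lambda as a tractable proxy for the expected risk R.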
Learning Sparse Neural Networks through L0 Regularization
We propose a practical method for L0 norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L0 regularization. However, since...
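A common way to make such an L0 penalty trainable is with hard-concrete stochastic gates; a minimal sketch under that assumption (the constants follow Louizos et al., 2018, while the class name and API are illustrative):

import torch
import torch.nn as nn

# Hard-concrete gate constants from Louizos et al. (2018).
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

class L0Gate(nn.Module):
    """One stochastic gate per weight; gates that settle at 0 prune the weight."""
    def __init__(self, n):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(n))  # gate logits

    def forward(self):
        if self.training:
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / BETA)
        else:
            s = torch.sigmoid(self.log_alpha)
        # Stretch to (GAMMA, ZETA), then clip so gates hit exactly 0 or 1.
        return (s * (ZETA - GAMMA) + GAMMA).clamp(0.0, 1.0)

    def expected_l0(self):
        # P(gate != 0) summed over gates: the differentiable L0 surrogate.
        shift = BETA * torch.log(torch.tensor(-GAMMA / ZETA))
        return torch.sigmoid(self.log_alpha - shift).sum()

In use, weights are masked as W * gate() in the forward pass and lam * gate.expected_l0() is added to the task loss, so sparsity is optimized jointly with the fit.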
Journal
Journal title: Neurocomputing
Year: 2020
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2020.01.093