Search results for: learning networks

Number of results: 976,319

1998
Lipo Wang

In this paper, we show that noise injection into inputs in unsupervised learning neural networks does not improve their performance as it does in supervised learning neural networks. Specifically, we show that training noise degrades the classification ability of a sparsely connected version of the Hopfield neural network, whereas the performance of a sparsely connected winner-take-all neural n...
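The input-noise injection the abstract contrasts across supervised and unsupervised settings is commonly implemented as a per-batch augmentation. A minimal sketch (the function name and noise level are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_batch(x, sigma=0.1):
    """Add zero-mean Gaussian noise to a batch of inputs.

    This is the generic training-noise augmentation the abstract refers to;
    sigma is an illustrative noise level, not a value from the paper.
    """
    return x + rng.normal(0.0, sigma, size=x.shape)
```

In a supervised pipeline this would be applied to each training batch before the forward pass; the paper's claim is that the same trick does not help the unsupervised networks it studies.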

Journal: :CoRR 2015
Michael Cogswell Faruk Ahmed Ross B. Girshick C. Lawrence Zitnick Dhruv Batra

One major challenge in training Deep Neural Networks is preventing overfitting. Many techniques such as data augmentation and novel regularizers such as Dropout have been proposed to prevent overfitting without requiring a massive amount of training data. In this work, we propose a new regularizer called DeCov which leads to significantly reduced overfitting (as indicated by the difference betw...
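The DeCov regularizer described here penalizes correlation between hidden activations. A minimal NumPy sketch of the loss, assuming the batch-covariance form implied by the abstract (penalize the squared off-diagonal covariance entries):

```python
import numpy as np

def decov_loss(h):
    """DeCov-style decorrelation penalty on a batch of hidden activations.

    h: array of shape (batch, features). Returns half the sum of squared
    off-diagonal entries of the batch covariance of h.
    """
    h_centered = h - h.mean(axis=0, keepdims=True)
    cov = h_centered.T @ h_centered / h.shape[0]
    frob_sq = np.sum(cov ** 2)              # all entries squared
    diag_sq = np.sum(np.diag(cov) ** 2)     # diagonal (variances) squared
    return 0.5 * (frob_sq - diag_sq)        # off-diagonal part only
```

Added to the task loss with a small weight, this pushes hidden units toward decorrelated (less redundant) representations, which is the overfitting-reduction mechanism the abstract describes.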

2016
Shengjie Wang Abdel-rahman Mohamed Rich Caruana Jeff A. Bilmes Matthai Philipose Matthew Richardson Krzysztof Geras Gregor Urban Özlem Aslan

Deep neural networks have achieved great success on a variety of machine learning tasks. There are many fundamental and open questions yet to be answered, however. We introduce the Extended Data Jacobian Matrix (EDJM) as an architecture-independent tool to analyze neural networks at the manifold of interest. The spectrum of the EDJM is found to be highly correlated with the complexity of the le...
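One way to read the EDJM construction from the abstract: stack each example's input-output Jacobian into a single matrix and inspect its singular values. A finite-difference sketch under that assumption (the paper's exact construction and normalization may differ):

```python
import numpy as np

def data_jacobian(f, x, eps=1e-6):
    # Numerical Jacobian of f at input vector x, shape (out_dim, in_dim).
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - y0) / eps
    return J

def edjm_spectrum(f, X):
    # Stack per-example Jacobians over the dataset X and take the
    # singular values of the stacked matrix.
    J = np.vstack([data_jacobian(f, x) for x in X])
    return np.linalg.svd(J, compute_uv=False)
```

For an autodiff framework one would replace the finite differences with exact Jacobians; the spectrum's decay is then the architecture-independent complexity signal the abstract refers to.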

Journal: :Proceedings of the Association for Information Science and Technology 2015

Journal: :IEEJ Transactions on Electronics, Information and Systems 2004

Journal: :The International Review of Research in Open and Distributed Learning 2016

2005
Gilles Hermann Patrice Wira Jean-Philippe Urban

This chapter explores modular learning in artificial neural networks for intelligent robotics. Mainly inspired by neurobiological aspects, the modularity concept can be used to design artificial neural networks. The main theme of this chapter is to explore the organization, the complexity and the learning of modular artificial neural networks. A robust modular neural architecture is then deve...

1995
David Heckerman

Whereas acausal Bayesian networks represent probabilistic independence, causal Bayesian networks represent causal relationships. In this paper, we examine Bayesian methods for learning both types of networks. Bayesian methods for learning acausal networks are fairly well developed. These methods often employ assumptions to facilitate the construction of priors, including the assumptions of para...

1990
Joshua Alspector Robert B. Allen Anthony Jayakumar Torsten Zeppenfeld Ron Meir

Feedback connections are required so that the teacher signal on the output neurons can modify weights during supervised learning. Relaxation methods are needed for learning static patterns with full-time feedback connections. Feedback network learning techniques have not achieved wide popularity because of the still greater computational efficiency of back-propagation. We show by simulation tha...
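The relaxation step mentioned here, for a Hopfield-style feedback network, iterates the state update until the network settles into a fixed point. A minimal sketch with Hebbian weights (synchronous updates are used for brevity; the networks simulated in the paper differ in detail):

```python
import numpy as np

def hopfield_relax(W, s, max_steps=50):
    """Relax a binary-state feedback network to a fixed point.

    W: symmetric weight matrix with zero diagonal.
    s: initial +/-1 state vector.
    """
    for _ in range(max_steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1      # break ties toward +1
        if np.array_equal(s_new, s):
            break                  # fixed point reached
        s = s_new
    return s
```

For example, storing one pattern p via the Hebbian rule W = p pᵀ with the diagonal zeroed lets relaxation recover p from a one-bit corruption of it.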

2018
Quanshi Zhang Song-Chun Zhu

This paper reviews recent studies in emerging directions of understanding neural-network representations and learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance on various tasks, interpretability has always been an Achilles' heel of deep neural networks. At present, deep neural networks obtain a h...
