Search results for: boltzmann machine

Number of results: 277539

2002
Arnaud Berny

Abstract. We propose to apply the Boltzmann machine (BM) to population-based incremental learning (PBIL). We will replace the statistical model used in PBIL, which assumes that the binary variables of the optimisation problem are independent, with that of a BM. From the logarithm of the expectation of the function to maximise, we derive specific learning rules for the BM. These learning rules i...
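
For context only (not the paper's method), the sketch below shows the baseline PBIL the abstract refers to, in which each bit is modelled by an independent Bernoulli probability; the paper replaces this product model with a Boltzmann machine. All names and parameter values are illustrative.

import numpy as np

# Baseline PBIL with an independent Bernoulli model over the bits of the problem.
def pbil(fitness, n_bits, pop_size=50, lr=0.1, iters=200, rng=None):
    rng = rng or np.random.default_rng(0)
    p = np.full(n_bits, 0.5)                # marginal probability of each bit being 1
    for _ in range(iters):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        best = pop[np.argmax([fitness(x) for x in pop])]
        p = (1 - lr) * p + lr * best        # move the model toward the best sample
    return p

# Example: maximise the number of ones (one-max).
probs = pbil(lambda x: x.sum(), n_bits=20)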

2012
Patrick Kenny

Boltzmann machines are probability distributions on high-dimensional binary vectors which are analogous to Gaussian Markov Random Fields in that they are fully determined by first- and second-order moments. A key difference, however, is that augmenting Boltzmann machines with hidden variables enlarges the class of distributions that can be modeled, so that in principle it is possib...
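
As a point of reference (not taken from the paper), the first- and second-order structure mentioned above can be written as an energy over binary states; this minimal sketch, with illustrative names, gives the unnormalised distribution of a fully visible machine. Adding hidden units amounts to summing this expression over the hidden configurations.

import numpy as np

# Unnormalised distribution of a fully visible Boltzmann machine on s in {0,1}^n.
# W is symmetric with zero diagonal (second-order terms), b is a bias (first-order terms).
def energy(s, W, b):
    return -0.5 * s @ W @ s - b @ s

def unnormalised_log_prob(s, W, b):
    # p(s) = exp(-E(s)) / Z, where Z sums exp(-E) over all 2^n binary states
    return -energy(s, W, b)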

1996
Lars Kai Hansen, Lars Nonboe Andersen, Ulrik Kjems, Jan Larsen

This contribution concerns a generalization of the Boltzmann Machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization and generalization in the context of Boltzmann Machines. We provide an illustrative example c...

Journal: Physical Review Letters, 2006
S. S. Chikatamarla, S. Ansumali, I. V. Karlin

Efficient, nonlinearly stable entropic lattice Boltzmann models for computational fluid dynamics are presented. A new method for fast evaluation of equilibria to machine precision is proposed. An analytical solution is found for the collision step which guarantees stability and thermodynamic consistency of the scheme. As an example, a novel 15-velocity lattice Boltzmann model is derived and validat...
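
For orientation, the sketch below shows a standard D2Q9 BGK collision step; it is not the entropic collision with an analytical solution that this paper derives, and all names are illustrative.

import numpy as np

# Standard D2Q9 lattice: discrete velocities c_i and their weights w_i.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    cu = c @ u                                   # velocity projected on each lattice direction
    return w * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(u @ u))

def bgk_collide(f, tau):
    rho = f.sum()                                # density from the 9 populations
    u = (f @ c) / rho                            # macroscopic velocity
    return f - (f - equilibrium(rho, u)) / tau   # relax toward equilibrium with time constant tau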

2010
Hannes Schulz, Andreas Müller, Sven Behnke

Restricted Boltzmann Machines are increasingly popular tools for unsupervised learning. They are very general, can cope with missing data and are used to pretrain deep learning machines. RBMs learn a generative model of the data distribution. As exact gradient ascent on the data likelihood is infeasible, typically Markov Chain Monte Carlo approximations to the gradient such as Contrastive Diver...
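
The snippet below is a minimal sketch of one Contrastive Divergence (CD-1) update for a binary RBM, the kind of MCMC approximation the abstract mentions; sizes, learning rate and variable names are illustrative.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.01, rng=np.random.default_rng(0)):
    # positive phase: hidden activations driven by the data vector v0
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative phase: one Gibbs step back to the visibles and up again
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # approximate gradient of the data log-likelihood
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
    return W, b, c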

Journal: Neural Networks, 1995
Lucas C. Parra, Gustavo Deco

Acknowledgments: We especially want to thank Stefan Mießbach for numerous contributions to proving the convergence properties of the network dynamics and for advice concerning the "cornered rat" example. We are very grateful to Ingrid Gabler for supplying the experimental data for this same example. Convergence of the dynamics: We now show the local convergence of the dynamics defined by the MF e...

Journal: CoRR, 2014
Siamak Ravanbakhsh, Russell Greiner, Brendan J. Frey

A new approach to maximum likelihood learning of discrete graphical models, and RBMs in particular, is introduced. Our method, Perturb and Descend (PD), is inspired by two ideas: (I) the perturb-and-MAP method for sampling, and (II) learning by Contrastive Divergence minimization. In contrast to perturb and MAP, PD leverages training data to learn models that do not allow efficient MAP estimation. During...

2016
Hao Wang, Dejing Dou, Daniel Lowd

Deep neural networks are known for their capabilities for automatic feature learning from data. For this reason, previous research has tended to interpret deep learning techniques as data-driven methods, while few advances have been made from knowledge-driven perspectives. We propose to design a semantically rich deep learning model from a knowledge-driven perspective, by introducing formal semanti...

Journal: Neural Computation, 2006
Aapo Hyvärinen

A Boltzmann machine is a classic model of neural computation, and a number of methods have been proposed for its estimation. Most methods are plagued by either very slow convergence or asymptotic bias in the resulting estimates. Here we consider estimation in the basic case of fully visible Boltzmann machines. We show that the old principle of pseudolikelihood estimation provides an estimator t...
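
As a sketch of the objective the abstract refers to, the log-pseudolikelihood of a fully visible Boltzmann machine sums, over units and samples, the log of each unit's conditional probability given the remaining units, which is a logistic function of its net input. Names below are illustrative.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_pseudolikelihood(X, W, b):
    # X: data matrix (n_samples, n_units) of 0/1 values;
    # W: symmetric weights with zero diagonal; b: bias vector.
    net = X @ W + b                        # net input to every unit in every sample
    p_one = sigmoid(net)                   # P(x_i = 1 | all other units)
    return np.sum(X * np.log(p_one) + (1 - X) * np.log(1 - p_one))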

1998
M. A. R. Leisink, H. J. Kappen

Boltzmann machines are able to represent some probability distributions, but the exact learning algorithm requires time exponential in the number of neurons. The approximation method called Linear Response is not only applicable to machines with second-order interactions, but can be extended to Boltzmann machines with third- and higher-order interactions. It is shown that this can be us...
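
The sketch below shows the common second-order mean-field plus linear-response construction for +/-1 units (the starting point that the abstract extends to higher-order interactions): magnetisations from a mean-field fixed point, and correlations from the inverse susceptibility matrix. It is an illustrative sketch, not the paper's extension.

import numpy as np

def mean_field_linear_response(W, theta, iters=200):
    # W: symmetric couplings with zero diagonal; theta: biases of the +/-1 units.
    n = len(theta)
    m = np.zeros(n)
    for _ in range(iters):                    # damped fixed-point iteration for magnetisations
        m = 0.5 * m + 0.5 * np.tanh(W @ m + theta)
    A = np.diag(1.0 / (1.0 - m**2)) - W       # inverse susceptibility matrix
    C = np.linalg.inv(A)                      # linear-response estimate of <s_i s_j> - m_i m_j
    return m, C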
