
Search results for: dropout rate

Number of results: 751326
2013
Pierre Baldi, Peter J. Sadowski

Dropout is a relatively new algorithm for training neural networks which relies on stochastically “dropping out” neurons during training in order to avoid the co-adaptation of feature detectors. We introduce a general formalism for studying dropout on either units or connections, with arbitrary probability values, and use it to analyze the averaging and regularizing properties of dropout in bot...
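For orientation, here is a minimal sketch of the unit-level "dropping out" this abstract describes, written in plain NumPy; the inverted-dropout scaling and the function name are illustrative choices, not taken from the paper.

```python
import numpy as np

# A from-scratch sketch of (inverted) dropout on a layer's activations.
def dropout(h, p=0.5, training=True, rng=np.random.default_rng(0)):
    if not training or p == 0.0:
        return h                      # test time: no stochastic masking
    mask = rng.random(h.shape) >= p   # keep each unit with probability 1 - p
    # Scale survivors by 1/(1 - p) so the expected activation matches the
    # mask-free forward pass -- the averaging property the abstract analyzes.
    return h * mask / (1.0 - p)

h = np.ones((2, 4))
print(dropout(h, p=0.5))
```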

2017
Yarin Gal, Jiri Hron, Alex Kendall

• Gal and Ghahramani (2015) reinterpreted dropout regularisation as approximate inference in BNNs
• Dropout probabilities $p_k$ are variational parameters of the approximate posterior $q_\theta(\omega) = \prod_k q_{M_k, p_k}(W_k)$, where $W_k = M_k \cdot \mathrm{diag}(z_k)$ and $z_{k,l} \overset{\text{iid}}{\sim} \mathrm{Bernoulli}(1 - p_k)$
• The Concrete distribution (Maddison et al., Jang et al.) relaxes the Categorical distribution to obtain gradients w.r.t. the probability vector – E...
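A small sketch of the relaxed (Concrete) Bernoulli keep-mask that makes the dropout probability differentiable, following the construction of Maddison et al. / Jang et al.; the temperature value and the helper's name are illustrative assumptions.

```python
import torch

# Relaxed Bernoulli(1 - p) keep-mask: as t -> 0 this approaches a hard
# {0, 1} mask, but stays differentiable w.r.t. p for any t > 0.
def concrete_keep_mask(p, shape, t=0.1, eps=1e-7):
    u = torch.rand(shape)
    logits = (torch.log(1.0 - p + eps) - torch.log(p + eps)
              + torch.log(u + eps) - torch.log(1.0 - u + eps))
    return torch.sigmoid(logits / t)

p = torch.tensor(0.3, requires_grad=True)  # variational dropout probability
z = concrete_keep_mask(p, (5,))
z.sum().backward()                         # gradient w.r.t. p exists
print(z, p.grad)
```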

Journal: CoRR 2017
Konrad Zolna, Devansh Arpit, Dendi Suhubdy, Yoshua Bengio

Recurrent neural networks (RNNs) form an important class of architectures among neural networks, useful for language modeling and sequential prediction. However, RNNs are known to be harder to optimize than feed-forward neural networks. A number of techniques have been proposed in the literature to address this problem. In this paper we propose a simple technique called fraternal dropout that t...
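A minimal sketch of the fraternal-dropout idea of training two dropout-perturbed copies to agree; a toy classifier stands in for the paper's RNN language model, and `kappa` is an illustrative weight for the consistency penalty, not the paper's setting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
kappa = 0.1

x = torch.randn(16, 32)              # dummy batch
y = torch.randint(0, 10, (16,))      # dummy targets

model.train()
opt.zero_grad()
logits_a = model(x)                  # first pass: one dropout mask
logits_b = model(x)                  # second pass: an independent mask
# Average the two standard losses and penalize the squared distance
# between the two dropout-perturbed predictions.
loss = 0.5 * (F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y))
loss = loss + kappa * (logits_a - logits_b).pow(2).mean()
loss.backward()
opt.step()
```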

2016
Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder

Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process implicitly trains an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for t...
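To make the contrast concrete: the sketch below compares the typical test-time workaround (deterministic weight scaling) with the explicit Monte Carlo average over sampled sub-networks; the toy model and sample count are illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(64, 10))
x = torch.randn(8, 32)

# Typical workaround: switch dropout off and rely on the built-in
# (inverted-dropout) weight scaling -- one deterministic pass.
model.eval()
with torch.no_grad():
    scaled = model(x)

# Explicit ensemble average: keep dropout active and average many
# stochastic passes over sampled sub-networks.
model.train()
with torch.no_grad():
    mc_avg = torch.stack([model(x) for _ in range(100)]).mean(dim=0)

print((scaled - mc_avg).abs().max())  # the two predictions roughly agree
```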

Journal: CoRR 2016
Suraj Srinivas, R. Venkatesh Babu

Deep Neural Networks often require good regularizers to generalize well. Dropout is one such regularizer that is widely used among Deep Learning practitioners. Recent work has shown that Dropout can also be viewed as performing Approximate Bayesian Inference over the network parameters. In this work, we generalize this notion and introduce a rich family of regularizers which we call Generalized...
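As a pointer to the Bayesian view the abstract references (not the paper's Generalized Dropout family itself), here is a Monte Carlo dropout sketch in which sampled dropout masks play the role of posterior samples, so the spread of predictions gives an uncertainty estimate; the toy regressor and sample count are illustrative.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(),
                      nn.Dropout(p=0.2), nn.Linear(32, 1))
model.train()  # keep dropout stochastic at prediction time

x = torch.randn(4, 8)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(200)])

mean = samples.mean(dim=0)  # approximate posterior predictive mean
std = samples.std(dim=0)    # disagreement across sampled sub-networks
print(mean.squeeze(), std.squeeze())
```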

Journal: Obesity Facts 2012
Pernilla Danielsson, Viktoria Svensson, Jan Kowalski, Gisela Nyberg, Orjan Ekblom, Claude Marcus

OBJECTIVE To assess whether first-year weight loss, age, and socioeconomic background correlate with the success rate of continuous long-term behavioral obesity treatment. METHODS In a 3-year longitudinal study, obese children (n = 684) were divided into three groups based on age at the start of treatment: 6-9 years, 10-13 years, and 14-16 years. RESULTS The mean BMI standard deviation ...

2017
Benjamin Baguune, Joyce Aputere Ndago, Martin Nyaaba Adokiya

BACKGROUND Immunization against diseases is one of the most important public health interventions and a cost-effective means of preventing childhood morbidity, mortality, and disability. However, a proportion of children, particularly in Africa, are not fully immunized with the recommended vaccines. Thus, many children are still susceptible to the Expanded Program on Immunization (EPI) targeted dis...

Journal: CoRR 2015
Kishore Reddy Konda, Xavier Bouthillier, Roland Memisevic, Pascal Vincent

Dropout is typically interpreted as bagging a large number of models sharing parameters. We show that using dropout in a network can also be interpreted as a kind of data augmentation in the input space without domain knowledge. We present an approach to projecting the dropout noise within a network back into the input space, thereby generating augmented versions of the training data, and we sh...
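A minimal sketch of the back-projection idea, loosely following the abstract: sample dropout noise at a hidden layer once, then optimize an input so that a clean forward pass reproduces the noisy one, yielding an augmented example in input space. The toy network, step size, and iteration count are illustrative assumptions, not the paper's procedure.

```python
import torch
import torch.nn.functional as F

lin1 = torch.nn.Linear(32, 64)
lin2 = torch.nn.Linear(64, 64)

def forward(x, drop=False):
    h = torch.relu(lin1(x))
    if drop:
        h = F.dropout(h, p=0.5, training=True)  # sample hidden-layer noise
    return torch.relu(lin2(h))

x = torch.randn(1, 32)
with torch.no_grad():
    target = forward(x, drop=True)   # one sample of dropout noise

# Find an input whose *clean* forward pass reproduces the noisy one:
# that input is an augmented version of x living in input space.
x_aug = x.clone().requires_grad_(True)
opt = torch.optim.Adam([x_aug], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = (forward(x_aug) - target).pow(2).mean()
    loss.backward()
    opt.step()
```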

2018
John Vergel, Gustavo A Quintero, Andrés Isaza-Restrepo, Martha Ortiz-Fonseca, Catalina Latorre-Santos, Juan Mauricio Pardo-Oviedo

The relationship between students' withdrawal and educational variables has generated a considerable number of publications. As the explosion of information in the sciences and the development of integration theories led to the creation of different curriculum designs, it has been assumed that differences among designs explain academic success and, therefore, students' retention. However, little attention has been given to ...

2014
Chee Cheow Lim, Nai Shyan Lai, Gim Heng Tan

A highly current-efficient, fast-transient-response, capacitor-free low-dropout regulator (LDO) for System-on-Chip (SoC) applications is presented in this paper. The proposed architecture is implemented in 0.35 μm CMOS technology. The proposed circuit is based on a differential transconductance stage and a push-pull amplifier. Common-mode feedback (CMFB) resistors and direct voltage spike detection u...
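Note that "dropout" here means the regulator's minimum input-output headroom, not the neural-network technique in the results above. A back-of-the-envelope sketch of the two figures of merit the abstract highlights, with illustrative example values that are not taken from the paper:

```python
# Example numbers are illustrative, not from the paper.
v_in = 3.3        # supply voltage at the LDO input (V)
v_out = 3.1       # regulated output voltage (V)
i_load = 100e-3   # current delivered to the load (A)
i_q = 50e-6       # quiescent current the regulator itself consumes (A)

dropout_voltage = v_in - v_out                # input-output headroom: 0.2 V here
current_efficiency = i_load / (i_load + i_q)  # ~99.95% for these values
print(f"dropout voltage:    {dropout_voltage:.2f} V")
print(f"current efficiency: {current_efficiency:.2%}")
```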