Search results for: sgd

Number of results: 1169

2010
Stacia R. Engel, Rama Balakrishnan, Gail Binkley, Karen R. Christie, Maria C. Costanzo, Selina S. Dwight, Dianna G. Fisk, Jodi E. Hirschman, Benjamin C. Hitz, Eurie L. Hong, Cynthia J. Krieger, Michael S. Livstone, Stuart R. Miyasato, Robert S. Nash, Rose Oughtred, Julie Park, Marek S. Skrzypek, Shuai Weng, Edith D. Wong, Kara Dolinski, David Botstein, J. Michael Cherry

The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is a scientific database for the molecular biology and genetics of the yeast Saccharomyces cerevisiae, which is commonly known as baker's or budding yeast. The information in SGD includes functional annotations, mapping and sequence information, protein domains and structure, expression data, mutant phenotypes, physical and gen...

Journal: CoRR 2018
Chen Xing, Devansh Arpit, Christos Tsirigotis, Yoshua Bengio

Exploring why stochastic gradient descent (SGD)-based optimization methods train deep neural networks (DNNs) that generalize well has become an active area of research. Towards this end, we empirically study the dynamics of SGD when training over-parametrized DNNs. Specifically, we study the DNN loss surface along the trajectory of SGD by interpolating the loss surface between parameters from co...
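As a hedged illustration of this kind of trajectory study, the NumPy sketch below evaluates a loss along the straight line between two parameter vectors, as one would between consecutive SGD iterates; the toy loss and the vectors `w_a`, `w_b` are assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Toy stand-in for a DNN training loss (illustrative assumption).
def loss(w):
    return 0.5 * np.sum(w ** 2) + 0.1 * np.sum(np.sin(3.0 * w))

# Pretend these are parameter vectors from two consecutive SGD iterates.
w_a = np.array([1.0, -0.5])
w_b = np.array([0.8, -0.3])

# Evaluate the loss on the line w(alpha) = (1 - alpha) * w_a + alpha * w_b.
for alpha in np.linspace(0.0, 1.0, 11):
    w = (1.0 - alpha) * w_a + alpha * w_b
    print(f"alpha={alpha:.1f}  loss={loss(w):.4f}")
```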

2018
Pratik Chaudhari, Stefano Soatto

Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential, however, is not the original loss function in g...
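One standard way to write an objective of the kind described, an average potential plus an entropic regularizer, is the free-energy functional below; the notation (ρ for the distribution over weights w, Φ for the potential, β⁻¹ for a temperature-like coefficient) is a hedged reconstruction for illustration, not quoted from the paper.

```latex
% Average potential over a weight distribution \rho, with entropic regularization:
F(\rho) \;=\; \int \Phi(w)\,\rho(w)\,\mathrm{d}w \;-\; \beta^{-1} H(\rho),
\qquad
H(\rho) \;=\; -\int \rho(w)\,\log\rho(w)\,\mathrm{d}w .
```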

2016
Conghui Tan, Shiqian Ma, Yu-Hong Dai, Yuqiu Qian

One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm. Since the traditional line search technique does not apply to stochastic optimization algorithms, the common practice in SGD is either to use a diminishing step size or to tune a fixed step size by hand. Apparently, these two approaches can be time consum...
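A minimal sketch of the two common practices named here, on a toy least-squares problem; the schedule eta_t = eta0 / (1 + t) and the constant 0.01 are illustrative choices, not the step-size rule this paper develops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize the average of (a_i^T w - b_i)^2
# (illustrative data, not from the paper).
A = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
b = A @ w_true + 0.01 * rng.normal(size=1000)

def sgd(step_size, n_steps=2000):
    """Run SGD with a step-size schedule given as a function of the step t."""
    w = np.zeros(5)
    for t in range(n_steps):
        i = rng.integers(len(A))                  # draw one example
        grad = 2.0 * (A[i] @ w - b[i]) * A[i]     # its stochastic gradient
        w -= step_size(t) * grad
    return w

# Common practice 1: diminishing step size, e.g. eta_t = eta0 / (1 + t).
w_dim = sgd(lambda t: 0.5 / (1.0 + t))
# Common practice 2: fixed step size tuned by hand (0.01 is illustrative).
w_fix = sgd(lambda t: 0.01)

print("diminishing schedule, error:", np.linalg.norm(w_dim - w_true))
print("fixed step size,      error:", np.linalg.norm(w_fix - w_true))
```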

2016
Seyed Hadi Mousavi, Behnaz Naghizade, Solmaz Pourgonabadi, Ahmad Ghorbani

OBJECTIVE: Oxidative stress plays a key role in the pathophysiology of brain ischemia and neurodegenerative disorders. Previous studies indicated that Viola tricolor and Viola odorata are rich sources of antioxidants. This study aimed to determine whether these plants protect neurons against serum/glucose deprivation (SGD)-induced cell death in an in vitro model of ischemia and neurodegeneration...

Journal: CoRR 2018
Robert D. Kleinberg, Yuanzhi Li, Yang Yuan

Stochastic gradient descent (SGD) is widely used in machine learning. Although commonly viewed as a fast but not accurate version of gradient descent (GD), it always finds better solutions than GD for modern neural networks. In order to understand this phenomenon, we take an alternative view that SGD is working on the convolved (thus smoothed) version of the loss function. We show that, e...
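A hedged illustration of the smoothing view: Monte-Carlo averaging of raw gradients at Gaussian-perturbed points estimates the gradient of the loss convolved with a Gaussian kernel, since ∇(f * k_σ)(x) = E_ε[∇f(x + ε)]. The one-dimensional loss and the width σ below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Non-convex 1-D loss with sharp wiggles (illustrative assumption).
f      = lambda x: x ** 2 + 0.3 * np.sin(20.0 * x)
grad_f = lambda x: 2.0 * x + 6.0 * np.cos(20.0 * x)

sigma = 0.2   # width of the Gaussian smoothing kernel (assumed)
x = 1.5

# Gradient of the convolved (smoothed) loss, estimated by averaging
# the raw gradient at Gaussian-perturbed points.
eps = rng.normal(scale=sigma, size=100_000)
smoothed_grad = np.mean(grad_f(x + eps))

print("raw gradient:     ", grad_f(x))
print("smoothed gradient:", smoothed_grad)
```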

Journal: The Science of the Total Environment 2012
Chun Ming Lee, Jiu Jimmy Jiao, Xin Luo, Willard S Moore

Tolo Harbour, located in the northeastern part of Hong Kong's New Territories, China, has a high frequency of algal blooms and red tides. An attempt was made to first quantify the submarine groundwater discharge (SGD) into Tolo Harbour using ²²⁶Ra, and then to estimate the nutrient fluxes into the Harbour by this pathway. The total SGD was estimated to be 8.28×10⁶ m³ d⁻¹, while the fres...

Journal: JoWUA 2016
István Hegedüs, Árpád Berta, Márk Jelasity

Stochastic gradient descent (SGD) is one of the most widely applied machine learning algorithms in unreliable, large-scale decentralized environments. In this type of environment, data privacy is a fundamental concern. The most popular way to investigate this topic is based on the framework of differential privacy. However, many important implementation details and the performance of differentially priv...
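For context, the most common differentially private SGD recipe clips each per-example gradient and adds calibrated Gaussian noise to the average; the sketch below shows that standard recipe with illustrative hyperparameters, and does not reproduce the decentralized protocol studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One differentially private SGD step: clip each per-example gradient
    to L2 norm clip_norm, average, then add Gaussian noise whose scale is
    noise_mult * clip_norm on the sum (hence divided by the batch size)."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(scale=noise_mult * clip_norm / len(per_example_grads),
                       size=w.shape)
    return w - lr * (avg + noise)

# Toy usage with random per-example gradients (illustrative).
w = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(32)]
w = dp_sgd_step(w, grads)
print(w)
```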

Journal: CoRR 2015
Guillaume Bouchard, Théo Trouillon, Julien Perez, Adrien Gaidon

Stochastic Gradient Descent (SGD) is one of the most widely used techniques for online optimization in machine learning. In this work, we accelerate SGD by adaptively learning how to sample the most useful training examples at each time step. First, we show that SGD can be used to learn the best possible sampling distribution of an importance sampling estimator. Second, we show that the samplin...
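A minimal sketch of importance-sampled SGD on a toy least-squares problem: examples are drawn from a non-uniform distribution p, and each gradient is reweighted by 1/(N·p_i) so the update remains an unbiased estimate of the full gradient. Sampling in proportion to the current per-example loss is an illustrative proxy, not the learned sampling distribution the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares data (illustrative).
A = rng.normal(size=(500, 4))
b = A @ rng.normal(size=4)

w = np.zeros(4)
lr = 0.01
N = len(A)

for step in range(1000):
    # Sampling distribution: proportional to the current per-example loss
    # (an illustrative proxy for "most useful" examples).
    losses = (A @ w - b) ** 2
    p = (losses + 1e-8) / np.sum(losses + 1e-8)
    i = rng.choice(N, p=p)

    grad_i = 2.0 * (A[i] @ w - b[i]) * A[i]
    # Importance weight 1/(N * p_i) keeps the gradient estimate unbiased:
    # E[grad_i / (N * p_i)] equals the average gradient over all examples.
    w -= lr * grad_i / (N * p[i])

print("final loss:", np.mean((A @ w - b) ** 2))
```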

[Chart: number of search results per year]