Search results for: sgd

Number of results: 1169

2016
Jen-Tzung Chien, Pei-Wen Huang, Tan Lee

The optimization procedure is crucial to achieving desirable performance in speech recognition based on deep neural networks (DNNs). Conventionally, DNNs are trained using mini-batch stochastic gradient descent (SGD), which is stable but prone to being trapped in local optima. A recent work based on Nesterov's accelerated gradient (NAG) algorithm was developed by merging the current momentu...
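For context, the NAG-style update evaluates the gradient at a look-ahead point that already includes the momentum. A minimal sketch, assuming a generic mini-batch gradient function; the toy quadratic and parameter values are illustrative, not the authors' setup:

```python
import numpy as np

def nag_step(w, v, grad_fn, lr=0.01, mu=0.9):
    """One Nesterov-accelerated gradient step.
    w: parameters, v: velocity, grad_fn: (mini-batch) gradient at a point."""
    g = grad_fn(w + mu * v)   # gradient at the look-ahead point
    v = mu * v - lr * g       # update the velocity
    return w + v, v

# Toy usage: minimize 0.5 * ||w - target||^2
target = np.array([1.0, -2.0, 3.0])
grad_fn = lambda w: w - target
w, v = np.zeros(3), np.zeros(3)
for _ in range(200):
    w, v = nag_step(w, v, grad_fn)
# w is now close to target
```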

2016
Alanna L. Lecher, Andrew T. Fisher, Adina Paytan

Monterey Bay, California (CA) receives nutrients from multiple sources, including river discharge, upwelling of deep water, and submarine groundwater discharge (SGD). Here we evaluate the relative importance of these sources to Northern Monterey Bay with a mi...

Journal: Value in Health Regional Issues, 2017
Charmaine Shuyu Ng, Tang Ching Lau, Yu Ko

OBJECTIVES: To estimate the 3-month direct and indirect costs associated with osteoporotic fractures from both the hospital's and the patient's perspectives in Singapore, and to compare costs between acute and prevalent osteoporotic fractures. METHODS: Resource use and expenditure data were collected using interviewer-administered questionnaires at baseline and at a 3-month follow-up between July...

2016
Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang

We consider stochastic strongly convex optimization with a complex inequality constraint. This constraint may lead to computationally expensive projections in the iterations of stochastic gradient descent (SGD) methods. To reduce the computational cost of these projections, we propose an Epoch-Projection Stochastic Gradient Descent (Epro-SGD) method. The p...
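For reference, the projection cost mentioned above arises because plain projected SGD projects after every iteration, whereas epoch-projection schemes such as Epro-SGD project far less often. A minimal sketch of the plain projected variant, assuming an illustrative L2-ball constraint (this is not the paper's algorithm):

```python
import numpy as np

def project_l2_ball(w, radius=1.0):
    """Euclidean projection onto {w : ||w||_2 <= radius} (illustrative constraint)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def projected_sgd(stoch_grad, dim, steps=1000, lr0=0.5):
    """Plain projected SGD: one (potentially expensive) projection per iteration.
    Epoch-projection schemes instead defer the projection to the end of each epoch."""
    w = np.zeros(dim)
    for t in range(1, steps + 1):
        w = w - (lr0 / t) * stoch_grad(w)   # O(1/t) step size for strong convexity
        w = project_l2_ball(w)              # projection every iteration
    return w
```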

2016
Qi Meng, Wei Chen, Jingcheng Yu, Taifeng Wang, Zhiming Ma, Tie-Yan Liu

Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning. To accelerate the convergence of SGD, several advanced techniques have been developed in recent years, including variance reduction, stochastic coordinate sampling, and Nesterov's acceleration method. Furthermore, to improve the training speed and/or leverage larger-scale training data...
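As one example of the variance-reduction techniques mentioned, an SVRG-style control variate keeps the stochastic gradient unbiased while shrinking its variance near a snapshot point. A minimal sketch with illustrative function names, not tied to this particular paper's method:

```python
import numpy as np

def svrg(grad_i, full_grad, w0, n, epochs=20, inner=100, lr=0.1):
    """SVRG-style variance reduction (sketch).
    grad_i(w, i): gradient of the i-th sample loss; full_grad(w): full-data gradient."""
    rng = np.random.default_rng(0)
    w_snap = np.asarray(w0, dtype=float).copy()
    for _ in range(epochs):
        mu = full_grad(w_snap)                 # full gradient at the snapshot
        w = w_snap.copy()
        for _ in range(inner):
            i = rng.integers(n)
            # Control variate: unbiased, with variance that shrinks as w nears w_snap
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w = w - lr * g
        w_snap = w
    return w_snap
```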

Journal: Advances in Dental Research, 2011

2003
Qing Dong, Rama Balakrishnan, Gail Binkley, Karen R. Christie, Maria C. Costanzo, Kara Dolinski, Selina S. Dwight, Stacia R. Engel, Dianna G. Fisk, Jodi E. Hirschman, Eurie L. Hong, Robert S. Nash, Laurie Issel-Tarver, Anand Sethuraman, Chandra L. Theesfeld, Shuai Weng, David Botstein, J. Michael Cherry

The budding yeast, Saccharomyces cerevisiae, has been experimentally manipulated for several decades. Much of the information generated is available in the Saccharomyces Genome Database (SGD, http://www.yeastgenome.org/). SGD contains large datasets of both genomic and proteomic information, as well as tools for data analysis. This paper will highlight three datasets that are maintained ...

Journal: IEEE Transactions on Neural Networks and Learning Systems, 2017
Xi-Lin Li

Stochastic gradient descent (SGD) is still the workhorse for many practical problems. However, it converges slowly and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably, but many attempts in this direction either aim at solving specialized problems or result in significantly more complicated methods than SGD. This paper proposes a new method t...
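The general idea of preconditioning SGD is to rescale the gradient before applying the update. A minimal sketch using a diagonal, RMSProp-style preconditioner; this only illustrates the concept and is not the preconditioner estimator proposed in the paper:

```python
import numpy as np

def preconditioned_sgd_step(w, g, v, lr=0.01, beta=0.99, eps=1e-8):
    """One diagonally preconditioned SGD step (RMSProp-style scaling).
    w: parameters, g: stochastic gradient, v: running estimate of E[g * g]."""
    v = beta * v + (1 - beta) * g * g
    step = g / (np.sqrt(v) + eps)   # gradient rescaled by the diagonal preconditioner
    return w - lr * step, v
```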

2016
Ying Zhang, Xinling Jia, Jian Yang, Qing Li, Guofeng Yan, Zhongju Xu, Jingye Wang

The mechanisms by which Shaoyao-Gancao decoction (SGD) inhibits the production of inflammatory cytokines in serum and brain tissue after cerebral ischemia-reperfusion (CI-RP) in rats were investigated. A right middle cerebral artery occlusion was used to induce CI-RP, after which the rats were divided into model (n = 39), SGD (n = 28), clopidogrel (n = 25) and sham-operated (n = 34) groups. The ...

2017
Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, Aaron Sidford

This work provides a simplified proof of the statistical minimax optimality of (iterate-averaged) stochastic gradient descent (SGD) for the special case of least squares. This result is obtained by analyzing SGD as a stochastic process and by sharply characterizing the stationary covariance matrix of this process. The finite-rate optimality characterization captures the constant factors and ad...
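Iterate averaging for least squares can be sketched as constant-step SGD whose iterates are averaged on the fly. A minimal illustration, assuming a plain in-memory data matrix; this reflects the general averaging idea rather than the paper's analysis setup:

```python
import numpy as np

def averaged_sgd_least_squares(X, y, lr=0.01, passes=5):
    """Constant-step SGD on 0.5 * (x_i @ w - y_i)^2 with running (Polyak) iterate averaging."""
    n, d = X.shape
    w = np.zeros(d)
    w_bar = np.zeros(d)
    t = 0
    rng = np.random.default_rng(0)
    for _ in range(passes):
        for i in rng.permutation(n):
            g = (X[i] @ w - y[i]) * X[i]   # stochastic gradient of one sample
            w = w - lr * g
            t += 1
            w_bar += (w - w_bar) / t       # running average of all iterates so far
    return w_bar
```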

Chart of the number of search results per year
