Search results for: sgd
Number of results: 1169
The optimization procedure is crucial to achieving desirable performance in speech recognition based on deep neural networks (DNNs). Conventionally, DNNs are trained using mini-batch stochastic gradient descent (SGD), which is stable but prone to being trapped in local optima. A recent work based on Nesterov's accelerated gradient (NAG) algorithm is developed by merging the current momentu...
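For context, below is a minimal sketch of the classical (heavy-ball) momentum update next to the Nesterov look-ahead update, applied to a toy quadratic rather than a DNN acoustic model. The objective, learning rate, and momentum coefficient are illustrative assumptions; this is not the specific merged update proposed in the cited work.

```python
# Sketch: classical momentum vs. Nesterov (NAG) updates on a toy quadratic.
# The objective f(w) = 0.5 * ||w||^2, lr, and mu are illustrative assumptions.
import numpy as np

def grad(w):
    # gradient of f(w) = 0.5 * ||w||^2 (stand-in for a mini-batch gradient)
    return w

def nag_step(w, v, lr=0.1, mu=0.9):
    """One Nesterov step: the gradient is evaluated at the look-ahead point."""
    g = grad(w + mu * v)          # look-ahead gradient
    v = mu * v - lr * g           # velocity update
    return w + v, v

def momentum_step(w, v, lr=0.1, mu=0.9):
    """Classical heavy-ball momentum: gradient at the current iterate."""
    g = grad(w)
    v = mu * v - lr * g
    return w + v, v

w_nag, v_nag = np.ones(3), np.zeros(3)
w_mom, v_mom = np.ones(3), np.zeros(3)
for _ in range(50):
    w_nag, v_nag = nag_step(w_nag, v_nag)
    w_mom, v_mom = momentum_step(w_mom, v_mom)
print("NAG:", np.linalg.norm(w_nag), "momentum:", np.linalg.norm(w_mom))
```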
Monterey Bay, California (CA) receives nutrients from multiple sources, including river discharge, upwelling of deep water, and submarine groundwater discharge (SGD). Here we evaluate the relative importance of these sources to Northern Monterey Bay with a mi...
OBJECTIVES To estimate the 3-month direct and indirect costs associated with osteoporotic fractures from both the hospital's and patient's perspectives in Singapore and to compare the cost between acute and prevalent osteoporotic fractures. METHODS Resource use and expenditure data were collected using interviewer-administered questionnaires at baseline and at a 3-month follow-up between July...
We consider stochastic strongly convex optimization with a complex inequality constraint, which may lead to computationally expensive projections in the iterations of stochastic gradient descent (SGD) methods. To reduce the computation costs of these projections, we propose an Epoch-Projection Stochastic Gradient Descent (Epro-SGD) method. The p...
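The idea of paying for the expensive projection only once per epoch can be sketched as follows. The quadratic loss, Euclidean-ball constraint, and step size are illustrative assumptions; this is not the Epro-SGD algorithm as specified in the paper.

```python
# Sketch: run plain SGD steps within an epoch and apply the (potentially
# expensive) projection only once per epoch, instead of after every update.
import numpy as np

def project_onto_ball(w, radius=1.0):
    """Projection onto {w : ||w|| <= radius}, a stand-in for an expensive
    projection induced by a complex inequality constraint."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def epoch_projection_sgd(grad_fn, w0, epochs=20, steps_per_epoch=50, lr=0.05):
    w = w0.copy()
    for _ in range(epochs):
        for _ in range(steps_per_epoch):
            w -= lr * grad_fn(w)          # unconstrained SGD step
        w = project_onto_ball(w)          # project once per epoch
    return w

rng = np.random.default_rng(0)
target = rng.normal(size=5)
# noisy gradient of the strongly convex toy objective 0.5 * ||w - target||^2
grad_fn = lambda w: (w - target) + 0.01 * rng.normal(size=5)
print(epoch_projection_sgd(grad_fn, np.zeros(5)))
```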
Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning. In order to accelerate the convergence of SGD, a few advanced techniques have been developed in recent years, including variance reduction, stochastic coordinate sampling, and Nesterov’s acceleration method. Furthermore, in order to improve the training speed and/or leverage larger-scale training data...
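As one concrete instance of the variance-reduction technique mentioned above, here is a minimal SVRG-style sketch on a least-squares problem. The data, snapshot schedule, and step size are assumptions made for illustration and are not taken from the cited paper.

```python
# Sketch of SVRG-style variance reduction: at each epoch, compute the full
# gradient at a snapshot and correct the per-example stochastic gradients.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)

def grad_i(w, i):
    return (X[i] @ w - y[i]) * X[i]           # per-example gradient

def full_grad(w):
    return X.T @ (X @ w - y) / n

w, lr = np.zeros(d), 0.01
for epoch in range(30):
    w_snap = w.copy()
    mu = full_grad(w_snap)                    # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced stochastic gradient
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= lr * g
print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))
```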
The budding yeast, Saccharomyces cerevisiae, has been experimentally manipulated for several decades. Much of the information generated is available in the Saccharomyces Genome Database (SGD, http://www.yeastgenome.org/). SGD contains large datasets of both genomic and proteomic information, as well as tools for data analysis. This paper will highlight three datasets that are maintained ...
Stochastic gradient descent (SGD) is still the workhorse for many practical problems. However, it converges slowly and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably, but many attempts in this direction either aim at solving specialized problems or result in methods significantly more complicated than SGD. This paper proposes a new method t...
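To illustrate the general idea of preconditioning SGD, here is a minimal sketch using an AdaGrad-style diagonal preconditioner on a badly scaled quadratic. This is not the preconditioner proposed in the paper; the problem and hyperparameters are illustrative assumptions.

```python
# Sketch: diagonal preconditioning of SGD via an AdaGrad-style accumulator,
# which rescales each coordinate's step by the history of its gradients.
import numpy as np

rng = np.random.default_rng(0)
# badly scaled quadratic: f(w) = 0.5 * sum_j h_j * w_j^2
h = np.array([100.0, 10.0, 1.0, 0.1])

def noisy_grad(w):
    return h * w + 0.01 * rng.normal(size=w.shape)

w = np.ones(4)
acc = np.zeros(4)                        # running sum of squared gradients
lr, eps = 0.5, 1e-8
for _ in range(500):
    g = noisy_grad(w)
    acc += g * g
    w -= lr * g / (np.sqrt(acc) + eps)   # precondition each coordinate
print("final iterate:", w)
```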
The mechanisms by which Shaoyao-Gancao decoction (SGD) inhibits the production of inflammatory cytokines in serum and brain tissue after cerebral ischemia-reperfusion (CI-RP) in rats were investigated. A right middle cerebral artery occlusion was used to induce CI-RP, after which the rats were divided into model (n = 39), SGD (n = 28), clopidogrel (n = 25) and sham-operated (n = 34) groups. The ...
This work provides a simplified proof of the statistical minimax optimality of (iterate-averaged) stochastic gradient descent (SGD) for the special case of least squares. This result is obtained by analyzing SGD as a stochastic process and by sharply characterizing the stationary covariance matrix of this process. The finite-rate optimality characterization captures the constant factors and ad...
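Below is a minimal sketch of iterate-averaged (Polyak-Ruppert) SGD on a least-squares stream, the setting analyzed in this abstract. The data, constant step size, and single-pass schedule are illustrative assumptions rather than the paper's exact setting.

```python
# Sketch: one pass of constant-step SGD over a least-squares stream, while
# maintaining a running average of the iterates (Polyak-Ruppert averaging).
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)   # noisy linear observations

w = np.zeros(d)
w_avg = np.zeros(d)
lr = 0.01
for t in range(n):
    x_t, y_t = X[t], y[t]
    g = (x_t @ w - y_t) * x_t               # per-sample least-squares gradient
    w -= lr * g
    w_avg += (w - w_avg) / (t + 1)          # running average of the iterates

print("last iterate error:", np.linalg.norm(w - w_star))
print("averaged iterate error:", np.linalg.norm(w_avg - w_star))
```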
[Chart: number of search results per year]