Search results for: sgd
Number of results: 1169
PURPOSE To estimate the economic cost of myopia among adults aged 40 years and older in Singapore. METHODS A substudy of 113 Singaporean adults aged 40 years and older with myopia (spherical equivalent refraction of at least -0.5 diopters) was conducted within the population-based ancillary study of the Singapore Chinese Eye Study (SCES). A health expenditure questionnaire was used to assess the dire...
We study the properties of the endpoint of stochastic gradient descent (SGD). By approximating SGD as a stochastic differential equation (SDE) we consider the Boltzmann-Gibbs equilibrium distribution of that SDE under the assumption of isotropic variance in loss gradients. Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – cont...
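The abstract above ties the width of SGD's equilibrium distribution to the learning rate, the batch size, and the gradient variance. A minimal sketch of that effect, assuming a 1D quadratic loss and isotropic Gaussian gradient noise (both chosen here purely for illustration, not taken from the paper):

```python
import numpy as np

def sgd_stationary_variance(lr, batch_size, sigma=1.0, steps=200_000, seed=0):
    """Run SGD on the 1D quadratic loss f(x) = x**2 / 2 with Gaussian
    gradient noise of per-example std `sigma`. Averaging a minibatch
    scales the noise variance by 1/batch_size. Returns the empirical
    variance of the iterates after a burn-in period."""
    rng = np.random.default_rng(seed)
    x, samples = 1.0, []
    for t in range(steps):
        grad = x + rng.normal(0.0, sigma / np.sqrt(batch_size))
        x -= lr * grad
        if t > steps // 10:  # discard burn-in before measuring
            samples.append(x)
    return float(np.var(samples))

v_small = sgd_stationary_variance(lr=0.1, batch_size=1)
v_large = sgd_stationary_variance(lr=0.1, batch_size=4)
# For this toy process the exact stationary variance is
# lr * sigma**2 / (batch_size * (2 - lr)), so quadrupling the batch
# size should cut the measured variance roughly fourfold.
```

The ratio `v_small / v_large` coming out near 4 illustrates the point made in the abstract: the learning-rate-to-batch-size ratio, together with the gradient variance, sets the spread of the endpoint distribution.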
We investigated the role submarine groundwater discharge (SGD) plays in the delivery of nutrients and copper to the Elizabeth River (Virginia) estuary, a major subestuary of lower Chesapeake Bay. Using an approach based on radium isotopes, we concluded that two distinct sources of groundwater were equally impacting the estuary: a surface (marsh) aquifer and a deep aquifer source, each with a uniqu...
Heat shock proteins (HSPs) are critical for adaptation to hypoxia and/or ischemia. Previously, we demonstrated that cobalt chloride (CoCl2), a well-known hypoxia mimetic agent, is an inducer of HSP90. In the present study, we tested the hypothesis that CoCl2-induced upregulation of HSP90 is able to provide cardioprotection in serum- and glucose-deprived H9c2 cardiomyocytes (H9c2 cells). Cell via...
The SGD-QN algorithm described in (Bordes et al., 2009) contains a subtle flaw that prevents it from reaching its design goals. Yet the flawed SGD-QN algorithm has worked well enough to be a winner of the first Pascal Large Scale Learning Challenge (Sonnenburg et al., 2008). This document clarifies the situation, proposes a corrected algorithm, and evaluates its performance.
Chapter 1 strongly advocates the stochastic back-propagation method to train neural networks. This is in fact an instance of a more general technique called stochastic gradient descent (SGD). This chapter provides background material, explains why SGD is a good learning algorithm when the training set is large, and provides useful recommendations.
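As a companion to the description above, a minimal SGD loop for least-squares regression (the data, constants, and variable names here are illustrative, not drawn from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = X @ true_w + small observation noise.
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=1000)

# Plain SGD: one gradient step per training example, in shuffled order.
w = np.zeros(3)
lr = 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):
        grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5 * (x.w - y)**2
        w -= lr * grad
```

Because each update touches a single example, the per-step cost is independent of the training-set size, which is the core of the argument that SGD suits large training sets.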
Stochastic gradient descent (SGD) is a well-known method for regression and classification tasks. However, it is an inherently sequential algorithm — at each step, the processing of the current example depends on the parameters learned from the previous examples. Prior approaches to parallelizing SGD, such as HOGWILD! and ALLREDUCE, do not honor these dependences across threads and thus can pot...
"Specific granule" deficiency (SGD) has been previously associated with lactoferrin deficiency. The antimicrobial peptides termed defensins, comprising 30% of normal primary granule proteins, have also been shown to be markedly deficient in SGD. The present study was undertaken to correlate these findings with ultrastructural morphometric analysis and peroxidase cytochemistry. Peroxidase-positi...
Submarine groundwater discharge (SGD) into the ocean is of general interest because it acts as a vehicle for the transport of dissolved contaminants and/or nutrients into the coastal sea and because it may be accompanied by the loss of significant volumes of freshwater. Due to the large-scale and long-term nature of the related hydrological processes, environmental tracers are required for SGD in...
Recent advances in optimization methods used for training convolutional neural networks (CNNs) with kernels, which are normalized according to particular constraints, have shown remarkable success. This work introduces an approach for training CNNs using ensembles of joint spaces of kernels constructed using different constraints. For this purpose, we address a problem of optimization on ensemb...