Search results for: gradient method

Number of results: 1,723,012

2015
Nataša Krejić, Nataša Krklec Jerinkić

We consider the Spectral Projected Gradient method for solving constrained optimization problems with the objective function in the form of a mathematical expectation. It is assumed that the feasible set is convex, closed, and easy to project on. The objective function is approximated by a sequence of Sample Average Approximation functions with different sample sizes. The sample size update is bas...
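
A minimal sketch of the ingredients named above, assuming a box feasible set [-1, 1]^2 (easy to project on) and a fixed Sample Average Approximation of the expectation objective; the paper's sample-size update rule is not reproduced, and the objective and constants are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.normal(size=(100, 2))      # xi_1..xi_N defining the SAA

    def saa_grad(x):
        # gradient of f_N(x) = (1/N) * sum_i 0.5 * ||x - xi_i||^2
        return np.mean(x - samples, axis=0)

    def project(x, lo=-1.0, hi=1.0):
        return np.clip(x, lo, hi)            # projection onto the box

    x, g, alpha = np.zeros(2), saa_grad(np.zeros(2)), 1.0
    for k in range(50):
        x_new = project(x - alpha * g)       # projected gradient step
        g_new = saa_grad(x_new)
        s, y = x_new - x, g_new - g
        # safeguarded Barzilai-Borwein ("spectral") step length
        alpha = np.clip(s @ s / (s @ y), 1e-4, 1e4) if s @ y > 1e-12 else 1.0
        x, g = x_new, g_new
    print(x)                                 # approaches the SAA minimizer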

2018
Shi Pu, Angelia Nedić

In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a dis...
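
A toy instance of this setup, under assumed ingredients: five agents on a ring, local costs f_i(x) = 0.5*||x - b_i||^2 (smooth and strongly convex), a doubly stochastic mixing matrix, unbiased noisy gradients, and a diminishing step size. It illustrates the problem class rather than the paper's particular method.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 5, 3
    b = rng.normal(size=(n, d))              # each agent's local minimizer
    x = np.zeros((n, d))                     # one iterate per agent

    # doubly stochastic weights on a ring: self 1/2, each neighbor 1/4
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

    for k in range(1, 2001):
        noisy_grad = (x - b) + 0.1 * rng.normal(size=(n, d))
        x = W @ x - (1.0 / k) * noisy_grad   # consensus + local gradient step
    print(x.mean(axis=0))                    # agents agree near mean of b_i
    print(b.mean(axis=0))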

2005
Lázaro Cárdenas

This paper proposes a method to render nonlinear affine-in-control discrete-time systems passive. The methodology is based on the discrete-time version of the speed-gradient (SG) algorithm. For the application of the SG algorithm, quasi-V-passive and feedback quasi-V-passive systems are introduced. Two kinds of feedback laws rendering the system locally quasi-V-passive are obtained: a...
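
A minimal sketch of a discrete-time speed-gradient update, on an assumed scalar plant x_{k+1} = a*x_k + b*u_k with goal function Q(x) = x^2/2: the control moves along the negative gradient, with respect to u, of the one-step increment of Q. The quasi-V-passivity constructions of the paper are not reproduced here.

    a, b, gamma = 1.2, 1.0, 0.5    # unstable open loop; SG gain (assumed)
    x, u = 1.0, 0.0
    for k in range(40):
        # d/du Q(a*x + b*u) = b * (a*x + b*u)
        u -= gamma * b * (a * x + b * u)   # speed-gradient adjustment
        x = a * x + b * u                  # plant update
    print(x)                               # state driven toward the origin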

2014
Georgy Kukharev, Yuri Matveev, Nadezhda Shchegoleva

We propose a method for generating standard linear barcodes from facial images. Our method uses the difference in gradients of image brightness. It involves averaging the gradients over a limited number of intervals, quantization of the results into decimal digits from 0 to 9, and table conversion into the final barcode. The proposed solution is computationally low-cost and does not requir...
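
A minimal sketch of that pipeline on a synthetic image: take horizontal brightness gradients, average them over a fixed number of column intervals, and quantize the averages to decimal digits 0-9. The final table conversion to a concrete linear barcode symbology is omitted, and a real use would load a face image in place of the random array.

    import numpy as np

    rng = np.random.default_rng(2)
    img = rng.random((64, 64))                   # stand-in for a face image
    gx = np.abs(np.diff(img, axis=1))            # horizontal brightness gradient

    n_intervals = 13                             # assumed number of intervals
    cols = np.array_split(np.arange(gx.shape[1]), n_intervals)
    means = np.array([gx[:, c].mean() for c in cols])

    # quantize each interval's mean gradient to a digit 0..9
    lo, hi = means.min(), means.max()
    digits = np.round(9 * (means - lo) / (hi - lo + 1e-12)).astype(int)
    print("".join(map(str, digits)))             # digit string for the barcode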

1993
Osamu Tatebe

A multigrid method, used as a preconditioner of the PCG method, is proposed. The multigrid method has inherent high parallelism and improves the convergence of long-wavelength components, which is important in iterative methods. By using this method as a preconditioner of the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, a necessary condition...
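
A minimal preconditioned conjugate gradient loop for reference. A full multigrid V-cycle is too long to inline, so the precondition() hook below applies a single Jacobi sweep and simply marks the slot where the multigrid solve of M z = r would go.

    import numpy as np

    n = 50                                       # 1-D Laplacian test matrix
    A = (np.diag(2.0 * np.ones(n))
         + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1))
    b = np.ones(n)

    def precondition(r):
        return r / np.diag(A)    # Jacobi; a multigrid V-cycle would go here

    x = np.zeros(n)
    r = b - A @ x
    z = precondition(r)
    p = z.copy()
    for k in range(200):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-10:
            break
        z_new = precondition(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    print(np.linalg.norm(A @ x - b))             # residual after PCG

Swapping precondition() for a multigrid cycle is exactly the substitution that turns this plain PCG loop into the preconditioner the abstract describes.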

2008
R. J. Renka

The most effective methods for finding object boundaries in a digital image involve minimizing a functional over a set of curves or surfaces, where the functional includes internal energy terms for regularization and external energy terms that align the curves or surfaces with object boundaries. Current practice is to seek critical points of the energy functional by what amounts to a steepest ...
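
A minimal steepest-descent sketch on a discrete curve energy of the kind described: an internal smoothness term plus an external potential P standing in for the image-alignment term (here P is minimized on the unit circle). This illustrates the baseline critical-point search, not the paper's own method; the energy, weights, and step size are assumptions.

    import numpy as np

    def P(v):                                    # synthetic external term
        return np.sum((np.hypot(v[:, 0], v[:, 1]) - 1.0) ** 2)

    def energy(v, alpha=0.1):                    # internal + external energy
        internal = alpha * np.sum((np.roll(v, -1, axis=0) - v) ** 2)
        return internal + P(v)

    def num_grad(v, eps=1e-6):                   # finite-difference gradient
        g = np.zeros_like(v)
        for idx in np.ndindex(*v.shape):
            dv = np.zeros_like(v)
            dv[idx] = eps
            g[idx] = (energy(v + dv) - energy(v - dv)) / (2 * eps)
        return g

    theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
    v = 1.5 * np.c_[np.cos(theta), np.sin(theta)]   # initial closed curve
    for k in range(200):
        v -= 0.05 * num_grad(v)                     # steepest-descent step
    print(np.hypot(v[:, 0], v[:, 1]).mean())        # radii settle near 1.0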

2000
Kurt Konolige

Despite many decades of research into mobile robot control, reliable, high-speed motion in complicated, uncertain environments remains an unachieved goal. In this paper we present a solution to real-time motion control that can competently maneuver a robot at optimal speed even as it explores a new region or encounters new obstacles. The method uses a navigation function to generate a gradient f...
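
A minimal grid-world sketch of the navigation-function idea: a wavefront (BFS) pass assigns each free cell its distance to the goal, and the robot then descends that field by stepping to the lowest-valued neighbor. The map, start, and goal are made up, and the paper's real-time controller is much richer than this.

    from collections import deque

    grid = ["........",
            "..####..",
            "..#.....",
            "..#.##..",
            "........"]
    H, W = len(grid), len(grid[0])
    goal, start = (0, 7), (4, 0)

    dist = {goal: 0}                         # navigation function via BFS
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nb[0] < H and 0 <= nb[1] < W
                    and grid[nb[0]][nb[1]] == "." and nb not in dist):
                dist[nb] = dist[(r, c)] + 1
                q.append(nb)

    path, cur = [start], start
    while cur != goal:                       # follow the negative "gradient"
        r, c = cur
        cur = min(((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)),
                  key=lambda p: dist.get(p, float("inf")))
        path.append(cur)
    print(path)                              # obstacle-free path to the goal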

2014
Leon Wenliang Zhong, James T. Kwok

Regularized risk minimization often involves nonsmooth optimization. This can be particularly challenging when the regularizer is a sum of simpler regularizers, as in the overlapping group lasso. Very recently, this has been alleviated by using the proximal average, in which an implicitly nonsmooth function is employed to approximate the composite regularizer. In this paper, we propose a novel extens...
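
A minimal sketch of the proximal-average device the abstract refers to, on an assumed least-squares loss with the regularizer taken as the average of an l1 term and a squared-l2 term: the prox of the proximal average is just the average of the two individual prox maps, so each proximal gradient step stays cheap.

    import numpy as np

    def prox_l1(v, t):        # prox of t*||.||_1 (soft thresholding)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_l2sq(v, t):      # prox of t*0.5*||.||_2^2 (shrinkage)
        return v / (1.0 + t)

    rng = np.random.default_rng(3)
    A = rng.normal(size=(30, 10))
    b = rng.normal(size=30)
    lam = 0.5
    eta = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1/L

    x = np.zeros(10)
    for k in range(500):
        v = x - eta * (A.T @ (A @ x - b))        # gradient step on the loss
        # prox of the proximal average of (l1 + 0.5*l2^2)/2
        x = 0.5 * (prox_l1(v, eta * lam) + prox_l2sq(v, eta * lam))
    print(np.round(x, 3))                        # regularized solution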

2014
Tamio Koyama, Hiromasa Nakayama, Katsuyoshi Ohara, Tomonari Sei, Nobuki Takayama

We present software packages for the holonomic gradient method (HGM). These packages compute normalizing constants and the probabilities of some regions. While many algorithms that compute integrals over high-dimensional regions utilize the Monte Carlo method, our HGM utilizes algorithms for solving ordinary differential equations, such as the Runge-Kutta-Fehlberg method. As a result, our HGM c...
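
A minimal HGM-style computation for illustration: the normalizing constant Z(t) = integral over [0, 2*pi] of exp(t*cos(s)) ds equals 2*pi*I_0(t) and satisfies the holonomic ODE t*Z'' + Z' - t*Z = 0, so instead of integrating over the domain one can evolve (Z, Z') from a point where Z is known, using a Runge-Kutta solver (RK45, a close relative of Runge-Kutta-Fehlberg). This shows the underlying idea, not one of the paper's packages.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.special import iv

    def rhs(t, y):            # y = (Z, Z'); from t*Z'' + Z' - t*Z = 0
        Z, dZ = y
        return [dZ, Z - dZ / t]

    t0, t1 = 1.0, 10.0
    y0 = [2 * np.pi * iv(0, t0), 2 * np.pi * iv(1, t0)]  # I_0' = I_1
    sol = solve_ivp(rhs, (t0, t1), y0, rtol=1e-10, atol=1e-12)
    print(sol.y[0, -1])                      # HGM value of Z(10)
    print(2 * np.pi * iv(0, t1))             # exact value for comparison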

Journal: CoRR, 2015
Masayuki Ohzeki

In this paper, we propose a novel technique to implement stochastic gradient methods, which are beneficial for learning from large datasets, through accelerated stochastic dynamics. A stochastic gradient method is based on mini-batch learning for reducing the computational cost when the amount of data is large. The stochasticity of the gradient can be mitigated by the injection of Gaussian nois...
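
A minimal mini-batch stochastic gradient baseline with injected Gaussian noise (a Langevin-style step), i.e. the starting point the abstract describes; the paper's accelerated stochastic dynamics are not reproduced. The data, model, and constants are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    N, d = 10000, 5
    X = rng.normal(size=(N, d))              # synthetic regression data
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=N)

    w, eta, batch, temp = np.zeros(d), 0.01, 64, 1e-4
    for k in range(2000):
        idx = rng.integers(0, N, size=batch)             # mini-batch
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch  # stochastic gradient
        noise = np.sqrt(2 * eta * temp) * rng.normal(size=d)
        w = w - eta * grad + noise                       # SGD + Gaussian noise
    print(np.linalg.norm(w - w_true))        # close to the true weights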

Chart: number of search results per year
