Search results for: variable stepsize

Number of results: 259826

Journal: CoRR 2017
Zhi Li, Ming Yan

We consider a primal-dual algorithm for minimizing f(x) + h(Ax) with differentiable f. The primal-dual algorithm has two names in the literature: the Primal-Dual Fixed-Point algorithm based on the Proximity Operator (PDFP²O) and the Proximal Alternating Predictor-Corrector (PAPC). In this paper, we extend it to solve f(x) + (h □ l)(Ax) with differentiable l and prove its convergence under a weak condition (i.e...
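Below is a minimal sketch of one PAPC/PDFP²O-style iteration for the simpler problem min_x f(x) + h(Ax), run on an illustrative least-squares-plus-l1 test problem. The data, the choice h = lam*||.||_1, and the conservative stepsizes (tau <= 1/L, tau*sigma*||A||^2 <= 1) are assumptions for the example; the paper's extension and its weakened stepsize condition are not reproduced here.

```python
# A minimal sketch of a PAPC / PDFP^2O-style primal-dual iteration for
# min_x f(x) + h(Ax), illustrated on f(x) = 0.5*||x - c||^2 and h = lam*||.||_1.
# The problem data (A, c, lam) and the stepsizes tau, sigma are illustrative
# choices, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)
lam = 0.1

grad_f = lambda x: x - c                      # gradient of the smooth term
prox_hstar = lambda y: np.clip(y, -lam, lam)  # prox of h* (h = lam*||.||_1)

L_f = 1.0                                        # Lipschitz constant of grad f
tau = 1.0 / L_f                                  # primal stepsize (tau <= 1/L_f)
sigma = 1.0 / (tau * np.linalg.norm(A, 2) ** 2)  # dual stepsize: tau*sigma*||A||_2^2 <= 1

x = np.zeros(n)
y = np.zeros(m)
for _ in range(500):
    x_pred = x - tau * (grad_f(x) + A.T @ y)      # predictor step
    y = prox_hstar(y + sigma * (A @ x_pred))      # dual proximal step
    x = x - tau * (grad_f(x) + A.T @ y)           # corrector step

print("objective:", 0.5 * np.sum((x - c) ** 2) + lam * np.abs(A @ x).sum())
```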

Journal: Math. Comput. 2005
L. Ferracina, M. N. Spijker

In the context of solving nonlinear partial differential equations, Shu and Osher introduced representations of explicit Runge-Kutta methods, which lead to stepsize conditions under which the numerical process is total-variation-diminishing (TVD). Much attention has been paid to these representations in the literature. In general, a Shu-Osher representation of a given Runge-Kutta method is not u...
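As an illustration, here is the classical third-order SSP (TVD) Runge-Kutta scheme written in its Shu-Osher form, i.e. as convex combinations of forward-Euler steps, applied to 1-D linear advection with upwind differences. The grid, CFL number and initial data are arbitrary choices for the sketch, not taken from the paper.

```python
# A minimal sketch of the classical third-order SSP (TVD) Runge-Kutta scheme
# in Shu-Osher form, applied to u_t + a u_x = 0 with first-order upwind
# differences on a periodic grid. Grid size, CFL number and initial data are
# illustrative choices only.
import numpy as np

nx, a = 200, 1.0                  # grid points, advection speed
dx = 1.0 / nx
x = np.arange(nx) * dx
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square-wave initial data

def F(u):
    """Upwind spatial operator for u_t + a u_x = 0 (periodic, a > 0)."""
    return -a * (u - np.roll(u, 1)) / dx

cfl = 0.8                 # the SSP coefficient of this scheme is 1, so the
dt = cfl * dx / a         # TVD stepsize bound is the same as forward Euler's

for _ in range(100):
    u1 = u + dt * F(u)                             # forward-Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * F(u1))       # convex combination
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * F(u2))    # convex combination

print("total variation:", np.abs(np.diff(u)).sum())  # stays bounded by the data's TV
```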

Journal: SIAM Journal on Optimization 2014
Angelia Nedic, Soomin Lee

This paper considers a stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate-averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is in the choice of weights that are used to construct the averages. Through the use o...
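A minimal sketch of this kind of method, assuming an entropy mirror map over the probability simplex and a simple stochastic least-squares objective; the stepsize sequence and the averaging weights below are illustrative placeholders rather than the specific weights analyzed in the paper.

```python
# A minimal sketch of stochastic (sub)gradient mirror descent over the
# probability simplex (entropic mirror map / exponentiated gradient) with
# weighted iterate averaging. Objective, stepsizes and averaging weights are
# illustrative choices, not the ones studied in the paper.
import numpy as np

rng = np.random.default_rng(1)
n = 10
A = rng.standard_normal((1000, n))
b = A @ (np.ones(n) / n) + 0.01 * rng.standard_normal(1000)

def stochastic_grad(x):
    i = rng.integers(len(b))                 # sample one data point
    return 2.0 * (A[i] @ x - b[i]) * A[i]    # unbiased gradient estimate

x = np.ones(n) / n                           # start at the simplex center
x_avg, w_sum = np.zeros(n), 0.0
for k in range(1, 5001):
    gamma = 1.0 / np.sqrt(k)                 # diminishing stepsize
    g = stochastic_grad(x)
    x = x * np.exp(-gamma * g)               # entropic mirror-descent step
    x /= x.sum()                             # stays on the simplex
    w = gamma                                # illustrative averaging weight
    x_avg = (w_sum * x_avg + w * x) / (w_sum + w)
    w_sum += w

print("averaged iterate:", np.round(x_avg, 3))
```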

Journal: Math. Comput. 2011
Willem Hundsdorfer, M. N. Spijker

In the literature, much attention has been paid to Runge-Kutta methods (RKMs) satisfying special nonlinear stability requirements indicated by the terms total-variation-diminishing (TVD), strong stability preserving (SSP) and monotonicity. Stepsize conditions guaranteeing these properties were derived by Shu and Osher [J. Comput. Phys., 77 (1988), pp. 439-471] and in numerous subsequent papers...

Journal: Math. Program. 2011
Angelia Nedic

This paper deals with iterative gradient and subgradient methods with random feasibility steps for solving constrained convex minimization problems, where the constraint set is specified as the intersection of possibly infinitely many constraint sets. Each constraint set is assumed to be given as a level set of a convex but not necessarily differentiable function. The proposed algorithms are ap...
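A minimal sketch of the general pattern, assuming the constraint sets are halfspaces: each iteration combines a gradient step on the objective with a projection-type feasibility step onto one randomly selected constraint set. The problem data and stepsizes are illustrative, and the update is not the exact one analyzed in the paper.

```python
# A minimal sketch of a gradient method with random feasibility steps: take a
# gradient step, then project onto one randomly chosen constraint set, here a
# halfspace a_i . x <= b_i. Data and stepsizes are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 50
A = rng.standard_normal((m, n))              # constraint normals
b = np.abs(rng.standard_normal(m)) + 0.5     # chosen so that x = 0 is feasible
c = rng.standard_normal(n)

grad = lambda x: x - c                       # objective 0.5*||x - c||^2

x = np.zeros(n)
for k in range(1, 2001):
    alpha = 1.0 / k                          # diminishing gradient stepsize
    y = x - alpha * grad(x)                  # gradient step
    i = rng.integers(m)                      # pick one constraint at random
    viol = A[i] @ y - b[i]
    if viol > 0:                             # feasibility (projection) step
        y = y - viol / (A[i] @ A[i]) * A[i]  # project onto the violated halfspace
    x = y

print("max constraint violation:", np.maximum(A @ x - b, 0).max())
```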

2010
Ermin Wei, Asuman Ozdaglar, Ali Jadbabaie

Most existing work uses dual decomposition and first-order methods to solve Network Utility Maximization (NUM) problems in a distributed manner, but these approaches suffer from slow convergence. This paper develops an alternative distributed Newton-type, fast-converging algorithm for solving NUM problems with self-concordant utility functions. By using novel matrix splitting techniques, bot...

Journal: Computers & OR 2012
Anthony Chen, Zhong Zhou, Xiangdong Xu

The gradient projection (GP) algorithm has been shown to be efficient for solving the traditional traffic equilibrium problem with additive route costs. Recently, GP has been extended to solve the nonadditive traffic equilibrium problem (NaTEP), in which the cost incurred on each route is not just a simple sum of the link costs on that route. However, choosing an appropriate stepsize, whi...
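To show where a stepsize rule enters a gradient projection iteration, here is a minimal sketch of GP with an Armijo-type backtracking stepsize on a box-constrained quadratic. This only illustrates the stepsize issue; it is not the traffic-equilibrium GP algorithm discussed in the paper.

```python
# A minimal sketch of a gradient projection step with an Armijo-type
# backtracking stepsize on a simple box-constrained quadratic. The problem
# data and the backtracking parameters are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)
n = 20
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)                              # positive definite Hessian
c = rng.standard_normal(n)
lo, hi = 0.0, 1.0                                    # box constraints

f = lambda x: 0.5 * x @ Q @ x + c @ x
grad = lambda x: Q @ x + c
proj = lambda x: np.clip(x, lo, hi)

x = proj(np.zeros(n))
for _ in range(200):
    g = grad(x)
    step = 1.0                                       # initial trial stepsize
    while True:                                      # backtracking (Armijo) rule
        x_new = proj(x - step * g)
        if f(x_new) <= f(x) + 1e-4 * g @ (x_new - x):
            break
        step *= 0.5
    x = x_new

print("projected-gradient residual:", np.linalg.norm(x - proj(x - grad(x))))
```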

Journal: Current Biology 2004
R. A. Cross

A new optical trapping study shows that the stepsize of cytoplasmic dynein varies according to the applied force, suggesting that this motor can change gear. Complementary biochemical kinetic work on yeast dynein mutants hints at the allosteric mechanisms involved.

Journal: Signal Processing 2010
Hicham Ghennioui, Nadège Thirion-Moreau, Eric Moreau, Driss Aboutajdine

This article addresses the problem of the non-unitary joint block diagonalization of a given set of complex matrices. Two new algorithms are provided: the first is based on a classical gradient approach and the second on a relative gradient approach. For each algorithm, two versions are provided: a fixed-stepsize version and an optimal-stepsize version. Computer simulations are provided to ...
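The fixed-versus-optimal-stepsize distinction can be illustrated on a plain quadratic, where the exact line-search stepsize has a closed form. The sketch below is only that illustration; it does not reproduce the joint block diagonalization criterion or its (relative) gradient.

```python
# A minimal sketch contrasting a fixed stepsize with an "optimal" (exact line
# search) stepsize for gradient descent on a quadratic. It only illustrates the
# fixed-vs-optimal-stepsize distinction mentioned in the abstract.
import numpy as np

rng = np.random.default_rng(4)
n = 30
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)
b = rng.standard_normal(n)
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

def run(optimal_step, iters=100):
    x = np.zeros(n)
    mu_fixed = 1.0 / np.linalg.eigvalsh(Q).max()    # safe fixed stepsize 1/L
    for _ in range(iters):
        g = grad(x)
        if optimal_step:
            mu = (g @ g) / (g @ Q @ g)              # exact line search on a quadratic
        else:
            mu = mu_fixed
        x = x - mu * g
    return f(x)

print("fixed stepsize   :", run(False))
print("optimal stepsize :", run(True))
```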

Journal: SIAM Journal on Optimization 2017
Max L. N. Gonçalves, Jefferson G. Melo, Renato D. C. Monteiro

This paper describes a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. It is shown that the pointwise iteration-complexity of the new method is better than the corresponding one for the standard ADMM method and that, up to a logarithmic term, it is identical to the ergodic iteration-complexity of the latter method. Our...
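For reference, a minimal sketch of the standard (unregularized) ADMM iteration on a linearly constrained convex program, here the lasso split as min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x - z = 0. The penalty rho and regularization lam are illustrative choices, and this is the baseline scheme rather than the regularized variant proposed in the paper.

```python
# A minimal sketch of the standard ADMM iteration (scaled dual form) for
# min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x - z = 0.
# rho and lam are illustrative; this is not the paper's regularized variant.
import numpy as np

rng = np.random.default_rng(5)
m, n = 60, 100
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam, rho = 0.1, 1.0

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
lhs = np.linalg.inv(A.T @ A + rho * np.eye(n))      # cached x-update system

x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)     # u is the scaled dual variable
for _ in range(300):
    x = lhs @ (A.T @ b + rho * (z - u))             # x-minimization step
    z = soft(x + u, lam / rho)                      # z-minimization (prox of lam*||.||_1)
    u = u + x - z                                   # scaled dual ascent step

print("primal residual ||x - z||:", np.linalg.norm(x - z))
```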
