Search results for: zeroth order deformation equation

Number of results: 1160510

Journal: Symmetry 2021

Materials with nanoscale phase separation are considered. A system representing a heterophase mixture of ferromagnetic and paramagnetic phases is studied. After averaging over configurations, a renormalized Hamiltonian is derived describing the coexisting phases. The phases are characterized by direct exchange interactions and an external magnetic field. The properties of the system are studied numerically. The stability conditions define ...

Journal: IEEE Transactions on Information Forensics and Security 2022

We develop a privacy-preserving distributed algorithm to minimize a regularized empirical risk function when first-order information is not available and the data are distributed over a multi-agent network. We employ a zeroth-order method to minimize the associated augmented Lagrangian in the primal domain using the alternating direction method of multipliers (ADMM). We show that the proposed algorithm, named distributed zeroth-order ADMM (D-ZOA), has intrinsic privacy-preserving properties. Most ex...
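
This abstract pairs a zeroth-order (function-value only) oracle with ADMM. The sketch below is not the paper's D-ZOA and ignores the distributed and privacy aspects; it only illustrates how a two-point zeroth-order estimate can stand in for the gradient in the primal update of ADMM on a toy lasso-type problem. The objective, step sizes, and smoothing radius are assumptions made for the example.

```python
import numpy as np

def two_point_grad(f, x, h=1e-4):
    """Two-point zeroth-order gradient estimate along a random Gaussian direction."""
    u = np.random.randn(*x.shape)
    return (f(x + h * u) - f(x - h * u)) / (2.0 * h) * u

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def zeroth_order_admm(f, dim, lam=0.1, rho=1.0, outer=200, inner=20, step=0.05):
    """Minimize f(x) + lam*||x||_1 via ADMM, where the x-update uses only
    zeroth-order (function-value) information on the augmented Lagrangian."""
    x = np.zeros(dim); z = np.zeros(dim); u = np.zeros(dim)
    for _ in range(outer):
        # x-update: a few zeroth-order gradient steps on f(x) + (rho/2)||x - z + u||^2
        aug = lambda v: f(v) + 0.5 * rho * np.linalg.norm(v - z + u) ** 2
        for _ in range(inner):
            x = x - step * two_point_grad(aug, x)
        # z-update: proximal step for the l1 term (closed form)
        z = soft_threshold(x + u, lam / rho)
        # dual update
        u = u + x - z
    return z

# Usage on a toy least-squares problem, accessed only through function values
np.random.seed(0)
A = np.random.randn(30, 10)
b = np.random.randn(30)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
print(zeroth_order_admm(f, dim=10))
```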

Journal: Optimization Letters 2021

In this paper, we prove new complexity bounds for zeroth-order methods in non-convex optimization with inexact observations of the objective function values. We use the Gaussian smoothing approach of Nesterov and Spokoiny (Found Comput Math 17(2): 527–566, 2015. https://doi.org/10.1007/s10208-015-9296-2) and extend their results, obtained for smooth problems, to the setting of minimization of functions with Hölder-continuou...
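
As a hedged illustration of the ingredients named here, the snippet below implements a Gaussian-smoothing gradient estimator in the spirit of Nesterov and Spokoiny and uses it in a plain zeroth-order descent loop. The additive noise term is a crude stand-in for "inexact observations of the objective function values", and the test function, smoothing parameter mu, and step size are arbitrary choices, not the paper's.

```python
import numpy as np

def gaussian_smoothing_grad(f, x, mu=1e-3, num_samples=10, noise=0.0, rng=None):
    """Estimate the gradient of the Gaussian smoothing f_mu(x) = E_u[f(x + mu*u)],
    u ~ N(0, I), via forward differences; `noise` models an inexact value oracle."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)
        fx = f(x) + noise * rng.standard_normal()
        fxu = f(x + mu * u) + noise * rng.standard_normal()
        g += (fxu - fx) / mu * u
    return g / num_samples

# Zeroth-order descent on a smooth non-convex test function
f = lambda x: np.sum(x**2) + 0.5 * np.sin(5 * x[0])
x = np.ones(5)
for _ in range(500):
    x -= 0.02 * gaussian_smoothing_grad(f, x, mu=1e-2, noise=1e-4)
print(f(x))
```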

The present note is devoted to establishing some extremal results for the zeroth-order general Randić index of cacti, to characterizing the extremal polyomino chains with respect to the aforementioned index, and hence to generalizing two already reported results.
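
For readers unfamiliar with the invariant: the zeroth-order general Randić index of a graph G is the sum of d(v)^α over all vertices v, where d(v) denotes the degree of v. The small sketch below computes it from an adjacency list; the example graph (a 4-cycle with one pendant vertex) and the exponent α = −0.5 are arbitrary choices for illustration.

```python
def zeroth_order_general_randic(adj, alpha):
    """Zeroth-order general Randic index: sum of d(v)**alpha over all vertices,
    where d(v) is the degree and adj maps vertex -> list of neighbours."""
    return sum(len(neigh) ** alpha for neigh in adj.values())

# Example: a 4-cycle 0-1-2-3-0 with a pendant vertex 4 attached to vertex 3
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0, 4], 4: [3]}
print(zeroth_order_general_randic(adj, alpha=-0.5))
```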

Journal: SIAM Journal on Applied Mathematics 2014
Florian Schneider, Graham W. Alldredge, Martin Frank, Axel Klar

We study mixed-moment models (full zeroth moment, half higher moments) for a Fokker–Planck equation in one space dimension. Mixed-moment minimum-entropy models are known to overcome the zero net-flux problem of full-moment minimum-entropy Mn models. A realizability theory for these mixed moments of arbitrary order is derived, as well as a new closure, which we refer to as Kershaw closure. They ...
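
For orientation, and with notation assumed here rather than taken from the paper: for a kinetic density ψ(μ) with angular variable μ ∈ [−1, 1], the quantities referred to as the full zeroth moment and the half moments of order k ≥ 1 are typically

```latex
\psi^{(0)} = \int_{-1}^{1} \psi(\mu)\,\mathrm{d}\mu, \qquad
\psi^{(k)}_{+} = \int_{0}^{1} \mu^{k}\,\psi(\mu)\,\mathrm{d}\mu, \qquad
\psi^{(k)}_{-} = \int_{-1}^{0} \mu^{k}\,\psi(\mu)\,\mathrm{d}\mu, \qquad k \ge 1 .
```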

Journal: Computational Optimization and Applications 2021

In this paper, we consider stochastic weakly convex optimization problems, but without the existence of a subgradient oracle. We present a derivative-free algorithm that uses a two-point approximation for computing a gradient estimate of the smoothed function. We prove convergence at a similar rate as state-of-the-art methods, with a larger constant, and report some numerical results showing the effectiveness of the ap...
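
A minimal sketch of the two-point idea mentioned in this abstract, not the authors' algorithm: the same stochastic sample is evaluated at x + δu and x − δu to form a gradient estimate of the smoothed function, which then drives a plain stochastic update. The weakly convex test problem (robust phase retrieval), the step-size schedule, and the smoothing radius δ are assumptions made for the example.

```python
import numpy as np

def two_point_estimate(F, x, xi, delta=1e-3, rng=None):
    """Two-point gradient estimate of the smoothed objective: the stochastic sample
    xi is held fixed while F is probed at x + delta*u and x - delta*u."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (F(x + delta * u, xi) - F(x - delta * u, xi)) / (2.0 * delta) * u

# Toy weakly convex objective: robust phase retrieval F(x, i) = |<a_i, x>^2 - b_i|
rng = np.random.default_rng(1)
x_true = rng.standard_normal(5)
A = rng.standard_normal((200, 5))
b = (A @ x_true) ** 2
F = lambda x, i: abs(np.dot(A[i], x) ** 2 - b[i])

x = rng.standard_normal(5)
for t in range(2000):
    i = rng.integers(len(b))
    x -= 0.01 / np.sqrt(t + 1) * two_point_estimate(F, x, i, rng=rng)
print(np.mean([F(x, i) for i in range(len(b))]))
```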

1994
Marcello Lissia

Within the framework of the operator product expansion (OPE) and the renormalization group equation (RGE), we show that the temperature and chemical potential dependence of the zeroth moment of a spectral function (SF) is completely determined by the one-loop structure of an asymptotically free theory. This exact result constrains the shape of SFs, and implies a highly non-trivial functional f...
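
For orientation only (notation assumed here, not taken from the paper), the zeroth moment in question is the frequency integral of the spectral function at fixed temperature T and chemical potential μ:

```latex
W_{0}(T,\mu) \;=\; \int_{-\infty}^{\infty} \rho(\omega; T, \mu)\,\mathrm{d}\omega .
```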

2016
Xiangru Lian, Huan Zhang, Cho-Jui Hsieh, Yijun Huang, Ji Liu

Asynchronous parallel optimization has received substantial success and extensive attention recently. One of the core theoretical questions is how much speedup (or benefit) the asynchronous parallelization can bring to us. This paper provides a comprehensive and generic analysis to study the speedup property for a broad range of asynchronous parallel stochastic algorithms from the zeroth order to the...
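
To make the setting concrete, here is a serial simulation (an assumption-laden sketch, not the paper's analysis) of an asynchronous zeroth-order stochastic update: each step is computed from a stale copy of the iterate, with staleness bounded by max_delay, mimicking workers that read and write a shared parameter without locking.

```python
import numpy as np

def zo_grad(f, x, mu=1e-3, rng=None):
    """One-sample zeroth-order gradient estimate from two function evaluations."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def async_zo_sgd(f, dim, max_delay=3, steps=2000, lr=0.01, seed=0):
    """Serial simulation of asynchronous zeroth-order SGD: each update is formed
    from a stale iterate (delay <= max_delay) but applied to the current one."""
    rng = np.random.default_rng(seed)
    x = np.ones(dim)
    history = [x.copy()]
    for _ in range(steps):
        delay = rng.integers(0, min(max_delay, len(history))) + 1
        stale_x = history[-delay]                    # worker read an old iterate
        x = x - lr * zo_grad(f, stale_x, rng=rng)    # update applied to the current iterate
        history.append(x.copy())
    return x

f = lambda x: np.sum((x - 2.0) ** 2)
print(f(async_zo_sgd(f, dim=10)))
```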

2016
Alexandros Syrakos, Stylianos Varchanis, Yannis Dimakopoulos, Apostolos Goulas, John Tsamopoulos

The discretisation of the gradient operator is an important aspect of finite volume methods that has not received as much attention as the discretisation of other terms of partial differential equations. The most popular gradient schemes are the divergence theorem (or Green-Gauss) scheme, and the least-squares scheme. Both schemes are generally believed to be second-order accurate, but the pres...
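
As a hedged, one-dimensional illustration of the two schemes named here (not the paper's test cases), the snippet below compares a Green-Gauss (divergence-theorem) gradient with an unweighted least-squares gradient on a non-uniform finite-volume mesh for a field with a known gradient. The mesh grading, test field, and boundary handling are arbitrary choices for the example.

```python
import numpy as np

# 1-D non-uniform finite-volume mesh: face positions x_f, cell centres x_c
x_f = np.linspace(0.0, 1.0, 21) ** 1.5   # graded (non-uniform) mesh
x_c = 0.5 * (x_f[:-1] + x_f[1:])
phi = lambda x: x**2                      # test field with known gradient 2x
u = phi(x_c)

def green_gauss(u, x_c, x_f):
    """Green-Gauss gradient: difference of linearly interpolated face values
    divided by the cell width."""
    grad = np.zeros_like(u)
    for i in range(1, len(u) - 1):
        w_r = (x_f[i + 1] - x_c[i]) / (x_c[i + 1] - x_c[i])
        w_l = (x_f[i] - x_c[i - 1]) / (x_c[i] - x_c[i - 1])
        face_r = (1 - w_r) * u[i] + w_r * u[i + 1]
        face_l = (1 - w_l) * u[i - 1] + w_l * u[i]
        grad[i] = (face_r - face_l) / (x_f[i + 1] - x_f[i])
    return grad

def least_squares(u, x_c):
    """Unweighted least-squares gradient fitted to the two neighbouring cell centres."""
    grad = np.zeros_like(u)
    for i in range(1, len(u) - 1):
        dx = np.array([x_c[i - 1] - x_c[i], x_c[i + 1] - x_c[i]])
        du = np.array([u[i - 1] - u[i], u[i + 1] - u[i]])
        grad[i] = np.dot(dx, du) / np.dot(dx, dx)
    return grad

exact = 2 * x_c
print("Green-Gauss max error  :", np.max(np.abs(green_gauss(u, x_c, x_f) - exact)[1:-1]))
print("Least-squares max error:", np.max(np.abs(least_squares(u, x_c) - exact)[1:-1]))
```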

Journal: IEEE Transactions on Signal Processing 2022

Federated learning (FL), as an emerging edge artificial intelligence paradigm, enables many devices to collaboratively train a global model without sharing their private data. To enhance the training efficiency of FL, various algorithms have been proposed, ranging from first-order to second-order methods. However, these methods cannot be applied in scenarios where gradient information is not available, e....
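
A minimal sketch, not any specific algorithm from the paper, of how zeroth-order updates can slot into a FedAvg-style loop when clients cannot form gradients: each client refines the global model using only loss evaluations, and the server averages the returned models. The client losses, learning rate, and round counts are placeholders chosen for the example.

```python
import numpy as np

def zo_grad(f, x, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of f at x."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def federated_zo_training(client_losses, dim, rounds=50, local_steps=5, lr=0.05, seed=0):
    """FedAvg-style loop: each client performs zeroth-order (function-value only)
    local updates on its private loss; the server averages the returned models."""
    rng = np.random.default_rng(seed)
    global_x = np.zeros(dim)
    for _ in range(rounds):
        local_models = []
        for loss in client_losses:                   # each client trains locally
            x = global_x.copy()
            for _ in range(local_steps):
                x -= lr * zo_grad(loss, x, rng=rng)
            local_models.append(x)
        global_x = np.mean(local_models, axis=0)     # server aggregation
    return global_x

# Toy setup: each client holds a private quadratic loss with its own optimum
rng = np.random.default_rng(3)
targets = [rng.standard_normal(4) for _ in range(5)]
client_losses = [lambda x, t=t: np.sum((x - t) ** 2) for t in targets]
print(federated_zo_training(client_losses, dim=4))
```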

Chart: number of search results per year