Search results for: stationary points
Number of results: 319338
Assume that a stochastic process can be approximated, when some scale parameter gets large, by a fluid limit (also called a "mean field limit" or "hydrodynamic limit"). A common practice, often called the "fixed point approximation", consists of approximating the stationary behaviour of the stochastic process by the stationary points of the fluid limit. It is known that this may be incorrect in g...
Two variants of the extended Rosenbrock function are analyzed in order to find the stationary points. The first variant is shown to possess a single stationary point, the global minimum. The second variant has numerous stationary points for high dimensionality. A previously proposed method is shown to be numerically intractable, requiring arbitrary precision computation in many cases to enumera...
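As a quick illustration of the object being enumerated here: a stationary point is one where the gradient vanishes. A minimal numerical check for the standard extended Rosenbrock function (the coefficients below are the usual 100/1 form, an assumption, not necessarily the exact variant analyzed in the paper):

```python
import numpy as np

def rosenbrock_grad(x):
    """Gradient of the extended Rosenbrock function
    f(x) = sum_i 100*(x[i+1] - x[i]**2)**2 + (1 - x[i])**2."""
    g = np.zeros_like(x)
    # each x[i] appears in the i-th summand and (via x[i+1]) the (i-1)-th
    g[:-1] += -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

# The global minimum x = (1, ..., 1) is a stationary point: the gradient
# vanishes there exactly, while at e.g. the origin it does not.
print(np.linalg.norm(rosenbrock_grad(np.ones(5))))  # → 0.0
```

Enumerating *all* stationary points, as the abstract discusses, amounts to finding every root of this gradient system, which is far harder than verifying one candidate as above.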
This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number of iterations that depends only poly-logarithmically on dimension (i.e., it is almost "dimension-free"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are...
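The core idea can be sketched on a toy strict-saddle objective. This is only a schematic of the perturbation mechanism; the step size, thresholds, and test function below are illustrative assumptions, not the parameters analyzed in the paper:

```python
import numpy as np

def perturbed_gd(grad, x0, eta=0.01, g_thresh=1e-3, radius=0.1,
                 steps=2000, seed=0):
    """Plain gradient descent, except that whenever the gradient is small
    (a candidate saddle) a random perturbation is added, so the iterate
    can escape strict saddle points instead of stalling at them."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < g_thresh:
            d = rng.normal(size=x.shape)
            x = x + radius * d / np.linalg.norm(d)  # jump off the flat region
        else:
            x = x - eta * g
    return x

# f(x, y) = x**4/4 - x**2/2 + y**2 has a strict saddle at the origin and
# minima at (+-1, 0); plain GD started exactly at the origin never moves.
toy_grad = lambda v: np.array([v[0] ** 3 - v[0], 2.0 * v[1]])
x = perturbed_gd(toy_grad, [0.0, 0.0])
```

Started at the saddle, the perturbed iterates end up near one of the minima (±1, 0), which is the behaviour the "second-order stationary point" guarantee formalizes.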
A globally convergent homotopy method is defined that is capable of sequentially producing large numbers of stationary points of the multi-layer perceptron mean-squared error surface. Using this algorithm large subsets of the stationary points of two test problems are found. It is shown empirically that the MLP neural network appears to have an extreme ratio of saddle points compared to local m...
We consider two merit functions which can be used for solving the nonlinear complementarity problem via nonnegatively constrained minimization. One of the functions is the restricted implicit Lagrangian (Refs. 1-3), and the other appears to be new. We study the conditions under which a stationary point of the minimization problem is guaranteed to be a solution of the underlying complementarity ...
In this note we discuss the convergence of Newton’s method for minimization. We present examples in which the Newton iterates satisfy the Wolfe conditions and the Hessian is positive definite at each step and yet the iterates converge to a non-stationary point. These examples answer a question posed by Fletcher in his 1987 book Practical methods of optimization.
We study conditions under which line search Newton methods for nonlinear systems of equations and optimization fail due to the presence of singular non-stationary points. These points are not solutions of the problem and are characterized by the fact that the Jacobian or Hessian matrices are singular. It is shown that for systems of nonlinear equations the interaction between the Newton direction and...
The Perceptron is an adaptive linear combiner that has its output quantized to one of two possible discrete values, and it is the basic component of multilayer, feedforward neural networks. The least-mean-square (LMS) adaptive algorithm adjusts the internal weights to train the network to perform some desired function, such as pattern recognition. In this paper, we present an analysis of the...
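The LMS rule the abstract refers to is the classic stochastic update w ← w + μ·e·x with instantaneous error e = d − wᵀx. A minimal sketch on synthetic data (the step size and training set here are illustrative, not from the paper):

```python
import numpy as np

def lms_train(X, d, mu=0.05, epochs=50):
    """Least-mean-square adaptation: for each input x with desired
    output d, nudge the weights by mu * error * x."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x   # instantaneous error
            w += mu * e * x      # LMS weight update
    return w

# Recover a known linear combiner from consistent input/output pairs.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
w_true = np.array([0.5, -0.3])
w = lms_train(X, X @ w_true)
```

For consistent linear data and a small enough μ, the weights converge to the generating combiner; the mean-squared-error surface whose stationary points the listed MLP papers study is the multilayer generalization of this single-unit case.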
A scalar Allen-Cahn-MPEC problem is considered and a penalization technique is applied to show the existence of an optimal control. We show that the stationary points of the penalized problems converge to weak stationary points of the limit problem.