Search results for: augmented ε constraint method
Number of results: 1,743,918
The first- and second-order optimum achievable exponents in the simple hypothesis testing problem are investigated. The optimum achievable exponent for the type II error probability, under the constraint that the type I error probability is allowed asymptotically up to ε, is called the ε-optimum exponent. In this paper, we first give the second-order ε-optimum exponent in the case where the null hypo...
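As an illustration of the kind of result this line of work refines, here is the standard Strassen-type first- and second-order expansion for i.i.d. observations (the paper's exact setting, regularity conditions, and notation may differ):

\[
-\log \beta_n^{*}(\varepsilon)
  = n\,D(P_0 \| P_1)
  + \sqrt{n\,V(P_0 \| P_1)}\;\Phi^{-1}(\varepsilon)
  + O(\log n),
\]

where β_n^*(ε) is the smallest type II error probability achievable with type I error at most ε, D is the relative entropy between the null P_0 and the alternative P_1, V is the corresponding relative-entropy variance, and Φ^{-1} is the inverse standard Gaussian CDF; the dependence on ε enters only through the second-order √n term.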
This paper presents a computationally efficient method for solving generalized network problems with an additional linear constraint. The method is basically the primal simplex method specialized to exploit the topological structure of the problem. The method is similar to the specialization of Charnes and Cooper's Double Reverse Method by Meier, and Klingman and Russell for constrained pure ne...
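The specialized primal simplex machinery is not reproduced here, but the problem class itself can be sketched: a small network flow model with one extra linear side constraint, posed as an ordinary LP and handed to a general-purpose solver. The graph, costs, and side constraint below are made up for illustration, and SciPy's HiGHS backend stands in for the paper's specialized method.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny min-cost flow instance: nodes 0 (source), 1, 2, 3 (sink); one column per arc.
arcs = [(0, 1), (0, 2), (1, 3), (2, 3)]
cost = np.array([1.0, 2.0, 1.0, 2.0])        # per-unit arc costs (illustrative)
cap = np.array([8.0, 8.0, 8.0, 8.0])         # arc capacities (illustrative)

# Node-arc incidence matrix: +1 if the arc leaves the node, -1 if it enters it.
N = np.zeros((4, len(arcs)))
for j, (u, v) in enumerate(arcs):
    N[u, j], N[v, j] = 1.0, -1.0
supply = np.array([10.0, 0.0, 0.0, -10.0])   # ship 10 units from node 0 to node 3

# The additional linear constraint: at most 6 units may be routed through node 1.
A_side = np.array([[1.0, 0.0, 0.0, 0.0]])
b_side = np.array([6.0])

res = linprog(cost, A_ub=A_side, b_ub=b_side, A_eq=N, b_eq=supply,
              bounds=list(zip(np.zeros_like(cap), cap)), method="highs")
print(res.x, res.fun)   # optimal arc flows and total cost
```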
∫_a^b f(x) dx (which one would use for instance to compute the work required to move a particle from a to b). For simplicity we shall restrict attention here to functions f : R → R which are continuous on the entire real line (and similarly, when we come to differential forms, we shall only discuss forms which are continuous on the entire domain). We shall also informally use terminology such as “i...
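For concreteness, a minimal worked instance of such an integral (the linear force law F(x) = 2x and the interval are chosen purely for illustration): the work done by a force F along [a, b] is

\[
W = \int_a^b F(x)\,dx,
\qquad\text{e.g. } F(x) = 2x \text{ on } [0,3]:\quad
W = \int_0^3 2x\,dx = x^2\Big|_0^3 = 9.
\]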
In this paper, an augmented Lagrangian proximal alternating (ALPA) method is proposed for two classes of large-scale sparse discrete constrained optimization problems, in which a sequence of augmented Lagrangian subproblems is solved by utilizing a proximal alternating linearized minimization framework and sparse projection techniques. Under the Mangasarian-Fromovitz and the basic constraint qualifi...
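As a rough, illustrative sketch of the ingredients named here (an augmented Lagrangian outer loop, linearized proximal steps on the subproblem, and a sparse projection), the following numpy code treats a least-squares objective with a linear equality constraint and an ℓ0 sparsity bound; it uses a single-block projected-gradient inner solver rather than the paper's alternating (PALM-style) scheme, and all problem data and parameter choices are assumptions made for the sketch.

```python
import numpy as np

def hard_threshold(x, s):
    # Projection onto {x : ||x||_0 <= s}: keep the s largest-magnitude entries.
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def augmented_lagrangian_sparse(A, b, C, d, s, rho=10.0, outer=30, inner=200):
    # Minimize 0.5||Ax - b||^2  s.t.  Cx = d  and  ||x||_0 <= s  (a sketch, not ALPA itself).
    x = np.zeros(A.shape[1])
    lam = np.zeros(C.shape[0])
    L = np.linalg.norm(A, 2) ** 2 + rho * np.linalg.norm(C, 2) ** 2   # gradient Lipschitz constant
    for _ in range(outer):
        for _ in range(inner):
            grad = A.T @ (A @ x - b) + C.T @ (lam + rho * (C @ x - d))
            x = hard_threshold(x - grad / L, s)       # linearized step + sparse projection
        lam = lam + rho * (C @ x - d)                 # multiplier update
    return x

# Toy usage: recover a 5-sparse vector from random measurements plus a sum constraint.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))
x_true = np.zeros(60); x_true[:5] = 1.0
b = A @ x_true
C = np.ones((1, 60)); d = np.array([x_true.sum()])
print(augmented_lagrangian_sparse(A, b, C, d, s=5)[:6])
```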
Submodular-function maximization is a central problem in combinatorial optimization, generalizing many important NP-hard problems including Max Cut in digraphs, graphs and hypergraphs, certain constraint satisfaction problems, maximum-entropy sampling, and maximum facility-location problems. Our main result is that for any k ≥ 2 and any ε > 0, there is a natural local-search algorithm which has...
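The algorithm analyzed in the paper uses carefully chosen improvement thresholds (depending on ε) to obtain its guarantee; the sketch below is only the plain add/remove local-search idea, applied to a directed cut function (a standard non-monotone submodular objective) on a made-up toy digraph.

```python
def cut_value(weights, S):
    # Directed cut value: total weight of arcs leaving S -- a submodular set function.
    return sum(w for (u, v), w in weights.items() if u in S and v not in S)

def local_search(ground, f, eps=1e-9):
    # Repeatedly toggle a single element's membership while that strictly improves f.
    S = set()
    improved = True
    while improved:
        improved = False
        for v in ground:
            T = S ^ {v}
            if f(T) > f(S) + eps:
                S, improved = T, True
    return S

weights = {(0, 1): 3.0, (1, 2): 2.0, (2, 0): 1.5, (0, 2): 4.0}   # toy digraph
ground = {0, 1, 2}
S = local_search(ground, lambda T: cut_value(weights, T))
print(S, cut_value(weights, S))    # locally optimal cut and its value
```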
Necessary Optimality Conditions for Nonlinear Programming are discussed in the present research. A new Second-Order condition is given, which depends on a weak constant rank constraint requirement. We show that practical and publicly available algorithms (www.ime.usp.br/~egbirgin/tango) of Augmented Lagrangian type converge, after slight modifications, to stationary points defined by the new co...
This paper presents a decomposition method for solving convex minimization problems. At each iteration, the algorithm computes two proximal steps in the dual variables and one proximal step in the primal variables. We derive this algorithm from Rockafellar's proximal method of multipliers, which involves an augmented Lagrangian with an additional quadratic proximal term. The algorithm preserves...
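For context on the derivation mentioned here, the following is a minimal numpy sketch of Rockafellar's proximal method of multipliers on an equality-constrained quadratic program; the paper's further decomposition into two dual proximal steps and one primal proximal step is not reproduced, and the problem data and step parameters are illustrative.

```python
import numpy as np

def proximal_method_of_multipliers(Q, c, A, b, rho=1.0, tau=1.0, iters=200):
    # Minimize 0.5 x'Qx + c'x subject to Ax = b.  Each iteration minimizes the
    # augmented Lagrangian plus the proximal term (1/(2*tau))||x - x_k||^2,
    # then takes a dual ascent step on the multipliers.
    n = Q.shape[0]
    x, y = np.zeros(n), np.zeros(A.shape[0])
    H = Q + rho * A.T @ A + np.eye(n) / tau            # Hessian of the x-subproblem
    for _ in range(iters):
        rhs = -c - A.T @ y + rho * A.T @ b + x / tau
        x = np.linalg.solve(H, rhs)                    # proximal augmented-Lagrangian step
        y = y + rho * (A @ x - b)                      # multiplier update
    return x, y

# Toy usage: minimize 0.5||x||^2 subject to x1 + x2 = 1 (optimum x = [0.5, 0.5]).
Q, c = np.eye(2), np.zeros(2)
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
print(proximal_method_of_multipliers(Q, c, A, b)[0])
```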