Search results for: augmented ε constrained method
Number of results: 1744844
A hybrid algorithm is devised to boost the performance of complete search on under-constrained problems. We suggest using random variable selection in combination with restarts, augmented by a coarse-grained local search algorithm that learns favorable value heuristics over the course of several restarts. Numerical results show that this method can speed up complete search by orders of magnitude.
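To make the scheme concrete, here is a minimal, self-contained Python sketch on a toy graph-coloring CSP; the instance, the fail limit, the scoring rule, and all names (hybrid_search, local_search, backtrack) are illustrative assumptions, not the authors' implementation.

import random

# Toy CSP: 3-color a small graph. The instance, fail limit, and scoring
# rule are illustrative assumptions, not the authors' setup.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N_VARS, COLORS = 4, [0, 1, 2]

def conflicts(assign):
    return sum(1 for a, b in EDGES
               if a in assign and b in assign and assign[a] == assign[b])

def backtrack(assign, scores, fails, fail_limit):
    # Complete search with random variable selection; value ordering is
    # driven by the learned scores, and the run aborts at the fail limit.
    if fails[0] > fail_limit:
        return None
    if len(assign) == N_VARS:
        return dict(assign)
    var = random.choice([v for v in range(N_VARS) if v not in assign])
    for val in sorted(COLORS, key=lambda c: -scores.get((var, c), 0)):
        assign[var] = val
        if conflicts(assign) == 0:
            sol = backtrack(assign, scores, fails, fail_limit)
            if sol is not None:
                return sol
        del assign[var]
        fails[0] += 1
    return None

def local_search(steps=50):
    # Coarse-grained (min-conflicts style) local search whose only role
    # is to suggest which values look promising.
    assign = {v: random.choice(COLORS) for v in range(N_VARS)}
    for _ in range(steps):
        v = random.randrange(N_VARS)
        assign[v] = min(COLORS, key=lambda c: conflicts({**assign, v: c}))
    return assign

def hybrid_search(max_restarts=20, fail_limit=100):
    scores = {}  # (variable, value) -> accumulated preference
    for _ in range(max_restarts):
        sol = backtrack({}, scores, [0], fail_limit)
        if sol is not None:
            return sol
        for var, val in local_search().items():  # reward surviving values
            scores[(var, val)] = scores.get((var, val), 0) + 1
    return None

print(hybrid_search())

Note that the local search prunes nothing; it only biases the value ordering used by the complete search on later restarts.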
The optimization problem with convex constraints is one of the important and fundamental problems in optimization, and linear programming problems are a special case of it. Here, the nonlinear optimization problem with convex constraints is considered in the case where the convex constraints are simple bound constraints, i.e., only the problem variables are bounded from below and above. In recent decades, many methods (monotone and nonmonotone methods) have been proposed for solving ...
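For reference, the simply bound-constrained problem described above can be written as follows; the notation is assumed here rather than taken from the thesis:

\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad \ell_i \le x_i \le u_i, \quad i = 1, \dots, n,
\]

where f is the (possibly nonlinear) objective and \ell_i \le u_i are the given lower and upper bounds, so the feasible set is a box, the simplest kind of convex constraint set.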
We show how to exploit the structure inherent in the linear algebra for constrained nonlinear optimization problems when inequality constraints have been converted to equations by adding slack variables and the problem is solved using an augmented Lagrangian method. AMS Subject Classification: 65K05, 90C30
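For concreteness, a standard form of the transformation the abstract refers to (the paper's exact notation may differ): each inequality constraint c_i(x) \le 0 gains a slack variable, and the resulting equality-constrained problem is handled by an augmented Lagrangian,

\[
c_i(x) + s_i = 0, \quad s_i \ge 0, \qquad
L_A(x, s, \lambda; \mu) = f(x) + \sum_i \lambda_i \bigl( c_i(x) + s_i \bigr) + \frac{\mu}{2} \sum_i \bigl( c_i(x) + s_i \bigr)^2 .
\]

Because the slacks enter L_A only through the terms c_i(x) + s_i, they typically contribute simple diagonal blocks to the resulting Newton/KKT systems, which is the kind of structure such a method can exploit.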
In this paper we introduce and analyze the Log-Sigmoid (LS) multipliers method for constrained optimization. The LS method is to the recently developed smoothing technique what the augmented Lagrangian is to the penalty method, or the modified barrier to classical barrier methods. At the same time, the LS method has some specific properties which make it substantially different from other nonquadratic augm...
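As a hedged sketch of the kind of transformation involved (the exact scaling and sign conventions in the paper may differ), the log-sigmoid transform and the corresponding Lagrangian for constraints c_i(x) \ge 0 can be written as

\[
\psi(t) = 2 \ln \frac{2}{1 + e^{-t}}, \qquad \psi(0) = 0, \ \psi'(0) = 1,
\qquad
\mathcal{L}_k(x, \lambda) = f(x) - \frac{1}{k} \sum_i \lambda_i \, \psi\bigl( k \, c_i(x) \bigr),
\]

with the multipliers updated through \lambda_i^{+} = \lambda_i \, \psi'\bigl( k \, c_i(\hat{x}) \bigr) at an approximate minimizer \hat{x}, in the same spirit as the augmented Lagrangian multiplier update.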
We propose in this paper an inexact dual gradient algorithm based on augmented Lagrangian theory and inexact information on the values of the dual function and its gradient. We study the computational complexity certification of the proposed method and provide estimates of primal and dual suboptimality as well as primal infeasibility. We also discuss implementation aspects of the proposed algo...
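A minimal numerical sketch of the idea on a toy equality-constrained quadratic program; the problem data, penalty parameter, step sizes, inner iteration count, and tolerance are all assumptions for illustration, not values from the paper.

import numpy as np

# Inexact dual gradient scheme on an augmented Lagrangian:
# minimize 0.5*||x||^2 subject to A x = b (toy instance).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
mu = 1.0       # augmented Lagrangian penalty parameter
alpha = mu     # dual step size (classic method-of-multipliers choice)

def grad_aug_lagrangian(x, lam):
    # gradient of 0.5*||x||^2 + lam^T (A x - b) + 0.5*mu*||A x - b||^2
    return x + A.T @ (lam + mu * (A @ x - b))

x = np.zeros(6)
lam = np.zeros(3)
for _ in range(200):
    # Inexact inner solve: only a few gradient steps on the primal, so
    # the dual gradient A x - b below is computed only approximately.
    for _ in range(10):
        x -= 0.05 * grad_aug_lagrangian(x, lam)
    residual = A @ x - b        # inexact gradient of the dual function
    lam += alpha * residual     # dual gradient (ascent) update
    if np.linalg.norm(residual) < 1e-6:
        break

print("primal infeasibility:", np.linalg.norm(A @ x - b))

Because the inner minimization is truncated, the residual A x - b used in the dual step only approximates the true dual gradient, which is the kind of inexactness the abstract's complexity estimates account for.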
In this paper, we consider a class of structured nonsmooth difference-of-convex (DC) constrained DC programs in which the first convex component of the objective and constraints is the sum of a smooth function and a nonsmooth function, while their second convex component is the supremum of finitely many functions. The existing methods for this problem usually have weak convergence guarantees or require a feasible initial point. Inspired by the recent work of Pang et al. [Pang J-S, ...
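One generic way to write the problem class described above (the notation is assumed here, not copied from the paper):

\[
\min_{x \in \mathbb{R}^n} \; f_1(x) - f_2(x) \quad \text{s.t.} \quad g_i(x) - h_i(x) \le 0, \quad i = 1, \dots, m,
\]

where f_1 and the g_i are convex functions splitting into a smooth plus a nonsmooth term, and f_2 and the h_i are convex functions given as pointwise suprema of finitely many functions.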