Search results for: lagrangian method
Number of results: 1,646,303
In this article, we present a novel meshfree framework for fluid flow simulations on arbitrarily curved surfaces. First, we introduce a new Lagrangian approach to model the flow. Meshfree points or particles, which are used to discretize the domain, move in a Lagrangian sense along the given surface. This is done without discretizing the bulk around the surface, without parametrizing the surface, and without a background mesh. A key novelty introduced is the handling of surfaces with ev...
In this talk, we present a trust region method for solving equality constrained optimization problems, motivated by the famous augmented Lagrangian function. It differs from standard augmented Lagrangian methods, in which the augmented Lagrangian function is minimized at each iteration. Instead, for fixed Lagrange multiplier and penalty parameters, this method tries to minimize an approximate...
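The "standard augmented Lagrangian methods" this snippet contrasts itself against follow a well-known two-level loop: minimize the augmented Lagrangian in x, then update the multiplier. A minimal first-order sketch on a toy equality-constrained problem (the problem data, step sizes, and iteration counts below are illustrative assumptions, not from the talk):

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_grad, x0, lam0=0.0, rho=10.0,
                         outer=50, inner=200, lr=0.05):
    # Classic augmented Lagrangian loop:
    #   minimize L_A(x) = f(x) + lam*h(x) + (rho/2)*h(x)^2 over x,
    #   then update lam <- lam + rho*h(x).
    x, lam = np.asarray(x0, dtype=float), lam0
    for _ in range(outer):
        for _ in range(inner):  # inner minimization by plain gradient descent
            g = f_grad(x) + (lam + rho * h(x)) * h_grad(x)
            x = x - lr * g
        lam = lam + rho * h(x)  # multiplier (dual ascent) update
    return x, lam

# Toy problem: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 1 = 0
# (exact solution x* = (0, 1), multiplier lam* = 2)
f_grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] - 2)])
h      = lambda x: x[0] + x[1] - 1.0
h_grad = lambda x: np.array([1.0, 1.0])

x_star, lam_star = augmented_lagrangian(f_grad, h, h_grad, x0=[0.0, 0.0])
```

The trust region method in the talk replaces the exact inner minimization with an approximate subproblem; the loop above is only the baseline being compared against.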
Many contemporary signal processing, machine learning and wireless communication applications can be formulated as nonconvex nonsmooth optimization problems. Often there is a lack of efficient algorithms for these problems, especially when the optimization variables are nonlinearly coupled in some nonconvex constraints. In this work, we propose an algorithm named penalty dual decomposition (PDD...
We consider the inclusion of commitment of thermal generation units in the optimal management of the Brazilian power system. By means of Lagrangian relaxation we decompose the problem and obtain a nondifferentiable dual function that is separable. We solve the dual problem with a bundle method. Our purpose is twofold: first, bundle methods are the methods of choice in nonsmooth optimization whe...
This paper presents novel convergence results for the Augmented Lagrangian based Alternating Direction Inexact Newton method (ALADIN) in the context of distributed convex optimization. It is shown that ALADIN converges for a large class of convex optimization problems from any starting point to minimizers without needing line-search or other globalization routines. Under additional regularity a...
In this paper, an algorithm for sparse learning via Maximum Margin Matrix Factorization (MMMF) is proposed. The algorithm is based on an L1 penalty and the Alternating Direction Method of Multipliers (ADMM). Experiments show that, with sparse factors, the method can obtain results as good as those obtained with dense factors.
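In its generic form, combining an L1 penalty with ADMM yields the familiar soft-thresholding scheme. A minimal sketch on an ordinary lasso problem, not the MMMF formulation itself (the matrix-factorization subproblems of the paper are not reproduced here; all problem data below are made up for illustration):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (element-wise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    # ADMM for: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))  # cached for the x-update
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))          # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)   # L1 proximal step
        u = u + x - z                          # scaled dual update
    return z

# Tiny demo: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[1, 6]] = [3.0, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = admm_lasso(A, b, lam=0.5)
```

The z-variable is the one that becomes exactly sparse, because the soft-thresholding step zeroes out small entries; this mirrors the "sparse factors" the abstract refers to.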