Quantum gradient descent and Newton’s method for constrained polynomial optimization
Similar articles
A coordinate gradient descent method for linearly constrained smooth optimization and support vector machines training
Support vector machines (SVMs) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. We establis...
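The coordinate-wise idea can be sketched in a few lines. The code below is my own illustration, not the paper's algorithm: cyclic coordinate descent on a bound-constrained quadratic, minimizing each coordinate exactly and clipping it to its bounds. The paper's method additionally handles the single linear equality constraint arising in SVM training, which is omitted here.

```python
import numpy as np

def coordinate_gradient_descent(A, b, lo, hi, iters=200):
    """Cyclic coordinate descent for min 0.5 x'Ax - b'x  s.t.  lo <= x <= hi.
    Simplified sketch: each coordinate is minimized exactly, then clipped."""
    n = len(b)
    x = np.clip(np.zeros(n), lo, hi)
    for _ in range(iters):
        for i in range(n):
            # Exact minimizer along coordinate i, holding the others fixed,
            # then projected onto the box [lo_i, hi_i].
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = np.clip(r / A[i, i], lo[i], hi[i])
    return x

# Small SPD example with an active upper bound on the second coordinate.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
lo = np.zeros(2)
hi = np.array([1.0, 0.3])
x = coordinate_gradient_descent(A, b, lo, hi)
```

With the bound `x[1] <= 0.3` active, the remaining coordinate settles at `(1 - 0.3) / 3`, which matches the KKT conditions for this box-constrained QP.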
A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line searches. We use a steplength technique that ensures the Zoutendijk condition holds, and we prove the method to be globally convergent. Finally, we improve it and provide further analysis.
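To make the generic nonlinear conjugate gradient iteration concrete, here is a minimal sketch (my own illustration, not the paper's specific method): a PR+ direction update with a fixed steplength standing in for the Zoutendijk-type steplength rule the abstract refers to.

```python
import numpy as np

def nonlinear_cg(grad, x0, lr=0.1, iters=500):
    """Polak-Ribiere+ nonlinear conjugate gradient with a fixed step.
    Illustrative only; a practical method would use a proper steplength rule."""
    x = x0.copy()
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(iters):
        x = x + lr * d                      # fixed step replaces a line search
        g_new = grad(x)
        # PR+ coefficient; the max(0, .) clipping restarts the method with
        # the steepest-descent direction when the formula goes negative.
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d               # conjugate direction update
        g = g_new
    return x

# Minimize f(x) = 0.5 x'Ax - b'x for a small SPD matrix A,
# whose unique minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = nonlinear_cg(lambda x: A @ x - b, np.zeros(2))
```

On this quadratic the iterates converge to `np.linalg.solve(A, b)`; the PR+ clipping is what keeps the direction a descent direction even without exact line searches.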
Fast Gradient Descent Method for Mean-CVaR Optimization
We propose an iterative gradient descent procedure for computing approximate solutions for the scenario-based mean-CVaR portfolio selection problem. This procedure is based on an algorithm proposed by Nesterov [13] for solving non-smooth convex optimization problems. Our procedure does not require any linear programming solver and in many cases the iterative steps can be solved in closed form. ...
A Gradient Descent Method for Optimization of Model Microvascular Networks
Within animals, oxygen exchange occurs within networks containing potentially billions of microvessels that are distributed throughout the animal’s body. Innovative imaging methods now allow for mapping of the architecture and blood flows within real microvascular networks. However, these data streams have so far yielded little new understanding of the physical principles that underlie the orga...
A Hybrid Steepest Descent Method for Constrained Convex Optimization
This paper describes a hybrid steepest descent method to decrease over time any given convex cost function while keeping the optimization variables within any given convex set. The method exploits properties of hybrid systems to avoid computing projections or a dual optimum. Convergence to a global optimum is analyzed using Lyapunov stability arguments. A discretized imp...
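For contrast, the textbook baseline that this hybrid method is designed to avoid is projected gradient descent, where every step is followed by a projection back onto the feasible set. A minimal sketch (my own illustration, not the paper's method):

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, lr=0.1, iters=500):
    """Classical projected gradient descent: gradient step, then projection
    onto the feasible convex set. The per-step projection is exactly the
    computation the hybrid approach tries to eliminate."""
    x = project(x0)
    for _ in range(iters):
        x = project(x - lr * grad(x))
    return x

# Minimize f(x) = ||x - c||^2 over the unit ball, with c outside the ball.
c = np.array([2.0, 0.0])
grad = lambda x: 2.0 * (x - c)
project = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto unit ball
x_star = projected_gradient_descent(grad, project, np.zeros(2))
```

The constrained minimizer is the boundary point of the ball closest to `c`, here `(1, 0)`; projecting onto the unit ball is cheap, but for general convex sets each projection is itself an optimization problem.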
Journal
Journal title: New Journal of Physics
Year: 2019
ISSN: 1367-2630
DOI: 10.1088/1367-2630/ab2a9e