Linear convergence of Frank–Wolfe for rank-one matrix recovery without strong convexity

Authors

Abstract

We consider convex optimization problems which are widely used as convex relaxations for low-rank matrix recovery problems. In particular, in several important problems, such as phase retrieval and robust PCA, the underlying assumption in many cases is that the optimal solution is rank-one. In this paper we consider a simple and natural sufficient condition on the objective so that the optimal solution to these relaxed problems is indeed unique and rank-one. Mainly, we show that under this condition, the standard Frank–Wolfe method with line-search (i.e., without any tuning of parameters whatsoever), which only requires a single rank-one SVD computation per iteration, finds an $\epsilon$-approximated solution in only $O(\log{1/\epsilon})$ iterations (as opposed to the previous best known bound of $O(1/\epsilon)$), despite the fact that the objective is not strongly convex. We also consider several variants of the basic method with improved complexities, as well as an extension motivated by robust PCA, and finally, an extension of the method to nonsmooth objectives.
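A minimal sketch of the kind of method the abstract describes, assuming a least-squares recovery objective over a nuclear-norm ball (the objective, variable names, and problem shapes are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def frank_wolfe_nuclear_ball(A_ops, b, tau, iters=200):
    """Frank-Wolfe with exact line-search for the illustrative problem
        min_X  0.5 * sum_j (<A_j, X> - b_j)^2   s.t.  ||X||_* <= tau.
    A_ops has shape (m, n, n) and stacks the measurement matrices A_j."""
    n = A_ops.shape[1]
    X = np.zeros((n, n))
    for _ in range(iters):
        r = np.tensordot(A_ops, X, axes=([1, 2], [0, 1])) - b   # residual A(X) - b
        grad = np.tensordot(r, A_ops, axes=(0, 0))              # sum_j r_j * A_j
        # Linear minimization over the nuclear-norm ball only needs the top
        # singular pair of the gradient (a rank-one SVD); a full SVD is used
        # here purely for brevity.
        U, _, Vt = np.linalg.svd(grad)
        V_fw = -tau * np.outer(U[:, 0], Vt[0, :])               # Frank-Wolfe vertex
        D = V_fw - X
        AD = np.tensordot(A_ops, D, axes=([1, 2], [0, 1]))
        denom = AD @ AD
        gamma = 1.0 if denom == 0.0 else float(np.clip(-(r @ AD) / denom, 0.0, 1.0))
        X = X + gamma * D                                       # exact line-search step
    return X
```

With exact line-search no step size needs to be tuned; the abstract's claim is that, under its sufficient condition for a unique rank-one optimum, such iterates converge linearly rather than at the generic $O(1/\epsilon)$ rate.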


Related articles

Linear Convergence of Variance-Reduced Stochastic Gradient without Strong Convexity

Stochastic gradient algorithms estimate the gradient based on only one or a few samples and enjoy low computational cost per iteration. They have been widely used in large-scale optimization problems. However, stochastic gradient algorithms are usually slow to converge and achieve sub-linear convergence rates, due to the inherent variance in the gradient computation. To accelerate...
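For context, the sketch below shows one standard variance-reduction scheme of this kind (an SVRG-style loop for least squares); the function and instance are illustrative assumptions, not code from the cited paper.

```python
import numpy as np

def svrg_least_squares(A, b, step=0.01, epochs=20, inner=None, seed=0):
    """SVRG-style loop for min_x 0.5/m * ||A x - b||^2.
    Each epoch computes one full gradient at a snapshot point and then takes
    cheap stochastic steps whose control variate removes the gradient variance
    that slows plain SGD to a sub-linear rate."""
    rng = np.random.default_rng(seed)
    m, d = A.shape
    inner = inner or 2 * m
    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / m        # full gradient at snapshot
        for _ in range(inner):
            i = rng.integers(m)
            gi = A[i] * (A[i] @ x - b[i])             # stochastic gradient at x
            gi_snap = A[i] * (A[i] @ x_snap - b[i])   # same sample at snapshot
            x -= step * (gi - gi_snap + full_grad)    # variance-reduced step
    return x
```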


ROP: Matrix Recovery via Rank-One Projections

Estimation of low-rank matrices is of significant interest in a range of contemporary applications. In this paper, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small low-rank perturbations. Both u...
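A small sketch of what a rank-one projection measurement model can look like, with assumed shapes and variable names (the recovery step itself, constrained nuclear norm minimization, is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, r, m = 10, 8, 2, 60

# Hypothetical rank-r ground truth and rank-one projection measurements
#   y_i = beta_i^T X gamma_i, i.e. Frobenius inner products <beta_i gamma_i^T, X>
X = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
B = rng.standard_normal((m, p))   # left measurement vectors beta_i (rows)
G = rng.standard_normal((m, q))   # right measurement vectors gamma_i (rows)
y = np.einsum("ip,pq,iq->i", B, X, G)

# Equivalent form that makes the "rank-one projection" reading explicit
y_check = np.array([np.vdot(np.outer(B[i], G[i]), X) for i in range(m)])
assert np.allclose(y, y_check)
```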


Low-rank matrix recovery via rank one tight frame measurements

The task of reconstructing a low rank matrix from incomplete linear measurements arises in areas such as machine learning, quantum state tomography and in the phase retrieval problem. In this note, we study the particular setup that the measurements are taken with respect to rank one matrices constructed from the elements of a random tight frame. We consider a convex optimization approach and s...
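As an illustration of the measurement setup, assuming a Parseval tight frame built from a matrix with orthonormal columns (one simple construction for illustration, not necessarily the one used in the note):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 18

# Rows of a matrix with orthonormal columns form a Parseval tight frame of R^n.
Q, _ = np.linalg.qr(rng.standard_normal((m, n)))    # Q has orthonormal columns
frame = Q                                            # frame elements f_1, ..., f_m
assert np.allclose(frame.T @ frame, np.eye(n))       # tightness: sum_j f_j f_j^T = I

# Rank-one measurements of a low-rank X built from the frame elements
X = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
y = np.einsum("ji,ik,jk->j", frame, X, frame)        # y_j = f_j^T X f_j
```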


Low rank matrix recovery from rank one measurements

We study the recovery of Hermitian low rank matrices $X \in \mathbb{C}^{n \times n}$ from undersampled measurements via nuclear norm minimization. We consider the particular scenario where the measurements are Frobenius inner products with random rank-one matrices of the form $a_j a_j^*$ for some measurement vectors $a_1, \dots, a_m$, i.e., the measurements are given by $y_j = \mathrm{tr}(X a_j a_j^*)$. The case where the matrix $X = x x^*$ to...
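The measurement model in this snippet can be reproduced directly; the sketch below also checks the phase-retrieval identity $y_j = |\langle a_j, x\rangle|^2$ for a rank-one $X = x x^*$ (sizes and seed are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 40

# Hypothetical rank-one Hermitian ground truth X = x x^*
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
X = np.outer(x, x.conj())

# Measurement vectors a_1, ..., a_m and rank-one measurements
#   y_j = tr(X a_j a_j^*) = a_j^* X a_j
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
y = np.array([np.real(a.conj() @ X @ a) for a in A])

# For X = x x^* these are exactly phase-retrieval measurements |<a_j, x>|^2
assert np.allclose(y, np.abs(A.conj() @ x) ** 2)
```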


Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity

We consider the convex-concave saddle point problem $\min_x \max_y f(x) + y^\top A x - g(y)$ where f is smooth and convex and g is smooth and strongly convex. We prove that if the coupling matrix A has full column rank, the vanilla primal-dual gradient method can achieve linear convergence even if f is not strongly convex. Our result generalizes previous work which either requires f and g to be quadratic fun...
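A minimal sketch of the vanilla primal-dual gradient method on a toy instance satisfying the abstract's assumptions (f convex but not strongly convex, g smooth and strongly convex, A with full column rank); the instance and step size are illustrative assumptions:

```python
import numpy as np

def primal_dual_gradient(grad_f, grad_g, A, x0, y0, step=0.02, iters=20000):
    """Vanilla primal-dual gradient method for
        min_x max_y  f(x) + <y, A x> - g(y):
    simultaneous gradient descent in x and gradient ascent in y."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        gx = grad_f(x) + A.T @ y          # partial gradient in x
        gy = A @ x - grad_g(y)            # partial gradient in y
        x, y = x - step * gx, y + step * gy
    return x, y

# Hypothetical instance: f(x) = <c, x> is convex but not strongly convex,
# g(y) = 0.5*||y||^2 is smooth and strongly convex, A has full column rank.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
c = rng.standard_normal(4)
x_hat, y_hat = primal_dual_gradient(lambda x: c, lambda y: y,
                                    A, np.zeros(4), np.zeros(6))
x_star = -np.linalg.solve(A.T @ A, c)     # closed-form primal solution
print(np.linalg.norm(x_hat - x_star))     # distance to the saddle point in x
```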



Journal

Title: Mathematical Programming

Year: 2022

ISSN: 0025-5610, 1436-4646

DOI: https://doi.org/10.1007/s10107-022-01821-8