Search results for: rank k numerical hulls
Number of results: 763,244
In this talk I will discuss some instances in quantum computing where numerical range techniques arise. I will also try to formulate some open problems. Elliptical range theorems for generalized numerical ranges of quadratic operators Speaker Chi-Kwong Li, William and Mary, [email protected] Co-authors Yiu-Tung Poon, Iowa State University, [email protected]; Nung-Sing Sze, University of Connect...
Principal components analysis (PCA) is a well-known technique for approximating a tabular data set by a low rank matrix. This dissertation extends the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types. This framework encompasses many well known techniques in data analysis, such as nonnegative matrix factorization, matrix compl...
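The dissertation above generalizes PCA beyond numerical data; for the classical numerical case it extends, the best rank-k approximation is given by the truncated SVD (Eckart–Young). A minimal numpy sketch of that baseline (illustrative only; the matrix, seed, and `k` are arbitrary choices, not from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))  # a small numerical data table

# Rank-k approximation via truncated SVD: the PCA-optimal low-rank
# approximation in the Frobenius norm (Eckart-Young theorem).
k = 3
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

assert np.linalg.matrix_rank(A_k) == k
# The approximation error equals the energy of the discarded singular values.
err = np.linalg.norm(A - A_k)
assert np.isclose(err, np.sqrt(np.sum(s[k:] ** 2)))
```

The generalized framework in the dissertation replaces the quadratic loss implicit here with losses suited to Boolean, categorical, and ordinal columns.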
For any n-by-n complex matrix A and any k, 1 ≤ k ≤ n, let Λk(A) = {λ ∈ C : X∗AX = λIk for some n-by-k X satisfying X∗X = Ik} be its rank-k numerical range. It is shown that if A is an n-by-n contraction, then Λk(A) = ∩{Λk(U) : U is an (n + dA)-by-(n + dA) unitary dilation of A}, where dA = rank (In − A∗A). This extends and refines previous results of Choi and Li on constrained unitary dilations...
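For k = 1 the definition above reduces to the classical numerical range W(A) = {x*Ax : x*x = 1}. A small numpy sketch that samples W(A) and checks a standard containment (the numerical radius is bounded by the operator norm); the matrix and sample count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Lambda_1(A) is the classical numerical range W(A): values x* A x
# over unit vectors x (here X is n-by-1 with X*X = I_1).
xs = rng.standard_normal((200, n)) + 1j * rng.standard_normal((200, n))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
points = np.einsum("ij,jk,ik->i", xs.conj(), A, xs)

# Every point of W(A) lies in the disk of radius ||A||_2 (spectral norm).
assert np.all(np.abs(points) <= np.linalg.norm(A, 2) + 1e-12)
```

Sampling Λk for k > 1 would instead require n-by-k isometries X with X*AX a scalar multiple of I_k, which is a much more constrained condition.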
The classical numerical range of a quadratic operator is an elliptical disk. This result is extended to different kinds of generalized numerical ranges. In particular, it is shown that for a given quadratic operator, the rank-k numerical range, the essential numerical range, and the q-numerical range are elliptical disks; the c-numerical range is a sum of elliptical disks, and the Davis-Wieland...
Consider a system of linear algebraic equations with a nonsingular n-by-n matrix A. When solving this system with GMRES, the relative residual norm at step k is bounded from above by the so-called ideal GMRES approximation. This bound is sharp (it is attained by the relative GMRES residual norm) in the case of a normal matrix A, but it need not characterize the worst-case GMRES behavior if A i...
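The residual norms discussed above come from GMRES's minimal-residual property: the step-k iterate minimizes ‖b − Ax‖ over the k-th Krylov subspace. A didactic numpy sketch of that property, computing each minimal residual by least squares over an explicit Krylov basis (numerically naive compared with the Arnoldi-based production algorithm; matrix, shift, and seed are arbitrary illustrative choices):

```python
import numpy as np

def gmres_residuals(A, b, m):
    """Relative residual norms ||b - A x_k|| / ||b|| for k = 1..m, where
    x_k minimizes the residual over span{b, Ab, ..., A^(k-1) b}.
    A didactic sketch via explicit least squares, not Arnoldi-based GMRES."""
    n = len(b)
    K = np.zeros((n, m))
    v = b.copy()
    for k in range(m):
        K[:, k] = v
        v = A @ v
    res = []
    for k in range(1, m + 1):
        y, *_ = np.linalg.lstsq(A @ K[:, :k], b, rcond=None)
        r = b - A @ (K[:, :k] @ y)
        res.append(np.linalg.norm(r) / np.linalg.norm(b))
    return res

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)  # nonsingular test matrix
b = rng.standard_normal(6)
r = gmres_residuals(A, b, 6)

# Residual norms are non-increasing, and GMRES finishes by step n.
assert all(r[i + 1] <= r[i] + 1e-12 for i in range(len(r) - 1))
assert r[-1] < 1e-6
```

The ideal GMRES quantity replaces the Krylov minimization for a specific b by a minimization over matrix polynomials, which is why it can overestimate the worst case for non-normal A.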
Existing routines, such as xGELSY or xGELSD in LAPACK, for solving rank-deficient least squares problems require O(mn²) operations to solve min ‖b − Ax‖, where A is an m-by-n matrix. We present a modification of the LAPACK routine xGELSY that requires O(mnk) operations, where k is the effective numerical rank of the matrix A. For low-rank matrices the modification is an order of magnitude faster tha...
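The effective numerical rank k in the abstract above is the number of singular values above a tolerance, and `numpy.linalg.lstsq` exposes the same cutoff idea through its `rcond` parameter. A small sketch (the sizes, seed, and tolerance rule are illustrative assumptions, not the paper's algorithm, which avoids the full SVD used here):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 50, 20, 5
# An m-by-n matrix of exact rank k (product of two thin Gaussian factors).
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
b = rng.standard_normal(m)

# Effective numerical rank: singular values above a scaled tolerance.
s = np.linalg.svd(A, compute_uv=False)
tol = max(m, n) * np.finfo(float).eps * s[0]
num_rank = int(np.sum(s > tol))
assert num_rank == k

# Minimum-norm least squares solution with the same cutoff:
# lstsq treats singular values below rcond * s[0] as zero.
x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=tol / s[0])
assert rank == k
```

The paper's point is that exploiting k directly brings the cost down to O(mnk), a large saving when k ≪ n.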
Selecting a small informative subset from a given dataset, also called column sampling, has drawn much attention in machine learning. To incorporate structured data information into column sampling, research efforts have been devoted to cases where data points are fitted with clusters, simplices, or general convex hulls. This paper aims to study nonconvex hull learning, which has rarely been i...
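A common column-sampling baseline against which structured methods like the one above are measured is norm-weighted sampling: pick columns with probability proportional to their squared norms, then measure how well their span reconstructs the matrix. A numpy sketch of that baseline (an illustrative standard technique, not the nonconvex-hull method of this paper; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 100))  # 100 data columns in R^30

# Sample c columns with probability proportional to squared column norms.
c = 20
p = np.sum(A**2, axis=0)
p /= p.sum()
idx = rng.choice(A.shape[1], size=c, replace=False, p=p)
C = A[:, idx]

# Project every column of A onto the span of the sampled columns.
coef, *_ = np.linalg.lstsq(C, A, rcond=None)
rel_err = np.linalg.norm(A - C @ coef) / np.linalg.norm(A)

# The sampled columns themselves are reproduced exactly, so the
# relative reconstruction error is strictly below 1.
assert rel_err < 1.0
```

Structured approaches (clusters, simplices, convex or nonconvex hulls) replace the independent sampling step with a geometric model of where informative columns lie.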
In this note we prove that if K is a compact set of m×n matrices containing an isolated point X with no rank-one connection into the convex hull of K \ {X}, then the rank-one convex hull separates as K = ( K \ {X} )rc ∪ {X}. This is an extension of a result of P. Pedregal, which holds for 2× 2 matrices.
The nearest point problem (NPP), i.e., finding the closest points between two disjoint convex hulls, has two classical solutions: the Gilbert–Schlesinger–Kozinec (GSK) and Mitchell–Dem’yanov–Malozemov (MDM) algorithms. When the convex hulls do intersect, the NPP has to be stated in terms of reduced convex hulls (RCHs), made up of convex pattern combinations whose coefficients are bounded by a μ val...
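For disjoint hulls, the NPP can be solved by a simple support-point iteration on the Minkowski difference of the two hulls. A didactic numpy sketch in that Gilbert/Frank–Wolfe spirit (an illustrative simplification, not the GSK or MDM algorithm from the abstract, and without the reduced-hull extension; the two test squares are an arbitrary example):

```python
import numpy as np

def hull_distance(P, Q, iters=500):
    """Distance between conv(P) and conv(Q) (rows = points) by minimizing
    ||w|| over the Minkowski difference conv(P) - conv(Q), using
    support points and exact line search. A didactic sketch of the NPP,
    not the GSK or MDM algorithm."""
    w = P[0] - Q[0]
    for _ in range(iters):
        # Support point of conv(P) - conv(Q) minimizing <w, .>.
        s = P[np.argmin(P @ w)] - Q[np.argmax(Q @ w)]
        d = s - w
        denom = d @ d
        if denom == 0.0:
            break  # w is already the nearest point of the difference set
        # Exact line search for min ||.|| on the segment [w, s].
        t = np.clip(-(w @ d) / denom, 0.0, 1.0)
        w = w + t * d
    return np.linalg.norm(w)

# Two disjoint unit squares, separated by a gap of 1 along the x-axis.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Q = P + np.array([2.0, 0.0])
assert abs(hull_distance(P, Q) - 1.0) < 1e-6
```

When the hulls intersect, ‖w‖ converges to 0 and no separating direction exists, which is exactly the situation the reduced convex hulls (with coefficient bound μ) are introduced to handle.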