Parallel Algorithms for Computing the Tensor-Train Decomposition


Abstract

The tensor-train (TT) decomposition expresses a tensor in a data-sparse format used in molecular simulations, high-order correlation functions, and optimization. In this paper, we propose four parallelizable algorithms that compute the TT format from various tensor inputs: (1) Parallel-TTSVD for traditional format, (2) PSTT and its variants for streaming data, (3) Tucker2TT for the Tucker format, and (4) TT-fADI for solutions of Sylvester equations. We provide theoretical guarantees of accuracy, parallelization methods, scaling analysis, and numerical results. For example, for a d-dimensional tensor, the two-sided sketching algorithm PSTT2 is shown to have a memory complexity that improves upon previous algorithms.
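For orientation, the classical sequential TT-SVD (Oseledets' algorithm) that parallel variants such as Parallel-TTSVD build on can be sketched in a few lines of NumPy. This is a minimal reference sketch of the standard algorithm, not an implementation of the paper's parallel methods:

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Sequential TT-SVD: sweep over the modes, at each step reshaping the
    remainder into a matrix and truncating its SVD. Returns a list of
    order-3 TT cores of shape (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    unfolding = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
        new_rank = max(1, int(np.sum(s > tol)))  # truncate small singular values
        cores.append(U[:, :new_rank].reshape(rank, dims[k], new_rank))
        # fold the remaining factor and absorb the next mode
        unfolding = (s[:new_rank, None] * Vt[:new_rank]).reshape(
            new_rank * dims[k + 1], -1)
        rank = new_rank
    cores.append(unfolding.reshape(rank, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Reassemble the full tensor from its TT cores (for verification)."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result[0, ..., 0]
```

The sequential dependence between sweep steps is exactly what the parallel algorithms in the paper are designed to break.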



Similar articles

Tensor Networks for Latent Variable Analysis. Part I: Algorithms for Tensor Train Decomposition

Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model which represents data as an ordered network of sub-tensors of order-2 or order-3 has, so far, not been widely considered in these fields, although this so-called tensor network decomposition has been long st...


A Randomized Tensor Train Singular Value Decomposition

The hierarchical SVD provides a quasi-best low rank approximation of high dimensional data in the hierarchical Tucker framework. Similar to the SVD for matrices, it provides a fundamental but expensive tool for tensor computations. In the present work we examine generalizations of randomized matrix decomposition methods to higher order tensors in the framework of the hierarchical tensors repres...
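The randomized decompositions referred to here rest on the Halko–Martinsson–Tropp range finder: a Gaussian sketch followed by a QR factorization yields an orthonormal basis Q with A ≈ Q(QᵀA), which can replace the exact SVD applied to each tensor unfolding. A minimal NumPy illustration of the matrix building block (not the hierarchical-tensor variant itself), assuming a known target sketch size:

```python
import numpy as np

def randomized_range(A, sketch_size, seed=0):
    """Randomized range finder: multiply A by a Gaussian test matrix and
    orthonormalize the result. Q approximately spans the range of A."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], sketch_size))
    Q, _ = np.linalg.qr(A @ Omega)  # orthonormal basis for the sampled range
    return Q
```

If A has numerical rank below `sketch_size`, the projection Q(QᵀA) recovers A to machine precision with probability one for a Gaussian sketch.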


Tensor Train decomposition on TensorFlow (T3F)

Tensor Train decomposition is used across many branches of machine learning, but until now it lacked an implementation with GPU support, batch processing, automatic differentiation, and versatile functionality for a Riemannian optimization framework, which takes into account the underlying manifold structure in order to construct efficient optimization methods. In this work, we propose a library th...


Computing the Gradient in Optimization Algorithms for the CP Decomposition in Constant Memory through Tensor Blocking

The construction of the gradient of the objective function in gradient-based optimization algorithms for computing an r-term CANDECOMP/PARAFAC (CP) decomposition of an unstructured dense tensor is a key computational kernel. The best technique for efficiently implementing this operation has a memory consumption that scales linearly with the number of terms r and sublinearly with the number of e...
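The gradient kernel discussed above is the matricized-tensor times Khatri–Rao product (MTTKRP): for a third-order tensor, the gradient with respect to the first CP factor is G = X₍₁₎(C ⊙ B). A minimal dense NumPy sketch of the kernel itself, not the blocked constant-memory variant the article describes:

```python
import numpy as np

def mttkrp_mode0(X, B, C):
    """MTTKRP for mode 0 of a third-order tensor X of shape (I, J, K):
    returns X_(1) (B khatri-rao C), the CP gradient w.r.t. factor A."""
    r = B.shape[1]
    # Khatri-Rao product: column-wise Kronecker, rows indexed by (j, k)
    KR = (B[:, None, :] * C[None, :, :]).reshape(-1, r)
    X1 = X.reshape(X.shape[0], -1)  # mode-1 unfolding, columns indexed (j, k)
    return X1 @ KR
```

For an exact CP tensor X with factors (A, B, C), this satisfies the identity G = A((BᵀB) ∗ (CᵀC)), where ∗ is the Hadamard product; the memory issue the article targets is that the explicit Khatri–Rao matrix KR grows linearly with the number of terms r.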


Parallel Algorithms for the Singular Value Decomposition




Journal

Journal title: SIAM Journal on Scientific Computing

Year: 2023

ISSN: 1095-7197, 1064-8275

DOI: https://doi.org/10.1137/21m146079x