We describe a new function approximation framework based on a continuous extension of the tensor-train decomposition. The approximation, termed a function-train (FT), results in a tensor-train structure whose cores are univariate functions. An advantage of the FT over discrete approaches is that it produces an adaptive approximation of tensor fibers that is not tied to any tensorized discretization procedure; indeed, the algorithm can be coupled with any univariate linear or nonlinear approximation procedure. Furthermore, the representation of low-rank functions in FT format enables efficient continuous computation: we can add, multiply, integrate, and differentiate functions in polynomial time with respect to dimension. Our approach is in the spirit of other continuous computation packages such as Chebfun, and yields an algorithm that requires the computation of "continuous" matrix factorizations such as the LU and QR decompositions of vector-valued functions. Our contributions include an algorithm for finding the maximum-volume submatrix of a matrix-valued function, a maximum-volume-based cross approximation algorithm for obtaining skeleton decompositions of vector-valued functions, a cross approximation algorithm for converting black-box functions into FT format, and a continuous rounding algorithm that re-approximates an FT by one of lower ranks. We demonstrate the benefits of our approach by integrating high-dimensional and discontinuous functions, and we apply it to a variety of approximation tasks.
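To give a flavor of the polynomial-cost continuous computation described above, the following is a minimal NumPy sketch (our own illustration, not the authors' implementation) of one such operation, integration. Because an FT factorizes f(x_1, ..., x_d) into a product of matrix-valued univariate cores, the integral over a hypercube reduces to a chained matrix product of integrated cores, so the cost grows linearly with dimension. The function name `integrate_ft` and the callable-core representation are assumptions made for this sketch.

```python
# Hypothetical sketch: integrating a function-train
# f(x_1, ..., x_d) = F_1(x_1) F_2(x_2) ... F_d(x_d), where each core F_k maps
# a scalar x_k to an (r_{k-1} x r_k) matrix of univariate function values.
import numpy as np

def integrate_ft(cores, a=0.0, b=1.0, nquad=32):
    """Integrate an FT over the hypercube [a, b]^d.

    cores : list of callables; cores[k](x) returns an (r_{k-1}, r_k) ndarray.
    Each univariate core is integrated with Gauss-Legendre quadrature, and the
    resulting matrices are chained, so the work is O(d * nquad * r^2 + d * r^3)
    rather than exponential in d.
    """
    nodes, weights = np.polynomial.legendre.leggauss(nquad)
    # Map quadrature nodes and weights from [-1, 1] to [a, b].
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    w = 0.5 * (b - a) * weights
    result = None
    for core in cores:
        integ = sum(wi * core(xi) for xi, wi in zip(x, w))  # (r_{k-1}, r_k)
        result = integ if result is None else result @ integ
    return result.item()  # boundary ranks r_0 = r_d = 1, so the product is 1x1

# Usage: a rank-1 FT for f(x) = exp(x_1 + ... + x_d) on [0, 1]^10;
# the exact integral is (e - 1)^d.
d = 10
cores = [lambda x: np.array([[np.exp(x)]]) for _ in range(d)]
print(integrate_ft(cores), (np.e - 1.0) ** d)
```

For this rank-1 example the quadrature result matches the exact value (e - 1)^d; higher-rank FTs follow the same pattern with larger core matrices, and operations such as addition and multiplication act analogously on the cores.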