We estimate the expressive power of certain deep neural networks (DNNs for short) on a class of countably-parametric, holomorphic maps $u : U \to \mathbb{R}$ on the parameter domain $U = [-1,1]^{\mathbb{N}}$. Dimension-independent rates of best $n$-term truncations of generalized polynomial chaos (gpc for short) approximations depend only on the summability exponent of the sequence of their gpc expansion coefficients. So-called $(b,\varepsilon)$-holomorphic maps $u$, with $b \in \ell^p(\mathbb{N})$ for some $p \in (0,1)$, are known to allow gpc expansions with coefficient sequences in $\ell^p(\mathbb{N})$. Such maps arise for example as response surfaces of parametric PDEs, with applications in PDE uncertainty quantification (UQ) for many mathematical models in engineering and the sciences. Up to logarithmic terms, we establish the dimension-independent approximation rate $1/p - 1$ for these functions in terms of the total number $N$ of units and weights in the DNN. It follows that certain DNN architectures can overcome the curse of dimensionality when expressing possibly countably-parametric, real-valued maps with a certain degree of sparsity in the sequences of their gpc expansion coefficients. We also obtain rates of expressive power of DNNs for countably-parametric maps $u : U \to V$, where $V$ is the Hilbert space $H^1_0([0,1])$.
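In schematic form (this is a paraphrase of the rate claim, not the paper's precise theorem: the norm on $U$, the constant $C > 0$, and the logarithmic exponent $\theta \ge 0$ are left unspecified here), the stated expression rate reads

\[
  \inf_{\substack{\tilde{u}_N\ \mathrm{DNN} \\ \mathrm{size}(\tilde{u}_N)\,\le\, N}}
  \bigl\| u - \tilde{u}_N \bigr\|
  \;\le\; C\, N^{-(1/p-1)}\,(\log N)^{\theta},
  \qquad b \in \ell^p(\mathbb{N}),\ p \in (0,1),
\]

where $\mathrm{size}(\tilde{u}_N)$ counts the total number of units and weights in the DNN realization $\tilde{u}_N$, and $u$ is $(b,\varepsilon)$-holomorphic. The exponent $1/p - 1$ matches the best $n$-term gpc truncation rate, which is how the DNN bound inherits its dimension independence.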