Approximation Rates and Metric Entropy of Shallow Neural Networks
Jonathan Siegel
Pennsylvania State University
Abstract:
We consider the problem of approximating high-dimensional functions using shallow neural networks, and more generally by sparse linear combinations of elements of a dictionary. We begin by introducing natural spaces of functions that can be efficiently approximated in this way. We then derive the metric entropy of the unit balls in these spaces, which allows us to calculate optimal rates of approximation by shallow neural networks. This gives a precise measure of how large this class of functions is and of how well shallow neural networks overcome the curse of dimensionality. Finally, we describe an algorithm that uses this space of functions to solve high-dimensional PDEs.
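For orientation, here is a minimal sketch of the kind of expansion the abstract refers to; the notation below is an assumption for illustration and not necessarily the speaker's. A shallow (single-hidden-layer) network of width $n$ with activation $\sigma$ has the form

  $f_n(x) = \sum_{i=1}^{n} a_i \, \sigma(\omega_i \cdot x + b_i)$,

that is, an $n$-term (sparse) linear combination of elements of the dictionary of single neurons $\mathbb{D} = \{\, \sigma(\omega \cdot x + b) : \omega \in \mathbb{R}^d,\ b \in \mathbb{R} \,\}$. The spaces mentioned in the abstract can be thought of, roughly, as consisting of functions that admit such $n$-term approximations at favorable rates as the dimension $d$ grows.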
Tuesday, May 11, 2021
11:00 AM, Zoom ID 939 3177 8552