Reducing Training Complexity in Empirical Quadrature-Based Model Reduction via Structured Compression
Authors
Björn Liljegren-Sailer
Abstract
Model order reduction seeks to approximate large-scale dynamical systems by lower-dimensional reduced models. For linear systems, a small reduced dimension directly translates into low computational cost, ensuring online efficiency. This property does not generally hold for nonlinear systems, where an additional approximation of the nonlinear terms -- known as complexity reduction -- is required. To achieve online efficiency, empirical quadrature and cell-based empirical cubature are among the most effective complexity reduction techniques. However, existing offline training algorithms can be prohibitively expensive because they operate on raw snapshot data of all nonlinear integrands associated with the reduced model. In this paper, we introduce a preprocessing approach based on a specific structured compression of the training data. Its key feature is that it scales only with the number of collected snapshots, rather than additionally with the reduced model dimension. Overall, this yields roughly an order-of-magnitude reduction in offline computational cost and memory requirements, thereby enabling the application of these complexity reduction methods to larger-scale problems. Accuracy is preserved, as indicated by our error analysis and demonstrated through numerical examples.
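To illustrate the scaling idea described above, the following is a minimal NumPy/SciPy sketch -- not the paper's actual algorithm -- of empirical-quadrature training on compressed data. In a generic setup, the training constraints form a matrix with one column per (snapshot, reduced test function) pair, i.e. n_s * r columns; a thin SVD-based compression (used here purely as a stand-in for the structured compression) reduces this to on the order of n_s columns before the sparse nonnegative weight computation. All dimensions, variable names, and the NNLS solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_q = 200   # number of quadrature points / cells (illustrative)
n_s = 15    # number of collected snapshots (illustrative)
r = 10      # reduced model dimension (illustrative)

# Raw training data: one constraint column per (snapshot, reduced test
# function) pair, so the column count scales with n_s * r.
G = rng.standard_normal((n_q, n_s * r))

# Stand-in compression: a thin SVD keeps only k dominant left singular
# directions, so the compressed system scales with n_s, not n_s * r.
U, s, _ = np.linalg.svd(G, full_matrices=False)
k = n_s
G_c = U[:, :k] * s[:k]          # compressed constraint matrix, n_q x k

# Right-hand side: constraints evaluated under the full quadrature rule
# (here: uniform reference weights).
w_full = np.full(n_q, 1.0 / n_q)
b_c = G_c.T @ w_full

# Sparse nonnegative quadrature weights from the compressed system;
# NNLS now solves a k x n_q problem instead of (n_s * r) x n_q.
w, residual = nnls(G_c.T, b_c)
```

The point of the sketch is the problem-size reduction passed to the weight solver: the NNLS step sees k rows regardless of r, which is the source of the offline savings the abstract refers to.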