Efficient “Learn the Dynamics” Modules Are All You Need
PhD Thesis: The University of Wisconsin-Madison
Many challenges in modern machine learning, ranging from modeling long temporal sequences to implicit neural signal representations and generative models, are governed by underlying physical processes or dynamical systems. These systems are naturally expressed through mathematical objects such as integral transforms, operators, and kernels. This dissertation investigates how neural networks can learn, approximate, and exploit such objects by drawing on ideas from integral formulations, dynamical systems, and multi-resolution analysis. The overarching theme is that operator-centric perspectives provide a coherent mathematical foundation for building scalable, stable, and efficient neural architectures. Guided by this viewpoint, the thesis introduces multi-resolution operator parameterizations for controlled differential equations, functional operator mappings for implicit neural representations, and learnable quadrature mechanisms that improve how neural networks approximate solutions of differential equations, including in physics-informed neural networks. These concepts extend further to large-scale neuroimaging, where a dynamical systems framework yields an efficient foundation model for resting-state fMRI, and to generative modeling, where physics-inspired smoothing and energy principles shape more coherent and geometry-aware diffusion processes. Although the applications span diverse domains, they share a single methodological principle: learning becomes more effective when the neural architecture mirrors the structure of the governing physical or dynamical process and when the algorithm is informed by the modality of the data and the domain. This dissertation develops and unifies these ideas, presenting an operator-driven view that advances the modeling of dynamics, the solution of differential equations, the representation of high-dimensional signals, and the guidance of modern generative models.