

Open sourcing PDEArena



We are open sourcing PDEArena, a modern, scalable, and easy-to-use PDE surrogate benchmarking framework. PDEArena is designed to train and evaluate neural surrogates for partial differential equations (PDEs) at scale. To that end, PDEArena contains state-of-the-art implementations of more than 20 recently proposed PDE surrogate architectures (or combinations thereof), with more coming soon.

Scaling up deep learning models has led to unprecedented success in computer vision and natural language processing, and deep learning holds immense promise for overcoming the computational expense of standard PDE solution techniques. However, scaling PDE surrogate models requires substantial engineering, both for distributed training and for data loading. For example, many currently available open-source neural PDE libraries assume that surrogates are run only on exactly those underlying PDEs they were trained on, and often that training uses at most a single GPU. As a result, current research on PDE surrogates often misses out on the benefits of scale.

Thanks to the use of PyTorch Lightning, experiments are simple to run at any scale. In its current release, PDEArena allows you to train models on four different fluid mechanics and electrodynamics datasets; both the data-generation code and the datasets themselves are available, with more coming soon. Furthermore, PDEArena aims to establish strong baselines for neural PDE surrogates and thereby drive the field forward together. The repo is therefore designed to be easily extended, both with new models and with new datasets.
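
To give a flavor of what this looks like in practice, here is a minimal, self-contained sketch of a Lightning-style training loop. The surrogate, the random stand-in data, and all names below are illustrative placeholders rather than PDEArena's actual modules; the point is simply that scaling from one GPU to many is essentially a Trainer-level configuration change.

```python
# Minimal sketch (not PDEArena's actual API): a toy one-step PDE surrogate
# trained with PyTorch Lightning. All names and data below are placeholders.
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class ToySurrogate(pl.LightningModule):
    """Hypothetical surrogate: maps the field at time t to the field at time t+1."""

    def __init__(self, channels: int = 1, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, u):
        return self.net(u)

    def training_step(self, batch, batch_idx):
        u_t, u_next = batch
        loss = nn.functional.mse_loss(self(u_t), u_next)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-3)


if __name__ == "__main__":
    # Random stand-in data: 64 snapshots of a scalar field on a 32x32 grid.
    u_t = torch.randn(64, 1, 32, 32)
    u_next = torch.randn(64, 1, 32, 32)
    loader = DataLoader(TensorDataset(u_t, u_next), batch_size=8)

    # Scaling out is a Trainer-level change, e.g. devices=4, strategy="ddp".
    trainer = pl.Trainer(max_epochs=1, accelerator="auto", devices=1)
    trainer.fit(ToySurrogate(), loader)
```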

We used PDEArena in our recent paper “Towards Multi-spatiotemporal-scale Generalized PDE Modeling” to compare modern UNets with other state-of-the-art neural PDE surrogate learning approaches. PDEArena’s simplicity and scalability allowed us to quickly iterate on different UNet variants: from the 2015 version to modern UNets and our own variants thereof. We could also easily compare tradeoffs such as runtime and GPU memory requirements against other architectures, including ResNets, dilated ResNets, and various Fourier-based approaches.

Table 1: Comparison of parameter count, runtime, and memory requirements of various PDE surrogate architectures.
More details can be found in the documentation.
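
For readers who want to reproduce this kind of comparison on their own hardware, the following is a rough sketch of how forward-pass runtime and peak GPU memory can be measured in plain PyTorch. It is illustrative only and is not the exact benchmarking code behind Table 1; the `profile_forward` helper and the placeholder model are assumptions for the example.

```python
# Rough sketch of measuring per-model runtime and peak GPU memory,
# in the spirit of Table 1. Not PDEArena's benchmarking code.
import time
import torch
import torch.nn as nn


def profile_forward(model: nn.Module, batch: torch.Tensor, warmup: int = 3, iters: int = 10):
    """Return (mean seconds per forward pass, peak GPU memory in MB)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    batch = batch.to(device)

    with torch.no_grad():
        for _ in range(warmup):          # warm up kernels and the allocator
            model(batch)
        if device == "cuda":
            torch.cuda.reset_peak_memory_stats()
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        if device == "cuda":
            torch.cuda.synchronize()     # wait for async kernels before stopping the clock
        elapsed = (time.perf_counter() - start) / iters

    peak_mb = torch.cuda.max_memory_allocated() / 2**20 if device == "cuda" else 0.0
    return elapsed, peak_mb


if __name__ == "__main__":
    # Placeholder surrogate; swap in any architecture you want to compare.
    model = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.GELU(), nn.Conv2d(32, 1, 3, padding=1))
    runtime, memory = profile_forward(model, torch.randn(8, 1, 128, 128))
    print(f"{runtime * 1e3:.2f} ms / forward, {memory:.1f} MB peak GPU memory")
```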

Trying out these models on a new PDE should be as simple as writing a data loader for your PDE dataset. We hope this will lead to many more comparisons across the vast design space of PDE surrogates.
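
As a rough illustration, such a data loader might look something like the sketch below. The HDF5 file layout, the dataset key, and the class name are hypothetical assumptions for this example and will depend on how your trajectories are actually stored.

```python
# Hypothetical sketch of a data loader for a new PDE dataset.
# Assumed layout: an HDF5 dataset "u" with shape (trajectories, time, H, W).
import h5py
import torch
from torch.utils.data import Dataset, DataLoader


class MyPDEDataset(Dataset):
    """Yields (u_t, u_{t+1}) field pairs from stored trajectories."""

    def __init__(self, path: str):
        with h5py.File(path, "r") as f:
            self.u = torch.from_numpy(f["u"][:]).float()
        self.n_traj, self.n_steps = self.u.shape[0], self.u.shape[1]

    def __len__(self):
        return self.n_traj * (self.n_steps - 1)

    def __getitem__(self, idx):
        traj, t = divmod(idx, self.n_steps - 1)
        # Add a channel dimension so models see (C, H, W) tensors.
        return self.u[traj, t].unsqueeze(0), self.u[traj, t + 1].unsqueeze(0)


# Example usage:
# loader = DataLoader(MyPDEDataset("my_pde.h5"), batch_size=8, shuffle=True)
```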

Note that this is not a one-time release. We use PDEArena extensively in our daily research at Microsoft and plan to continue maintaining it while adding new functionality over time. We are very eager to receive contributions from the wider PDE surrogate learning community.


This work is being undertaken by members of Microsoft Autonomous Systems and Robotics Research and Microsoft Research AI4Science. The researchers behind this project are Jayesh K. Gupta and Johannes Brandstetter.