Microsoft Research Blog

Graph neural networks

  1. E2Former: An Efficient and Equivariant Transformer with Linear-Scaling Tensor Products 

    January 30, 2025

    Equivariant Graph Neural Networks (EGNNs) have demonstrated significant success in modeling microscale systems, including those in chemistry, biology and materials science. However, EGNNs face substantial computational challenges due to the high cost of constructing edge features via spherical tensor products, making them impractical for large-scale…

  2. Can Graph Learning Improve Task Planning? 

    December 1, 2024

    Task planning is emerging as an important research topic alongside the development of large language models (LLMs). It aims to break down complex user requests into solvable sub-tasks, thereby fulfilling the original requests. In this context, the sub-tasks can be naturally viewed as a graph,…

  3. Neural P$^3$M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs 

    September 25, 2024

    Geometric graph neural networks (GNNs) have emerged as powerful tools for modeling molecular geometry. However, they encounter limitations in effectively capturing long-range interactions in large molecular systems. To address this challenge, we introduce Neural P$^3$M, a versatile enhancer of geometric GNNs to expand the scope…

  4. FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations 

    October 1, 2023

    Unsupervised node representations learnt using contrastive learning-based methods have shown good performance on downstream tasks. However, these methods rely on augmentations that mimic low-pass filters, limiting their performance on tasks that require different parts of the eigen-spectrum. This paper presents a simple filter-based augmentation method to capture different…

  5. Graph Neural Networks for Wireless Communications: From Theory to Practice 

    November 1, 2022 | Yifei Shen, Jun Zhang, Shenghui Song, and Khaled Ben Letaief

    Deep learning-based approaches have been developed to solve challenging problems in wireless communications, leading to promising results. Early attempts adopted neural network architectures inherited from applications such as computer vision. These often yield poor performance in large-scale networks (i.e., poor scalability) and unseen network…

  6. A Piece-wise Polynomial Filtering Approach for Graph Neural Networks 

    August 4, 2022

    Graph Neural Networks (GNNs) exploit signals from node features and the input graph topology to improve node classification task performance. Recently proposed GNNs work across a variety of homophilic and heterophilic graphs. Among these, models relying on polynomial graph filters have shown promise. We observe…

  7. A Piece-wise Polynomial Filtering Approach for Graph Neural Networks 

    May 1, 2022

    Graph Neural Networks (GNNs) exploit signals from node features and the input graph topology to improve node classification task performance. However, these models tend to perform poorly on heterophilic graphs, where connected nodes have different labels. Recently proposed GNNs work across graphs having varying levels…

  8. Ada-GNN: Adapting to Local Patterns for Improving Graph Neural Networks 

    February 1, 2022

    Graph Neural Networks (GNNs) have demonstrated strong capability in mining various kinds of graph-structured data. Since real-world graphs are usually large-scale, training scalable GNNs has become one of the research trends in recent years. Existing methods only produce one single model to serve all…

  9. Graph Pointer Neural Networks 

    January 3, 2022

    Graph Neural Networks (GNNs) have shown advantages in various graph-based applications. Most existing GNNs assume strong homophily of graph structure and apply permutation-invariant local aggregation of neighbors to learn a representation for each node. However, they fail to generalize to heterophilic graphs, where most neighboring…

  10. Adaptive Diffusion in Graph Neural Networks 

    November 1, 2021

    The success of graph neural networks (GNNs) largely relies on the process of aggregating information from neighbors defined by the input graph structures. Notably, message passing based GNNs, e.g., graph convolutional networks, leverage the immediate neighbors of each node during the aggregation process, and recently,…
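Several of the entries above (e.g., Graph Pointer Neural Networks and Adaptive Diffusion in Graph Neural Networks) refer to permutation-invariant aggregation over immediate neighbors and to graph diffusion that reaches beyond one hop. The following is a minimal NumPy sketch of both ideas on a toy path graph; the function names and the truncated, personalized-PageRank-style diffusion weights are illustrative assumptions, not the methods from any specific post above.

```python
import numpy as np

def aggregate_mean(features, adjacency):
    """One round of permutation-invariant message passing:
    each node averages the features of its immediate neighbors.
    Isolated nodes keep a zero vector."""
    degrees = adjacency.sum(axis=1, keepdims=True)          # (n, 1)
    return (adjacency @ features) / np.maximum(degrees, 1)  # (n, d)

def diffusion_matrix(adjacency, alpha=0.15, k=10):
    """Truncated diffusion with personalized-PageRank-style weights
    (an illustrative choice): S = sum_{j=0..k} alpha*(1-alpha)^j * T^j,
    where T is the row-normalized transition matrix. Larger j terms
    mix in information from nodes beyond the immediate neighbors."""
    n = adjacency.shape[0]
    degrees = adjacency.sum(axis=1, keepdims=True)
    T = adjacency / np.maximum(degrees, 1)  # row-stochastic transitions
    S, Tj = np.zeros((n, n)), np.eye(n)
    for j in range(k + 1):
        S += alpha * (1 - alpha) ** j * Tj
        Tj = Tj @ T
    return S

# Toy path graph 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0], [2.0], [3.0]])
H = aggregate_mean(X, A)   # node 1 averages nodes 0 and 2 -> 2.0
S = diffusion_matrix(A)    # dense multi-hop propagation weights
```

Because `T` is row-stochastic, each row of `S` sums to the geometric partial sum `1 - (1 - alpha)^(k+1)`, so the diffusion acts as a normalized blend of 0-hop through k-hop neighborhoods.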