Graph Neural Networks

Tutorial at the 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025)

GNNs have emerged as a crucial tool for machine learning on graphs and have been a rapidly growing topic in both fundamental research and industry applications. Indeed, much as the 2010s were the decade of convolutional neural networks (CNNs) applied to learning from images and time series, the 2020s are shaping up to be the decade of GNNs for learning on graphs, alongside attention-based architectures such as graph transformers. This is the right time to learn about GNNs and their power in processing large-scale graph-structured data across a wide array of applications. With the emergence of large-scale systems, deploying scalable GNN-based solutions is a natural choice, and doing so calls for an understanding of their fundamental properties, which can guide practical GNN design.

In this tutorial, we present GNNs as generalizations of CNNs, which hinge upon generalizing convolutions in time and space to convolutions on graph data. We will emphasize how the use of graph convolutions enables scalability to high-dimensional and non-Euclidean problems. We also explore four fundamental properties of GNNs: equivariance to permutations, stability to deformations, transferability across scales, and generalization to unseen data. To illustrate the excellent performance of GNNs, we will showcase their use in diverse applications, spanning recommendation systems, communication networks, bioinformatics, autonomous multi-agent systems, and data manifolds. Ultimately, this tutorial will provide a broad overview of convolutional information processing on graphs and enable participants to tackle a variety of graph machine learning problems.

Tutorial Description

The tutorial is organized into four modules that explore the foundational principles of GNNs, building an understanding of their properties and providing guidance for their use in practice. The first module describes the general setup of machine learning problems on graphs and introduces graph convolutional filters and GNNs. The second module describes the stability properties of graph filters and GNNs, along with their biological applications. The third module presents graphons, graphon neural networks, and their use in analyzing the transferability of GNNs in the limit of large-scale graphs. The fourth module introduces convolutional filters and neural networks on manifolds and their use in analyzing the statistical generalization of GNNs.

Machine Learning on Graphs: Graph Convolutions and Graph Neural Networks

Download Slides

We first describe how graphs can represent pairwise similarities in various applications with a common feature: they carry data on their nodes from which we want to extract information. Learning these data-to-information maps can be formulated as empirical risk minimization (ERM) problems on graph-structured data, and GNNs are parameterizations of such ERM problems.
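
To fix ideas, a generic form of this ERM problem is sketched below; the notation (shift operator S, parameterized map Φ, loss ℓ) is ours and serves only as orientation, not as notation fixed by the tutorial:

    \min_{\theta} \; \frac{1}{T} \sum_{t=1}^{T} \ell\big( \Phi(\mathbf{x}_t; \mathbf{S}, \theta), \, \mathbf{y}_t \big),

where S is a matrix representation of the graph (e.g., its adjacency matrix), the pairs (x_t, y_t) are training samples of node data and target information, and Φ(·; S, θ) is the parameterization, such as a graph filter or a GNN, whose parameters θ are learned.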

We then introduce the definition of the graph convolution: a polynomial in a matrix representation of the graph. From this definition, we build a graph perceptron by applying a pointwise nonlinear function to the output of a graph convolutional filter. Graph perceptrons are composed (or layered) to build a multilayer GNN, and individual layers are augmented from single filters to filter banks to build multiple-feature GNNs.
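
As a minimal illustration of these definitions, here is a sketch in NumPy; the function names and the toy graph are ours, chosen only for illustration:

    import numpy as np

    def graph_filter(S, x, h):
        # Graph convolution: a polynomial in the graph shift operator S
        # (e.g., the adjacency or Laplacian matrix), applied to a signal x
        # with one value per node: y = sum_k h[k] * S^k x.
        y = np.zeros_like(x)
        Skx = x.copy()                     # S^0 x
        for hk in h:
            y += hk * Skx
            Skx = S @ Skx                  # advance to the next power of S
        return y

    def graph_perceptron(S, x, h):
        # A graph perceptron: a pointwise nonlinearity (here, ReLU)
        # applied to the output of a graph convolutional filter.
        return np.maximum(graph_filter(S, x, h), 0.0)

    # A two-layer GNN obtained by composing (layering) graph perceptrons,
    # on a hypothetical 4-node graph with arbitrary filter taps.
    S = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    x = np.random.randn(4)
    x1 = graph_perceptron(S, x, h=[0.5, 0.3, 0.1])   # layer 1
    x2 = graph_perceptron(S, x1, h=[0.4, 0.2])       # layer 2: the GNN output

Multiple-feature GNNs replace the scalar taps h_k with matrices that mix feature channels; the structure of the computation is otherwise unchanged.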

We further extend the definition of graph convolution to graph transformers.

  • Applications: We use recommendation systems to show how graphs can model practical scenarios. Wireless networks are then used to illustrate that learning over graphs can enhance the performance of cyber-physical systems.

Equivariance and Stability to Deformations

Download Slides

We first show that graph convolutional filters and GNNs are equivariant to graph permutations. This implies that graph nodes with similar neighbor sets and observations perform the same operations, which in turn explains why graph filters outperform linear transforms and why GNNs outperform fully-connected neural networks.
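
This property is easy to verify numerically. The following sketch (our own check, with an arbitrary symmetric matrix standing in for a graph) confirms that relabeling the nodes and permuting the input signal permutes the filter output in the same way:

    import numpy as np

    def graph_filter(S, x, h):
        # y = sum_k h[k] * S^k x, as defined in the first module.
        y, Skx = np.zeros_like(x), x.copy()
        for hk in h:
            y += hk * Skx
            Skx = S @ Skx
        return y

    rng = np.random.default_rng(0)
    n = 6
    S = rng.random((n, n)); S = (S + S.T) / 2     # symmetric "graph" matrix
    x = rng.standard_normal(n)                    # signal on the nodes
    h = [1.0, 0.5, 0.25]                          # arbitrary filter taps

    P = np.eye(n)[rng.permutation(n)]             # random permutation matrix
    lhs = graph_filter(P.T @ S @ P, P.T @ x, h)   # filter on the relabeled graph
    rhs = P.T @ graph_filter(S, x, h)             # relabeled output of the original
    assert np.allclose(lhs, rhs)                  # equivariance: the two coincide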

Next, we show that graph convolutional filters and GNNs are stable to perturbations. Since GNNs possess better stability than graph filters at the same level of discriminability, this further explains why GNNs outperform graph filters.
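
Schematically, stability results of this type take the following form; this is our paraphrase with constants and norms left abstract, not a verbatim theorem from the tutorial:

    \| \Phi(\mathbf{S}; \mathbf{x}) - \Phi(\hat{\mathbf{S}}; \mathbf{x}) \| \;\le\; C \, d(\mathbf{S}, \hat{\mathbf{S}}) \, \| \mathbf{x} \|,

where Ŝ is a perturbed (deformed) version of the graph S and d(·, ·) measures the size of the perturbation. For graph filters, lowering the constant C forces the filter to be less discriminative; the pointwise nonlinearities of a GNN relax this trade-off, which is the sense in which GNNs are more stable at the same level of discriminability.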

Finally, we briefly introduce applications of GNNs in clinical and biological scenarios.

  • Applications: We show how GNNs, including graph transformers, outperform linear and fully-connected neural networks in clinical and biological settings, such as deep learning on electronic health records (EHRs), molecules, and proteins.

Graphon Neural Networks and Transferability at Scale

Download Slides

A graphon is a bounded, symmetric, measurable function on the unit square, representing the limit of a sequence of graphs as the number of nodes and edges approaches infinity. We introduce this concept and establish the convolution operation and a convolutional neural network architecture on graphons.
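
In symbols, and as a brief orientation using standard notation from the graphon literature: a graphon W maps [0,1]² to [0,1] and induces a shift operator on signals X defined on the unit interval, with graphon convolutions being polynomials in this operator, in direct analogy with graph convolutions:

    (T_W X)(u) = \int_0^1 W(u, v) \, X(v) \, dv,
    \qquad
    \Phi(X; W, h) = \sum_{k=0}^{K} h_k \, T_W^k X.

A graphon neural network then follows exactly as in the graph case, by composing such filters with pointwise nonlinearities.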

This framework provides tools and insights that facilitate the understanding of graph filters and GNNs when the graph has a large number of nodes. Using graphons, we can translate graph data processing into harmonic analysis on the unit interval, allowing us to exploit consolidated tools from functional analysis. As a sequence of graphs converges to its limit object, the graphon, the outputs of GNNs on those graphs converge to the output of the corresponding limit architecture, the graphon neural network.

The finite-sample version of this convergence result shows that the error incurred when transferring a GNN across graphs of different sizes is bounded by the inverse of the graph sizes, demonstrating the transferability property of GNNs.
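
Schematically, and with constants left abstract (our paraphrase of the flavor of the result): if S_{n_1} and S_{n_2} are graphs with n_1 and n_2 nodes sampled from the same graphon, then for a GNN with fixed parameters θ,

    \| \Phi(\mathbf{S}_{n_1}; \theta) - \Phi(\mathbf{S}_{n_2}; \theta) \| \;\le\; \frac{C_1}{n_1} + \frac{C_2}{n_2},

where the two outputs are compared after identifying each graph signal with a signal on the unit interval.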

  • Applications: We show the transferability of GNNs across scales in the flocking of multi-agent systems. Specifically, transferability allows training on a smaller, representative subgraph and applying the trained GNN to a larger graph while ensuring consistent performance.

Manifold Neural Networks and Generalization Analysis

Download Slides

We introduce the manifold model to overcome the density limitation of graphs sampled from graphons and to allow modeling graphs with geometric or topological structure. We define convolution operations over the manifold and, by establishing a convolutional manifold neural network architecture, build the connection between neural networks on graphs and on manifolds.
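
As a sketch of one common construction, mirroring the polynomial graph filters of the first module (the exact parameterization covered in the tutorial may differ), manifold filters can be written as functions of the Laplace-Beltrami operator L of the manifold:

    \Phi(f; \mathcal{L}, h) = \sum_{k=0}^{K} h_k \, \mathcal{L}^k f,

where f is a signal defined on the manifold. The bridge to graphs is that the Laplacians of graphs sampled from the manifold converge to the Laplace-Beltrami operator, so graph filters and GNNs converge to their manifold counterparts.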

We then introduce and discuss generalization results for GNNs on graphs sampled from a manifold, which are a consequence of the fact that these GNNs converge to the manifold neural network. Additionally, we demonstrate that this generalization is stable to mismatches in the limit model, which attests to the robustness of GNN generalization.
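
The logic behind these results can be sketched as a two-step decomposition; this is our schematic, with the performance measure and constants elided:

    \text{generalization gap} \;\lesssim\; \| \Phi(\mathbf{S}_{\mathrm{train}}) - \Phi(\mathcal{M}) \| + \| \Phi(\mathcal{M}) - \Phi(\mathbf{S}_{\mathrm{test}}) \|,

where Φ(M) denotes the limiting manifold neural network: both terms vanish as the graphs are sampled more densely from the manifold M, and the stability to limit model mismatch controls what happens when the manifold itself is only approximately correct.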

  • Applications: We use citation networks as empirical support for node classification tasks and point cloud classification models for graph classification tasks. We further introduce the image manifold as a high-dimensional manifold to illustrate the generalization of GNNs.

Previous Delivery Experience

This tutorial is based on ESE 5140: Graph Neural Networks, which has been offered at the University of Pennsylvania (Penn) five times. The course is popular and well regarded, receiving an average teaching evaluation score of 3.84/4.00. Video lectures and slides are publicly available on the course website and have accumulated 122 thousand views and 4 thousand hours of watch time over the last four years. The website has been visited more than 800 thousand times.

Alejandro Ribeiro developed this course in 2020 and taught it in 2020, 2021, 2023, and 2024. The course consists of two sessions capped at 50 students each. Luana Ruiz and Zhiyang Wang participated in the material development in 2020 and served as TAs in 2020 and 2021. Navid NaderiAlizadeh was a lecturer in 2022. A short course on Graph Neural Networks was taught by Navid NaderiAlizadeh, Alejandro Ribeiro, and Luana Ruiz at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) in 2023, capped at 40 students.

Target Audience

This tutorial is of interest to a broad audience and has a low entry barrier. The former is true because it introduces the theory and practice of GNNs. The latter is true because it builds GNNs from basic data processing principles and is thus accessible to any researcher with knowledge of convolutions in spatial and spectral domains. Researchers will finish this tutorial with experience in formulating and training models with GNNs in practical problems, along with a good understanding of their fundamental properties.

Presenters’ Biographies

Navid NaderiAlizadeh (navid.naderi@duke.edu) received the B.S. degree in electrical engineering from Sharif University of Technology in 2011, the M.S. degree in electrical and computer engineering from Cornell University, and the Ph.D. degree in electrical engineering from the University of Southern California. Upon graduating with his Ph.D., he spent four years as a Research Scientist at Intel Labs and HRL Laboratories. He is now an Assistant Research Professor in the Department of Biostatistics and Bioinformatics at Duke University. Prior to that, he was a Postdoctoral Researcher at the University of Pennsylvania. Navid’s current research interests span the foundations of machine learning, artificial intelligence, and signal processing and their applications in developing novel methods for analyzing biological data. Navid was a 2011 Irwin and Joan Jacobs Scholar, a 2015–16 Ming Hsieh Institute Ph.D. Scholar, and a Shannon Centennial Student Competition finalist at Nokia Bell Labs in 2016. He has served as an Associate Editor for the IEEE JSAC and as a Guest Editor for the IEEE IoT Magazine. He was also a co-organizer of the MLSys 2023 Workshop on Resource-Constrained Learning in Wireless Networks and the ICML 2024 Workshop on Accessible and Efficient Foundation Models for Biological Discovery.

Alejandro Ribeiro (aribeiro@seas.upenn.edu) received the B.Sc. degree in electrical engineering from the Universidad de la Republica Oriental del Uruguay in 1998 and the M.Sc. and Ph.D. degrees in electrical engineering from the University of Minnesota in 2005 and 2007. He joined the University of Pennsylvania (Penn) in 2008, where he is currently Professor of Electrical and Systems Engineering. His research is in wireless autonomous networks, machine learning on network data, and distributed collaborative learning. Papers coauthored by Dr. Ribeiro received the 2022 IEEE Signal Processing Society Best Paper Award, the 2022 IEEE Brain Initiative Student Paper Award, the 2021 Cambridge Ring Publication of the Year Award, the 2020 IEEE Signal Processing Society Young Author Best Paper Award, the 2014 O. Hugo Schuck Best Paper Award, and paper awards at EUSIPCO, IEEE ICASSP, IEEE CDC, IEEE SSP, IEEE SAM, the Asilomar SSC Conference, and ACC. His teaching has been recognized with the 2017 Lindback Award for distinguished teaching and the 2012 S. Reid Warren, Jr. Award presented by Penn’s undergraduate student body for outstanding teaching. Dr. Ribeiro received an Outstanding Researcher Award from Intel University Research Programs in 2019. He is a Penn Fellow, class of 2015, and a Fulbright Scholar, class of 2003.

Luana Ruiz (lrubini1@jh.edu) received the Ph.D. degree in electrical engineering from the University of Pennsylvania in 2022, and the M.Eng. and B.Eng. double degree in electrical engineering from the École Supérieure d’Electricité and the University of São Paulo in 2017. She is an Assistant Professor in the Department of Applied Mathematics and Statistics and the MINDS and DSAI Institutes at Johns Hopkins University, as well as with the Electrical and Computer Engineering and Computer Science departments. Luana’s work focuses on large-scale graph information processing and graph neural network architectures. She was awarded an Eiffel Excellence Scholarship from the French Ministry for Europe and Foreign Affairs between 2013 and 2015; was named an iREDEFINE fellow in 2019, an MIT EECS Rising Star in 2021, a Simons Research Fellow in 2022, and a METEOR fellow in 2023; and received best student paper awards at the 27th and 29th European Signal Processing Conferences. Luana is currently a member of the Machine Learning for Signal Processing Technical Committee of the IEEE Signal Processing Society.

Zhiyang Wang (zhiyangw@seas.upenn.edu) is a final-year Ph.D. candidate at the University of Pennsylvania in the Electrical and Systems Engineering Department, advised by Prof. Alejandro Ribeiro. Previously, she received her B.E. and M.E. degrees in 2016 and 2019, respectively, from the Department of Electronic Engineering and Information Science, University of Science and Technology of China. Her research interests include graph signal processing, geometric deep learning, manifold neural networks, and wireless communications. She received the best student paper award at the 29th European Signal Processing Conference and the Bruce Ford Memorial Fellowship at the University of Pennsylvania. She was named a Rising Star in EECS and in Signal Processing in 2023, and a Rising Star in Data Science in 2024.