Further Reading

This course is based on the work on graph neural networks (GNNs) undertaken at Alelab, the signal processing lab at Penn. For a high-level discussion of our ideas, you can check this tutorial article, which is to appear in the IEEE Signal Processing Magazine. For a more comprehensive discussion, this extended review submitted to the Proceedings of the IEEE is worth reading.

To dig deeper into the theoretical foundations of GNNs, our paper on the stability properties of GNNs is the place to start. This seminal paper established the stability of graph filters and GNNs to additive and relative perturbations of shift operators. It laid the groundwork for understanding stability vs. discriminability tradeoffs and offered the first theoretical explanation of why GNNs are expected to outperform graph filters.
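
To give a flavor of these results, here is a schematic statement with constants and eigenvector misalignment terms simplified away; see the paper for the precise theorem. A relative perturbation of a shift operator \(S\) takes the form

\[
\hat{S} = S + ES + SE, \qquad \|E\| \le \varepsilon,
\]

and a graph filter \(H(S)\) whose frequency response \(h(\lambda)\) is integral Lipschitz, i.e., satisfies \(|\lambda\, h'(\lambda)| \le C\), obeys a bound of the form

\[
\big\| H(\hat{S}) - H(S) \big\| \le \mathcal{O}(C\,\varepsilon).
\]

The integral Lipschitz condition forces filters to flatten out at large eigenvalues, which is where the stability vs. discriminability tradeoff comes from; the pointwise nonlinearities in a GNN scatter spectral content toward lower frequencies, which is how GNNs recover discriminability while retaining stability.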

After reading about stability, we recommend that you check our work on transferability, which builds on the theory of graphon signal processing.
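
As a rough sketch of that framework (our notation, not a precise statement): a graphon is a bounded, symmetric, measurable function

\[
W : [0,1]^2 \to [0,1],
\]

which can be thought of as the limit object of a sequence of dense graphs. Graphs on \(n\) nodes can be sampled from \(W\), and filters and GNNs can be defined directly on \(W\) itself. Transferability results bound the difference between the output of a GNN executed on a large sampled graph and the output of the corresponding graphon neural network, with the bound vanishing as \(n\) grows. This is what justifies training a GNN on a graph of moderate size and deploying it on a different, larger graph from the same family.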

Other interesting theoretical works include studies of more advanced GNN architectures, such as EdgeNets and stochastic GNNs, as well as variations of conventional GNNs that adopt different pooling strategies or replace pointwise activation functions with localized activation functions.

On the practical side, this course comprises six labs involving applications of GNNs and GRNNs. These labs are based on recent papers that have successfully applied GNNs to recommendation systems, decentralized control via imitation and reinforcement learning, wireless networks, and robotics. The latter three are areas that are quickly gaining traction but in which research opportunities still abound.

You could also check the recordings of a tutorial we gave at ICASSP, available on the website of the Center on the Foundations of Information Processing at Penn (Finpenn).