Lecture 1: Machine Learning on Graphs (9/5 – 9/8)

Graph Neural Networks (GNNs) are tools with broad applicability and very interesting properties. There is a lot that can be done with them and a lot to learn about them. In this first lecture we go over the goals of the course and explain why we should care about GNNs. We also offer a preview of what is to come. We discuss the importance of leveraging structure in scalable learning and how convolutions do that for signals in Euclidean space. We further explain how to generalize convolutions to graphs, and the consequent generalization of convolutional neural networks to graph (convolutional) neural networks.

Handout.

Script.

Access full lecture playlist.

Video 1.1 – Graph Neural Networks

There are two objectives that I expect we can accomplish together in this course. You will learn how to use GNNs in practical applications. That is, you will develop the ability to formulate machine learning problems on graphs using graph neural networks. You will learn to train them, and you will learn to evaluate them. But you will also learn that you cannot use them blindly. You will learn the fundamental principles that explain their good empirical performance. This knowledge will allow you to identify the cases where GNNs are applicable and the cases where they are not.

• Covers Slides 1-5 in the handout.

Video 1.2 – Machine Learning on Graphs: The Why

We care about GNNs because they enable machine learning on graphs. But why should we care about machine learning on graphs? We dwell here on the whys of machine learning on graphs. Why is it interesting? Why do we care? The reason is simple: graphs are pervasive in information processing.

• Covers Slides 6-10 in the handout.

Video 1.3 – Machine Learning on Graphs: The How

Having discussed the why, we tackle the how. How do we do machine learning on graphs? The answer to this question is pretty easy: we should use a neural network. We should do this because we have overwhelming empirical and theoretical evidence for the value of neural networks. Understanding this evidence is one of the objectives of this course. But before we are ready to do that, there is a dealbreaker challenge potentially lurking in the shadows: neural networks must exploit structure to be scalable. The sketch below makes this point concrete.

• Covers Slides 11-13 in the handout.
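
To see why structure matters for scale, here is a minimal back-of-the-envelope sketch. It is not from the handout, and the dimensions are made up for illustration; it compares the parameter count of a generic dense linear layer with that of a convolutional filter, whose coefficients are reused at every location:

```python
import numpy as np

n = 10_000   # number of entries in the input signal (assumed for illustration)
K = 8        # number of filter taps in a convolutional parametrization

# A generic linear layer learns a dense n-by-n matrix: n**2 parameters.
params_dense = n ** 2

# A convolutional filter reuses the same K coefficients at every location,
# so its parameter count is independent of the input dimension n.
params_conv = K

print(f"dense linear layer  : {params_dense:,} parameters")  # 100,000,000
print(f"convolutional filter: {params_conv:,} parameters")   # 8
```

The dense layer's cost grows quadratically with the input dimension, while the convolution's parameter count stays fixed. Exploiting this kind of structure is what makes convolutional architectures scale.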

Video 1.4 – Convolutions in Time, in Space, and on Graphs

Our intellectual path towards scalable machine learning on graphs begins with generalizing the convolution operator to signals supported on graphs. We will build this generalization by observing that, even though we do not often think of them as such, convolutions are operations on graphs. The sketch below illustrates this construction.

• Covers Slides 14-19 in the handout.
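
As a concrete preview of this construction, the following minimal numpy sketch (an illustration under assumed inputs, not code from the course) implements a graph convolution as a polynomial in a graph shift operator S. Choosing S to be the adjacency matrix of a directed cycle recovers ordinary circular time convolution, which is the sense in which convolutions are already operations on graphs:

```python
import numpy as np

def graph_convolution(h, S, x):
    """Graph convolution: y = sum_k h[k] * S^k @ x.

    h : filter coefficients (K taps)
    S : graph shift operator (e.g., an adjacency or Laplacian matrix)
    x : graph signal, one value per node
    """
    y = np.zeros_like(x, dtype=float)
    Sk_x = x.astype(float)           # S^0 x = x
    for hk in h:
        y += hk * Sk_x
        Sk_x = S @ Sk_x              # advance to S^(k+1) x
    return y

# The adjacency matrix of a directed cycle is the time-shift operator,
# so the same code computes an ordinary (circular) time convolution.
n = 5
S_cycle = np.roll(np.eye(n), 1, axis=0)   # cyclic shift matrix
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
h = [0.5, 0.25, 0.25]                      # assumed example coefficients
print(graph_convolution(h, S_cycle, x))    # [0.5, 0.25, 0.25, 0.0, 0.0]
```

The only thing that changes between time convolution and graph convolution is the choice of the shift operator S; the filtering recursion itself is identical.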

Video 1.5 – Convolutional and Graph Neural Networks

To enable machine learning on graphs, we constructed an intellectual roadmap that began with a generalization of convolutions to graphs and continued with a generalization of convolutional neural networks to graph neural networks. We have completed the first part of the roadmap. The second part of the roadmap is easier, because CNNs and GNNs are minor variations of linear convolutional filters: we just need to add pointwise nonlinearities and compose several layers.

• Covers Slides 20-26 in the handout.
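
The following sketch (again an assumed toy example, not course code) shows how little separates a linear graph filter from a GNN: each layer applies a graph convolutional filter followed by a pointwise nonlinearity, and layers are composed:

```python
import numpy as np

def graph_filter(h, S, x):
    """Linear graph convolutional filter: y = sum_k h[k] * S^k @ x."""
    y, Sk_x = np.zeros_like(x, dtype=float), x.astype(float)
    for hk in h:
        y += hk * Sk_x
        Sk_x = S @ Sk_x
    return y

def gnn_forward(layers, S, x):
    """A single-feature GNN: each layer is a graph filter followed by
    a pointwise nonlinearity (ReLU here)."""
    for h in layers:
        x = np.maximum(graph_filter(h, S, x), 0.0)   # pointwise ReLU
    return x

# Assumed toy example: a 4-node undirected path graph and random filter taps.
S = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
layers = [rng.normal(size=3) for _ in range(2)]      # two layers, 3 taps each
x = rng.normal(size=4)
print(gnn_forward(layers, S, x))
```

Replacing S with a cyclic shift matrix turns this same forward pass into a (single-feature) CNN, which is exactly the sense in which GNNs generalize CNNs.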

Video 1.6 – The Road Ahead

In today’s lecture I told you a lot about architectures. I defined convolutions in time and convolutions on graphs. And I explained how these convolutions can be used to construct CNNs and GNNs, which are the basis for scalable machine learning. This was just to give you a taste. There is a lot more that we will investigate in this course.

• Covers Slides 27-31 in the handout.