Lecture 4: Graph Neural Networks (9/28 – 10/2)

This lecture is devoted to the introduction of graph neural networks (GNNs). We start from graph filters and build graph perceptrons by composing them with pointwise nonlinearities. We stack graph perceptrons to construct GNNs. These simple GNN architectures are then expanded with filter banks, as well as multiple-input-multiple-output (MIMO) graph filters, to produce multiple-feature GNNs. Multiple-feature GNNs are the workhorse of machine learning on graphs.

Handout.

Script.

Access full lecture playlist.


Video 4.1 – Learning with Graph Signals

We begin with some reminders about empirical risk minimization and introduce the problem of learning with graph signals. To learn with graph signals we need to choose an appropriate parametrization, and graph filters look like the right place to start our search.


• Covers Slides 1-5 in the handout.
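As a minimal numerical sketch of this idea (not from the handout; all names and values here are illustrative), we can parametrize the learned map as a graph filter, a polynomial in the shift operator, and fit its coefficients by empirical risk minimization with a quadratic loss, which reduces to least squares:

```python
import numpy as np

def graph_filter(S, h, x):
    """Apply a graph filter: y = sum_k h[k] S^k x."""
    y = np.zeros_like(x)
    z = x.copy()
    for hk in h:
        y += hk * z
        z = S @ z
    return y

# Hypothetical toy problem: recover filter taps from input/output pairs.
rng = np.random.default_rng(0)
n, K = 5, 3
A = rng.random((n, n))
S = (A + A.T) / 2                 # a symmetric graph shift operator
x = rng.random(n)
h_true = np.array([1.0, 0.5, 0.25])
y = graph_filter(S, h_true, x)

# ERM with quadratic loss over the K filter taps: columns of A_mat are S^k x.
A_mat = np.column_stack([np.linalg.matrix_power(S, k) @ x for k in range(K)])
h_hat, *_ = np.linalg.lstsq(A_mat, y, rcond=None)
```

The key point is that the number of learnable parameters is K (the filter taps), independent of the number of nodes n.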


Video 4.2 – Graph Neural Networks (GNNs)

This is the momentous video where we introduce GNNs. We extend graph filters into graph perceptrons by composition with pointwise nonlinearities. Stacking perceptrons in layers leads to the construction of GNNs.


• Covers Slides 6-11 in the handout.
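The two constructions in this video can be sketched in a few lines of numpy (a hedged illustration, not the handout's notation): a graph perceptron composes a graph filter with a pointwise nonlinearity, and a GNN stacks perceptrons in layers.

```python
import numpy as np

def graph_filter(S, h, x):
    """y = sum_k h[k] S^k x."""
    y = np.zeros_like(x)
    z = x.copy()
    for hk in h:
        y += hk * z
        z = S @ z
    return y

def graph_perceptron(S, h, x):
    """A graph perceptron: a graph filter composed with a pointwise ReLU."""
    return np.maximum(graph_filter(S, h, x), 0.0)

def gnn(S, taps, x):
    """A GNN: stacked graph perceptrons; taps[l] holds layer l's filter coefficients."""
    for h in taps:
        x = graph_perceptron(S, h, x)
    return x

rng = np.random.default_rng(0)
n = 6
S = rng.random((n, n))
out = gnn(S, [rng.standard_normal(4), rng.standard_normal(4)], rng.standard_normal(n))
```

In practice the taps of every layer are trained jointly, but the forward pass is exactly this composition.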


Video 4.3 – Observations about GNNs

Several observations are in order. We discuss the similarity of graph filters and GNNs, the transferability of GNNs across different graphs, and how to recover CNNs as particular cases of GNNs.


• Covers Slides 12-17 in the handout.
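The CNN connection can be checked numerically (a small sketch under the usual assumption that the shift operator of a directed cycle is a time delay): on the cycle graph, a graph filter reduces to a circular convolution, which is the building block of a CNN.

```python
import numpy as np

n = 6
S = np.roll(np.eye(n), 1, axis=0)   # directed cycle: (S x)[i] = x[i-1], a time shift
h = np.array([0.5, 0.3, 0.2])
x = np.arange(n, dtype=float)

# Graph filter on the cycle graph
y_graph = sum(hk * np.linalg.matrix_power(S, k) @ x for k, hk in enumerate(h))

# The same operation written as a circular convolution
y_conv = np.array([sum(h[k] * x[(i - k) % n] for k in range(len(h)))
                   for i in range(n)])
```

The two outputs coincide, which is the sense in which CNNs are GNNs on a particular graph.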


Video 4.4 – Fully Connected Neural Networks (FCNNs)

Among the many interesting things we have said about graph neural networks is that they generalize convolutional neural networks (CNNs). A somewhat converse perspective is that GNNs are particular cases of fully connected neural networks (FCNNs). FCNNs are introduced in this video.


• Covers Slides 18-23 in the handout.
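A hedged sketch of the FCNN forward pass (illustrative shapes, not the handout's notation): each layer applies an arbitrary linear transformation followed by a pointwise nonlinearity.

```python
import numpy as np

def fcnn(weights, x):
    """Fully connected network: each layer applies an arbitrary linear map, then a ReLU."""
    for W in weights:
        x = np.maximum(W @ x, 0.0)
    return x

rng = np.random.default_rng(1)
layers = [rng.standard_normal((8, 5)),   # 5 inputs -> 8 hidden units
          rng.standard_normal((3, 8))]   # 8 hidden -> 3 outputs
out = fcnn(layers, rng.standard_normal(5))
```

Note that the weight matrices are unconstrained; a GNN is recovered by restricting each of them to be a graph filter.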


Video 4.5 – GNNs vs FCNNs

Graph neural networks and fully connected neural networks have very similar architectures. Both use layers composed of linear transformations and pointwise nonlinearities. The difference is that fully connected neural networks utilize arbitrary linear transformations, whereas graph neural networks rely on graph filters. An important question is which of these two architectures we expect to work better.


• Covers Slides 24-28 in the handout.
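One concrete term in this comparison is the parameter count. A back-of-the-envelope calculation (illustrative numbers only): an FCNN layer on n-node signals has n² weights, while a graph filter layer has only K taps, independent of n.

```python
n = 1000   # number of nodes
K = 8      # filter taps per layer
L = 3      # number of layers

fcnn_params = L * n * n   # each layer: an arbitrary n x n matrix
gnn_params = L * K        # each layer: K filter coefficients
```

The FCNN is more expressive, but the GNN's drastically smaller parametrization, tied to the graph, is what makes it learnable from limited data.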


Video 4.6 – Graph Filter Banks

Filters isolate signal features. In problems where we foresee multiple features of interest, we use filter banks. We study filter banks in the GFT domain and explain how they are used to scatter energy across different potential signatures.


• Covers Slides 29-38 in the handout.
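A hedged numpy sketch of a filter bank and its GFT-domain description (illustrative names; assumes a symmetric shift operator so the GFT is orthonormal): a bank of F filters maps one input signal into F feature signals, and in the GFT domain each filter acts by pointwise multiplication with its frequency response.

```python
import numpy as np

def graph_filter(S, h, x):
    """y = sum_k h[k] S^k x."""
    y = np.zeros_like(x)
    z = x.copy()
    for hk in h:
        y += hk * z
        z = S @ z
    return y

def filter_bank(S, H, x):
    """Run x through a bank of F filters (the rows of H), yielding F feature signals."""
    return np.stack([graph_filter(S, h, x) for h in H])

rng = np.random.default_rng(0)
n, F, K = 5, 3, 4
A = rng.random((n, n))
S = (A + A.T) / 2                      # symmetric shift operator
H = rng.standard_normal((F, K))        # one row of taps per filter
x = rng.standard_normal(n)
Y = filter_bank(S, H, x)               # F x n array of features

# GFT-domain view: each filter multiplies x_hat by its frequency response h(lambda).
lam, V = np.linalg.eigh(S)
x_hat = V.T @ x
resp = np.array([[np.polyval(h[::-1], l) for l in lam] for h in H])
Y_hat = resp * x_hat                   # filtering is pointwise in the GFT domain
```

The rows of `Y_hat` show how each filter concentrates the signal's energy on a different band of graph frequencies.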


Video 4.7 – Multiple Feature GNNs

We defined GNNs by leveraging graph filters. Now that we have defined filter banks, we use them to define GNNs that process multiple features per layer.


• Covers Slides 39-46 in the handout.
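A minimal sketch of a multiple-feature GNN layer (illustrative shapes, not the handout's notation): a MIMO graph filter maps an n x F_in feature matrix to an n x F_out feature matrix by mixing features with one coefficient matrix per shift, and the layer follows it with a pointwise nonlinearity.

```python
import numpy as np

def mimo_graph_filter(S, H, X):
    """MIMO graph filter: Y = sum_k S^k X H[k].
    X is n x F_in; H has shape K x F_in x F_out."""
    Y = np.zeros((X.shape[0], H.shape[2]))
    Z = X.copy()
    for Hk in H:
        Y += Z @ Hk
        Z = S @ Z
    return Y

def gnn_layer(S, H, X):
    """A multiple-feature GNN layer: a MIMO graph filter followed by a pointwise ReLU."""
    return np.maximum(mimo_graph_filter(S, H, X), 0.0)

rng = np.random.default_rng(0)
n, K, f_in, f_out = 6, 3, 2, 4
S = rng.random((n, n))
X = rng.standard_normal((n, f_in))
H = rng.standard_normal((K, f_in, f_out))
Y = gnn_layer(S, H, X)   # n x f_out matrix of output features
```

Stacking such layers, with the feature dimensions growing and shrinking across the network, gives the multiple-feature GNNs described in this video.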