Lecture 11: Graph Recurrent Neural Networks

In this lecture, we will learn yet another type of neural network architecture: recurrent neural networks, which are particularly useful when the data exhibit a time dependency. We will begin the lecture by going over machine learning on sequences and showing the limitations and problems that arise when we do not take the time domain into account in our solutions. Next, we will delve into Recurrent Neural Networks, showing how they successfully exploit the history of the process. Then, we introduce their graph counterpart, the Graph Recurrent Neural Network (GRNN), and present a stability theorem for it. We finish the lecture with a numerical example on epidemic modeling.

Handout.

Script.

Proof of stability of GRNNs.

Access full lecture playlist.

Video 11.1 – Machine Learning on Sequences

In this lecture, we will go over the problems that arise when we want to learn from a sequence. The main idea is that we cannot store the entire history of states. The lecture starts by introducing a sequence and showing its time dependence; it then shows how to make predictions from it and, finally, how to estimate its state without requiring unbounded memory growth.

• Covers Slides 1-5 in the handout.
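
The bounded-memory idea can be made concrete with a small sketch. The example below (our own illustration, not taken from the handout) contrasts a predictor that must store the entire history with a recursive one that summarizes it in a fixed-size state; both compute the same running-mean prediction, but only the second uses constant memory.

    import numpy as np

    def predict_full_history(history):
        # Needs every past sample: memory grows without bound over time.
        return np.mean(history)

    def predict_recursive(state, x_t, t):
        # A single scalar state summarizes the history: the running mean
        # is updated recursively, so memory stays constant.
        state = state + (x_t - state) / (t + 1)
        return state, state  # (new state, prediction)

    x = np.random.randn(1000)
    state = 0.0
    for t, x_t in enumerate(x):
        state, y_rec = predict_recursive(state, x_t, t)
    print(np.isclose(predict_full_history(x), y_rec))  # True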

Video 11.2 – Recurrent Neural Networks

In this lecture, we present Recurrent Neural Networks (RNNs), an information processing architecture that we use to learn processes that are not Markov, that is, processes in which knowing the history of the process helps in learning. The problem is to make predictions from observed data when the true state of the system is unknown.

• Covers Slides 6-12 in the handout.
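
As a rough sketch of this recursion (the symbols A, B, C and the nonlinearities below are generic placeholders, not necessarily the notation of the handout), an RNN keeps a hidden state z_t of fixed size that summarizes the past and is updated with every new input x_t:

    import numpy as np

    def rnn_step(x_t, z_prev, A, B, C):
        # State update: mix the new input with the previous hidden state.
        z_t = np.tanh(A @ x_t + B @ z_prev)
        # Readout: the prediction depends only on the bounded-size state.
        y_t = C @ z_t
        return z_t, y_t

    # Toy usage: scalar inputs, 4-dimensional hidden state.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 1))
    B = rng.normal(size=(4, 4))
    C = rng.normal(size=(1, 4))
    z = np.zeros(4)
    for x_t in rng.normal(size=(10, 1)):
        z, y = rnn_step(x_t, z, A, B, C)

Because the state is reused at every step, the same parameters are applied across the whole sequence, which is what lets the architecture exploit the history without storing it.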

Video 11.3 – Time Gating

In this lecture, we will explore one of the flavors of RNNs that is most common in practice. Because we train with backpropagation, the vanishing gradient problem arises. To address it, we present several solutions; one of them is the gating mechanism, from which we obtain the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU).

• Covers Slides 13-19 in the handout.
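
As an illustration of the gating mechanism, here is one common way of writing a GRU cell (conventions vary across references; the weight names are generic and this is a sketch rather than the exact formulation in the handout):

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
        u = sigmoid(Wz @ x_t + Uz @ h_prev)               # update gate
        r = sigmoid(Wr @ x_t + Ur @ h_prev)               # reset gate
        h_tilde = np.tanh(Wh @ x_t + Uh @ (r * h_prev))   # candidate state
        # The update gate decides how much of the old state is kept,
        # which helps gradients survive over long sequences.
        return (1.0 - u) * h_prev + u * h_tilde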

Video 11.4 – Graph Recurrent Neural Networks

In this lecture, we present Graph Recurrent Neural Networks. We define GRNNs as particular cases of RNNs in which the signals at each point in time are supported on a graph. We will show how to construct a GRNN, going over each part of the architecture in detail.

• Covers Slides 20-24 in the handout.
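
A schematic way to see the construction, under the assumption that the linear maps in the recursion are graph convolutional filters, i.e., polynomials on the graph shift operator S (the coefficient values below are illustrative):

    import numpy as np

    def graph_filter(S, X, coeffs):
        # Graph convolution: a polynomial on the shift operator S applied to X.
        Y, Sk = 0.0, np.eye(S.shape[0])
        for a_k in coeffs:
            Y = Y + a_k * (Sk @ X)
            Sk = Sk @ S
        return Y

    def grnn_step(S, x_t, z_prev, a_coeffs, b_coeffs):
        # Same recursion as an RNN, but the linear maps are graph filters,
        # so the hidden state is itself a signal supported on the graph.
        return np.tanh(graph_filter(S, x_t, a_coeffs)
                       + graph_filter(S, z_prev, b_coeffs))

    # Toy usage on a random symmetric 5-node graph.
    rng = np.random.default_rng(1)
    S = rng.random((5, 5)); S = (S + S.T) / 2
    z = np.zeros(5)
    for x_t in rng.normal(size=(8, 5)):
        z = grnn_step(S, x_t, z, a_coeffs=[0.5, 0.3, 0.1], b_coeffs=[0.4, 0.2])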

Video 11.5 – Spatial Gating

In this lecture, we come back to gating, this time in its spatial form. We discuss long-range graph dependencies and the issue of vanishing/exploding gradients, and we introduce spatial gating strategies, namely node and edge gating, to address it.

• Covers Slides 25-32 in the handout.
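
The node-gating idea can be sketched very schematically (the gate parametrization below is illustrative only and may differ from the one in the handout): each node gets a value in [0, 1] that modulates how much of its past state is propagated over the graph.

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def node_gated_step(S, x_t, z_prev, a, b, w):
        # Illustrative node gate: one value in [0, 1] per node,
        # computed here from that node's own state.
        q = sigmoid(w * z_prev)
        # The gate attenuates each node's past state before it is diffused
        # over the graph, controlling long-range dependencies.
        return np.tanh(a * (S @ x_t) + b * (S @ (q * z_prev)))

Edge gating follows the same logic but, roughly speaking, assigns a gate to each edge of the graph rather than to each node, modulating how much information flows along that edge.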

Video 11.6 – Stability of GRNNs

In this lecture, we discuss the stability of GRNNs. In particular, we show that, since GRNNs can be seen as a time extension of traditional GNNs, they inherit their stability properties. The proof is similar to others that we have seen in the course. The proof of stability for GRNNs can be found here.

• Covers Slides 33-38 in the handout.
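
Schematically, and without reproducing the precise constants or hypotheses of the handout, stability results of this kind state that if \hat{S} is a perturbation of the shift operator S of size at most \varepsilon, then the GRNN output changes proportionally to \varepsilon:

    \| \Phi(X; \hat{S}, \mathcal{H}) - \Phi(X; S, \mathcal{H}) \|
        \le C \, \varepsilon \, \| X \| + \mathcal{O}(\varepsilon^2),

where the constant C depends on properties of the filters involved and on the length of the sequence. The precise statement and its proof are in the document linked above.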

Video 11.7 – Epidemic Modeling with GRNNs

In this lecture, we explore an application of GRNNs, epidemic modeling, and compare them with GNNs and RNNs. To model the epidemic, we use a real-world dataset from a high school in France. We construct the graph from the students' friendship network, and we model the epidemic spread with the SIR model. The numerical example shows that the GRNN outperforms both RNNs and GNNs, which is consistent with the fact that the GRNN exploits both the spatial and the temporal structure of the data.

• Covers Slides 39-45 in the handout.
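
As a sketch of the data-generation idea (the parameters and the function below are illustrative, not the ones used in the lecture), a discrete-time SIR epidemic on a contact graph can be simulated by letting infected nodes infect each susceptible neighbor with probability beta at every step and recover with probability gamma; the resulting node-state sequences are what the GRNN learns to predict.

    import numpy as np

    def simulate_sir(adj, beta=0.3, gamma=0.1, steps=30, seed=0):
        # Node states: 0 = susceptible, 1 = infected, 2 = recovered.
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        state = np.zeros(n, dtype=int)
        state[rng.integers(n)] = 1          # one random initial infection
        history = [state.copy()]
        for _ in range(steps):
            infected = (state == 1)
            # A susceptible node escapes infection only if every infected
            # neighbor fails to transmit: p = 1 - (1 - beta)^(# infected neighbors).
            p_inf = 1.0 - (1.0 - beta) ** (adj @ infected)
            new_inf = (state == 0) & (rng.random(n) < p_inf)
            new_rec = infected & (rng.random(n) < gamma)
            state[new_inf] = 1
            state[new_rec] = 2
            history.append(state.copy())
        return np.array(history)            # shape: (steps + 1, n)

Each row of the returned history is a graph signal, so the sequence can be fed to a GRNN that predicts, for instance, which nodes will be infected at the next time step.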