In this course we will use the Python programming language, the PyTorch machine learning library, and the Alelab Graph Neural Network Library. In Lab 3 we are going to explain how to use the Alelab GNN library. We assume from the start that you are familiar with Python and that you have succeeded in installing PyTorch. You do not need to be familiar with PyTorch itself; we will go over the use of this library in Labs 1 and 2.
For those of you who are not familiar with Python and need to install Python and PyTorch, please refer to this guide:
Lab 1: Empirical Risk Minimization
We formulate Artificial Intelligence (AI) as the extraction of information from observations. In nature, observations and information are related by a probability distribution. An AI is a function that, when given an input, predicts the value that nature is likely to have generated according to that distribution. Mathematically, this is formulated as the minimization of a risk that measures the difference between natural outputs and the outputs predicted by the AI. These risks can be statistical, when models are available, or empirical, when data is available but models are unknown.
Empirical risk minimization seems to do away with models. This is true to some extent, but not as true as we would like it to be.
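To make the idea concrete, here is a minimal sketch of empirical risk minimization. All specifics are illustrative assumptions: we posit a linear relationship between inputs and outputs (unknown to the learner), define the empirical risk as the average squared error over observed samples, and minimize it with plain gradient descent.

```python
import numpy as np

# Hypothetical setup: nature relates inputs X and outputs Y through a
# linear map (unknown to the learner) plus noise.
rng = np.random.default_rng(0)
n, p = 200, 5
A_true = rng.normal(size=(p, p))
X = rng.normal(size=(n, p))
Y = X @ A_true.T + 0.1 * rng.normal(size=(n, p))

def empirical_risk(A):
    """Average squared error between observed outputs and AI predictions."""
    return np.mean(np.sum((Y - X @ A.T) ** 2, axis=1))

# Minimize the empirical risk with gradient descent on the parameters A.
A = np.zeros((p, p))
step = 0.01
for _ in range(500):
    grad = -2 * (Y - X @ A.T).T @ X / n  # gradient of the empirical risk
    A = A - step * grad

print(f"empirical risk after training: {empirical_risk(A):.4f}")
```

The learned parametrization here is an arbitrary linear map; the labs replace it with graph filters and GNNs while keeping this same ERM template.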
Lab 2: Graph Filters and Neural Networks
This lab is our first approximation to learning with graph filters and graph neural networks (GNNs). You will learn how to train a graph filter and a GNN. You will also see evidence that the following three facts hold:
(F1) Graph filters produce better learning results than arbitrary linear parametrizations and GNNs produce better results than arbitrary (fully connected) neural networks.
(F2) GNNs work better than graph filters.
(F3) A GNN that is trained on a graph with a certain number of nodes can be executed on a graph with a larger number of nodes and still produce good estimates.
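A graph filter, the building block behind these facts, is a polynomial in a graph shift operator applied to a graph signal. The following sketch, with an arbitrary random graph as a stand-in example, shows the computation y = Σ_k h[k] S^k x; note the filter has only K taps, far fewer parameters than the N² of an arbitrary linear transformation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8  # number of nodes

# Hypothetical shift operator: adjacency matrix of a random undirected graph.
S = (rng.random((N, N)) < 0.3).astype(float)
S = np.triu(S, 1)
S = S + S.T

def graph_filter(h, S, x):
    """Apply the polynomial graph filter y = sum_k h[k] * S^k x."""
    y = np.zeros_like(x)
    Skx = x.copy()        # S^0 x
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx     # next power of S applied to x
    return y

x = rng.normal(size=N)          # a graph signal: one value per node
h = np.array([1.0, 0.5, 0.25])  # K = 3 filter taps
y = graph_filter(h, S, x)

# Sanity check: a filter with a single tap of 1 is the identity.
assert np.allclose(graph_filter(np.array([1.0]), S, x), x)
```

A GNN is obtained by composing layers of such filters with pointwise nonlinearities, which is where Fact (F2) enters.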
Facts (F1)-(F3) support advocacy for the use of GNNs. They also spark three interesting questions:
(Q1) Why do graph filters and GNNs outperform linear transformations and fully connected neural networks?
(Q2) Why do GNNs outperform graph filters?
(Q3) Why do GNNs transfer to networks with different numbers of nodes?
We will spend a sizable chunk of this course endeavoring to answer Questions (Q1)-(Q3).
Throughout the lab we use source localization as an example problem. This problem uses synthetic data that we generate so as to work in a controlled environment. We will soon repeat this lab in a recommendation system using real data, where we will rediscover Facts (F1)-(F3) and reintroduce Questions (Q1)-(Q3). Working with real data is messier and better relegated to a second experience.
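As an illustration of what such controlled data could look like, the sketch below generates hypothetical source localization samples: a delta signal placed at a random source node is diffused over the graph for an unknown number of steps, and the learning task is to recover the source. The graph, diffusion model, and noise level are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10  # number of nodes

# Hypothetical graph: random undirected adjacency used as shift operator.
S = (rng.random((N, N)) < 0.3).astype(float)
S = np.triu(S, 1)
S = S + S.T

def sample(max_t=4):
    """One (signal, label) pair: a delta at a random source, diffused t times."""
    source = rng.integers(N)           # label: the node that originated the signal
    x = np.zeros(N)
    x[source] = 1.0                    # delta at the source node
    t = rng.integers(1, max_t + 1)     # unknown diffusion time
    for _ in range(t):
        x = S @ x + 0.01 * rng.normal(size=N)  # diffuse one step, add noise
    return x, source

# A small synthetic training set in a controlled environment.
signals, labels = zip(*(sample() for _ in range(100)))
```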
Lab 3: Recommendation Systems
In a recommendation system, we want to predict the ratings that customers would give to a certain product using the product’s rating history and the ratings that these customers have given to similar products. Collaborative filtering solutions build a graph of product similarities using past ratings and consider the ratings of individual customers as graph signals supported on the nodes of the product graph. The underlying assumption is that there exists an underlying set of true ratings or scores, but that we only observe a subset of those scores. The scores that are not observed can be estimated from the scores that have been observed. This problem can thus be seen as an ERM problem, and our goal will be to compare the ability of several learning parametrizations to solve it.
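A minimal sketch of the collaborative filtering setup just described, under illustrative assumptions: missing ratings are encoded as zeros, and product similarity is measured by the correlation of rating columns over users who rated both products. The similarity measure and data are hypothetical stand-ins, not the lab's actual dataset.

```python
import numpy as np

rng = np.random.default_rng(3)
n_users, n_products = 50, 8

# Hypothetical ratings in 1..5, with unobserved entries encoded as 0.
R = rng.integers(1, 6, size=(n_users, n_products)).astype(float)
R[rng.random(R.shape) < 0.6] = 0.0  # only a subset of true scores is observed

# Product-similarity graph: correlation between product rating columns,
# computed over the users that rated both products.
S = np.zeros((n_products, n_products))
for i in range(n_products):
    for j in range(i + 1, n_products):
        both = (R[:, i] > 0) & (R[:, j] > 0)
        if both.sum() > 1:
            c = np.corrcoef(R[both, i], R[both, j])[0, 1]
            if not np.isnan(c):
                S[i, j] = S[j, i] = c

# Each user's partially observed rating vector is a graph signal supported
# on the product graph; the learning task is to fill in the zero entries.
x = R[0]
```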
Lab 4: Resource Allocation in Wireless Communication Networks
In a wireless communication system, information is transmitted in the form of an electromagnetic signal between a source and a destination. The feature that determines the quality of a communication link is the signal-to-noise ratio (SNR). In turn, the SNR of a channel is the result of multiplying the transmit power of the source by a propagation loss. This propagation loss is what we call a wireless channel. The problem of allocating resources in wireless communications is the problem of choosing transmit powers as a function of channel strength with the goal of optimizing some performance metric that is of interest to end users. Mathematically, the allocation of resources in wireless communications results in constrained optimization problems. Thus, in principle at least, all that is required to find an optimal allocation of resources is to solve these problems. This is, however, impossible except in a few exceptional cases. In this lab we will explore the use of graph neural networks (GNNs) to find approximate solutions to the optimal allocation of resources in wireless communication systems.
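The ingredients above can be sketched numerically. This toy model is a simplifying assumption (independent interference-free links, a total power budget, sum capacity as the performance metric); real allocation problems couple links through interference, which is what makes them hard and graph-structured.

```python
import numpy as np

rng = np.random.default_rng(4)
n_links = 5

# Hypothetical channel model: SNR_i = p_i * h_i / sigma2, where p_i is the
# transmit power, h_i the propagation loss (the wireless channel), and
# sigma2 the noise power at the receiver.
h = rng.exponential(size=n_links)  # random fading channel realizations
sigma2 = 1.0

def sum_rate(p, h):
    """Performance metric: total capacity sum_i log2(1 + SNR_i)."""
    return np.sum(np.log2(1.0 + p * h / sigma2))

# Resource allocation: choose powers p as a function of the channels h,
# subject to a total power budget (the constraint in the optimization).
budget = 10.0
p_uniform = np.full(n_links, budget / n_links)  # a naive feasible allocation
print(f"sum rate of uniform allocation: {sum_rate(p_uniform, h):.3f}")
```

A GNN-based solution would replace the uniform rule with a learned map from channel realizations h to powers p.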
Lab 5: Distributed Collaborative Systems
Graph neural networks (GNNs) explore the irregular structure of graph signals, and exhibit superior performance in various applications of recommendation systems, wireless networks and control. A key property GNNs inherit from graph filters is the distributed implementation. This property makes them suitable candidates for distributed learning over large-scale networks, where global information is not available at individual agents. Each agent must decide its own actions from local observations and communication with immediate neighbors. In this lab assignment, we focus on the distributed learning with graph neural networks.