# Lab 4: Resource Allocation in Wireless Communication Networks (10/19-11/6)

In a wireless communication system, information is transmitted in the form of an electromagnetic signal between a source and a destination. The feature that determines the quality of a communication link is the signal-to-noise ratio (SNR). In turn, the SNR of a channel is the product of the transmit power of the source and a propagation loss; this propagation loss is what we call a wireless channel. The problem of allocating resources in wireless communications is the problem of choosing transmit powers as a function of channel strength with the goal of optimizing some performance metric that is of interest to end users. Mathematically, the allocation of resources in wireless communications results in constrained optimization problems. Thus, in principle at least, all that is required to find an optimal allocation of resources is to solve these problems. This is, however, intractable except in a few special cases. In this lab we will explore the use of graph neural networks (GNNs) to find approximate solutions to the optimal allocation of resources in wireless communication systems.
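As an illustration of this relationship, the received SNR can be sketched as the transmit power scaled by the propagation loss, divided by the noise power. The numbers below are illustrative and not the lab's parameters:

```python
def snr(p_transmit, pathloss, n0):
    """Received SNR: transmit power scaled by the propagation loss, over the noise power."""
    return p_transmit * pathloss / n0

# Illustrative numbers: a 50 mW transmitter, a propagation loss of 1e-4,
# and a noise floor of 1e-6 W.
print(snr(0.05, 1e-4, 1e-6))  # -> 5.0 (approximately, up to float rounding)
```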

### 1.1-1.2 Channel class and testing

To evaluate pathloss as equation (6) indicates, we create the Python function $\p{pathloss}$:


import numpy as np
import matplotlib.pyplot as plt
import torch
from scipy.spatial import distance_matrix
from tqdm import tqdm

def pathloss(d, d0, gamma):
    """
    Calculate the simplified path-loss model.

    Args:
        d: The distance. Can be a matrix; the operation is elementwise.
        d0: Reference distance.
        gamma: Path-loss exponent.

    Returns:
        The pathloss value.
    """
    return (d0 / d) ** gamma


Test the above function with $d$ varying between $1m$ and $100m$. Plot the pathloss on linear and logarithmic scales.

dist = np.arange(1, 101)
Eh = pathloss(dist, 1, 2.2)

# Plot in linear scale
plt.figure()
plt.plot(dist, Eh)
plt.ylabel('Pathloss')
plt.xlabel('Distance')
plt.savefig('Pathloss.pdf')
plt.show()

# Plot in logarithmic scale (distances start at 1m, so the log is well defined)
plt.figure()
plt.plot(np.log(dist), np.log(Eh))
plt.ylabel('log of Pathloss')
plt.xlabel('log of Distance')
plt.savefig('log-of-Pathloss.pdf')
plt.show()
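On the log-log plot the model is a straight line with slope $-\gamma$, so fitting the slope numerically is a quick check that $\p{pathloss}$ is implemented correctly. The function is repeated here so the snippet is self-contained:

```python
import numpy as np

def pathloss(d, d0, gamma):
    return (d0 / d) ** gamma

d = np.arange(1, 101)
# log(pathloss) = -gamma * log(d) + gamma * log(d0), a line with slope -gamma
slope, intercept = np.polyfit(np.log(d), np.log(pathloss(d, 1, 2.2)), 1)
print(round(slope, 6))  # -> -2.2
```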


### 1.3-1.4 Fading channel class and testing

Generate the channel $h$ based on equations (7) and (8) as:

def fading_channel(d, d0, gamma, s, Q):
    # Average channel gain from the pathloss model
    Eh = (d0 / d) ** gamma
    # Q independent exponential fading realizations, normalized to unit mean
    h_til = np.random.exponential(s, size=(1, Q))
    h = Eh * h_til / s
    return h


Test and plot with the distance between $1m$ and $100m$:

Q = 100
dist = np.arange(1, 101)
h_sim = np.zeros((100, Q))

for i, d in enumerate(dist):
    h_sim[i, :] = fading_channel(d, 1, 2.2, 2, Q)

h_mean = np.mean(h_sim, axis=1)
h_var = np.var(h_sim, axis=1)
plt.figure()
# Error bars show one standard deviation around the empirical mean
plt.errorbar(dist, h_mean, np.sqrt(h_var))
plt.ylabel('h')
plt.xlabel('Distance')
plt.show()
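Since the exponential draws are divided by $s$, the fading has unit mean, so the empirical average of $\p{fading\_channel}$ should recover the pathloss $E[h]$. A quick self-contained check:

```python
import numpy as np

np.random.seed(0)

# fading_channel from above: exponential fading normalized to unit mean,
# scaled by the pathloss.
def fading_channel(d, d0, gamma, s, Q):
    Eh = (d0 / d) ** gamma
    h_til = np.random.exponential(s, size=(1, Q))
    return Eh * h_til / s

h_samples = fading_channel(10.0, 1, 2.2, 2, 200000)
Eh = (1 / 10.0) ** 2.2
print(abs(h_samples.mean() / Eh - 1) < 0.05)  # sample mean close to pathloss -> True
```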


### 1.5-1.6 Capacity method and testing

Calculate capacity with equation (9):

def build_fading_capacity_channel(h, p):
    # Noise power
    N0 = 1e-6
    cap = np.log(1 + h * p / N0)
    return cap


Test and plot with the distance between $1m$ and $100m$:

cap = build_fading_capacity_channel(h_sim, 0.05)
cap_mean = np.mean(cap, axis=1)
cap_var = np.var(cap, axis=1)
dist = np.arange(1, 101)
plt.figure()
# Error bars show one standard deviation around the empirical mean
plt.errorbar(dist, cap_mean, np.sqrt(cap_var))
plt.ylabel('Capacity')
plt.xlabel('Distance')
plt.show()
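As a sanity check on equation (9), capacity grows with transmit power, but only logarithmically. Toy numbers, reproducing the function above:

```python
import numpy as np

def build_fading_capacity_channel(h, p):
    N0 = 1e-6
    return np.log(1 + h * p / N0)

# For a fixed channel gain, higher power always yields higher capacity,
# but the growth is logarithmic rather than linear.
h = 1e-4
powers = [0.01, 0.05, 0.25]
caps = [build_fading_capacity_channel(h, p) for p in powers]
print(all(c2 > c1 for c1, c2 in zip(caps, caps[1:])))  # -> True (monotone in p)
```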


### 2.1-2.2 Positioning and pathloss for wireless networks

First we define a function that takes the network dimensions and the number of nodes as input parameters and draws the transmitter and receiver positions.

def positions(n, wx, wy, wc):
    # Transmitters dropped uniformly in the wx x wy area
    t_pos = np.hstack(
        (np.random.uniform(0, wx, (n, 1)), np.random.uniform(0, wy, (n, 1))))
    # Each receiver placed uniformly within a disc of radius wc around its transmitter
    r_distance = np.random.uniform(0, wc, (n, 1))
    r_angle = np.random.uniform(0, 2 * np.pi, (n, 1))
    r_rel_pos = r_distance * np.hstack((np.cos(r_angle), np.sin(r_angle)))
    r_pos = t_pos + r_rel_pos
    return t_pos, r_pos
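Because the receiver offset is a radius-angle pair, every receiver should land within $w_c$ of its transmitter; a self-contained check of the geometry:

```python
import numpy as np

np.random.seed(0)

# positions() from above: transmitters uniform in the area, receivers uniform
# in a disc of radius wc around their transmitter.
def positions(n, wx, wy, wc):
    t_pos = np.hstack((np.random.uniform(0, wx, (n, 1)),
                       np.random.uniform(0, wy, (n, 1))))
    r_distance = np.random.uniform(0, wc, (n, 1))
    r_angle = np.random.uniform(0, 2 * np.pi, (n, 1))
    r_pos = t_pos + r_distance * np.hstack((np.cos(r_angle), np.sin(r_angle)))
    return t_pos, r_pos

t, r = positions(100, 200, 100, 50)
# Every transmitter-receiver distance is at most wc = 50m
print(np.all(np.linalg.norm(t - r, axis=1) <= 50))  # -> True
```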


Calculate the pathloss values from the transmitter-receiver distance matrix:

def pathloss_matrix(n, wx, wy, wc, d0, gamma):
    transmitter_pos, receiver_pos = positions(n, wx, wy, wc)
    D = distance_matrix(transmitter_pos, receiver_pos)
    return pathloss(D, d0, gamma)


### 2.3-2.4 Fading and capacity calculation

Generate the channel matrix from the pathloss and fading values. We also define $\p{normalize\_matrix}$ to help normalize the channel matrix, and compute the channel capacity in matrix form:

def channel_matrix(n, wx, wy, wc, d0, gamma, s, Q):
    H_1 = pathloss_matrix(n, wx, wy, wc, d0, gamma)
    # Q fading realizations, normalized to unit mean
    H_2 = np.random.exponential(s, (Q, n, n)) / s
    H = H_1 * H_2
    return H

def normalize_matrix(A):
    # Scale so that the largest real part among the eigenvalues equals one
    eigenvalues = np.linalg.eigvals(A)
    return A / np.max(eigenvalues.real)

def capacity(p, H, N0):
    # Direct links are on the diagonal; off-diagonal entries are interference
    num = torch.diagonal(H, dim1=-2, dim2=-1) * p
    den = H.matmul(p.unsqueeze(-1)).squeeze() - num + N0
    return torch.log(1 + num / den)
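A quick way to validate the interference form of $\p{capacity}$ is to feed it a diagonal channel matrix: with no off-diagonal interference, the rate must reduce to the single-link formula $\log(1 + h_{ii} p_i / N_0)$. A minimal sketch with toy numbers, not the lab's parameters:

```python
import torch

def capacity(p, H, N0):
    num = torch.diagonal(H, dim1=-2, dim2=-1) * p
    den = H.matmul(p.unsqueeze(-1)).squeeze() - num + N0
    return torch.log(1 + num / den)

# Diagonal H: no interference, so den reduces to N0 and the rate is the
# single-link capacity for each node.
H = torch.diag(torch.tensor([2.0, 3.0]))
p = torch.tensor([0.5, 1.0])
N0 = 1.0
expected = torch.log(1 + torch.tensor([2.0 * 0.5, 3.0 * 1.0]) / N0)
print(torch.allclose(capacity(p, H, N0), expected))  # -> True
```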



### 3.1 Unconstrained training

The input to the GNN in this application is a graph whose edges are generated from a random distribution, so in each training iteration we need to generate a new random graph. Therefore, we first construct a generator class:

class Generator:
    def __init__(self, n, wx, wy, wc, d0=1, gamma=2.2, s=2, N0=1, device="cpu", batch_size=64, random=False):
        # number of nodes in the network
        self.n = n
        # dimensions of the network
        self.wx = wx
        self.wy = wy
        self.wc = wc
        # parameters for pathloss and fading
        self.d0 = d0
        self.gamma = gamma
        self.s = s
        # noise power
        self.N0 = N0
        self.device = device
        self.batch_size = batch_size
        # if True, redraw node positions on every call
        self.random = random
        self.train = None
        self.test = None
        # pathloss matrix
        self.H1 = None
        self.generate_pathloss()

    def generate_pathloss(self):
        self.H1 = pathloss_matrix(self.n, self.wx, self.wy, self.wc, self.d0, self.gamma)

    def __next__(self):
        if self.random:
            self.generate_pathloss()
        # fading realizations, normalized to unit mean
        H2 = np.random.exponential(self.s, (self.batch_size, self.n, self.n)) / self.s
        # generate random channel matrix: pathloss times fading
        H = self.H1 * H2
        # normalization of the channel matrix to obtain the GSO
        S = normalize_matrix(H)
        H = torch.from_numpy(H).to(torch.float).to(self.device)
        S = torch.from_numpy(S).to(torch.float).to(self.device)
        return H, S
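The $\p{\_\_next\_\_}$ method relies on NumPy broadcasting to combine the fixed pathloss matrix with a whole batch of fading draws. A minimal standalone check of the shapes, with toy dimensions and a random $\p{H1}$ standing in for $\p{pathloss\_matrix}$:

```python
import numpy as np

# A fixed n x n pathloss matrix multiplies a batch of fading draws of shape
# (batch_size, n, n); broadcasting applies H1 to every batch element.
n, batch_size, s = 4, 8, 2
H1 = np.random.rand(n, n)                          # stand-in for pathloss_matrix
H2 = np.random.exponential(s, (batch_size, n, n)) / s
H = H1 * H2                                        # broadcasts H1 over the batch axis
print(H.shape)  # -> (8, 4, 4)
```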



This can be stored in the $\p{wireless.py}$ file to handle the wireless network measurements in each iteration. Furthermore, we need to define $\p{train}$ and $\p{test}$ functions to evaluate the strategy. The $\p{train}$ function takes the GNN model, the update rule and the network generator as inputs.

def train(model, update, generator, iterations):
    pbar = tqdm(range(iterations))
    for i in pbar:
        # for each iteration, generate a new random channel matrix
        H, S = next(generator)
        # get the corresponding allocation strategy
        p = model(S)
        # calculate the capacity as the performance under this allocation strategy
        c = capacity(p, H, generator.N0)
        # update the parameters of the model
        update(p, c)

        pbar.set_postfix({"capacity": c.mean().item(), "power": p.mean().item()})



For the $\p{test}$ function, we do not update the model; we only evaluate it on randomly generated networks and report the performance.


from typing import Callable

def test(model: Callable[[torch.Tensor], torch.Tensor], generator: Generator, iterations=100):
    powers = []
    capacities = []
    for i in tqdm(range(iterations), desc=f"Test for n={generator.n}"):
        H, S = next(generator)
        p = model(S)
        c = capacity(p, H, generator.N0)

        capacities.append(c.mean().item())
        powers.append(p.mean().item())

    print(f"Capacity mean: {np.mean(capacities):.4e}, variance {np.var(capacities):.4e} "
          f"| Power mean: {np.mean(powers):.2f} | variance {np.var(powers):.2f}")



With the GSO changing in each iteration, we also need to define a new GNN class that takes both the graph signal $\bbx$ and the GSO $\bbS$ as inputs.

We begin by defining the graph filter class in the $\p{graph.py}$ file.


from typing import List, Union, Tuple

import torch
from torch import nn

class GraphFilter(nn.Module):
    def __init__(self, k: int, f_in=1, f_out=1, f_edge=1):
        """
        A graph filter layer.

        Args:
            k: The number of filter taps.
            f_in: The number of input features.
            f_out: The number of output features.
            f_edge: The number of edge features.
        """
        super().__init__()
        self.k = k
        self.f_in = f_in
        self.f_out = f_out
        self.f_edge = f_edge

        # Filter taps and bias are registered as parameters, so nn.Module.to
        # moves them between devices automatically.
        self.weight = nn.Parameter(torch.ones(self.f_out, self.f_edge, self.k, self.f_in))
        self.bias = nn.Parameter(torch.zeros(self.f_out, 1))
        torch.nn.init.normal_(self.weight, 0.3, 0.1)
        torch.nn.init.zeros_(self.bias)

    def forward(self, x: torch.Tensor, S: torch.Tensor):
        batch_size = x.shape[0]

        B = batch_size
        E = self.f_edge
        F = self.f_out
        G = self.f_in
        N = S.shape[-1]  # number of nodes
        K = self.k       # number of filter taps

        h = self.weight
        b = self.bias

        # We have x in B x G x N and S in B x N x N, and we want a matrix
        # multiplication that yields z = x * S with shape B x E x K x G x N.
        # For this, we first add the corresponding dimensions.
        x = x.reshape([B, 1, G, N])
        S = S.reshape([B, E, N, N])
        z = x.reshape([B, 1, 1, G, N]).repeat(1, E, 1, 1, 1)  # This is for k = 0
        # We repeat along the E dimension because for k = 0, S_{e} = I for
        # all e, so the same signal values are used along all edge feature
        # dimensions.
        for k in range(1, K):
            x = torch.matmul(x, S)  # B x E x G x N
            xS = x.reshape([B, E, 1, G, N])  # B x E x 1 x G x N
            z = torch.cat((z, xS), dim=2)  # B x E x k x G x N
        # The output z is of size B x E x K x G x N.
        # Now that we have the x * S_{e}^{k} products, we multiply them with the
        # filter taps. We multiply z on the left and h on the right; the output
        # should be B x F x N (the multiplication is not along the N dimension),
        # so we permute z to B x N x E x K x G and reshape it to B x N x EKG
        # (reshape always collapses the last dimensions), reshape h from
        # F x E x K x G to EKG x F, and then multiply.
        y = torch.matmul(z.permute(0, 4, 1, 2, 3).reshape([B, N, E * K * G]),
                         h.reshape([F, E * K * G]).permute(1, 0)).permute(0, 2, 1)
        # Permute again to bring the result from B x N x F to B x F x N.
        # Finally, add the bias.
        if b is not None:
            y = y + b
        return y



Based on the above definitions, we can further define the GNN structure.


class GraphNeuralNetwork(nn.Module):
    def __init__(self,
                 ks: Union[List[int], Tuple[int]] = (5,),
                 fs: Union[List[int], Tuple[int]] = (1, 1)):
        """
        An L-layer graph neural network. Uses a ReLU activation for each layer
        except the last, which has no activation.

        Args:
            ks: [K_1, ..., K_L]. On the ith layer, K_i is the number of filter taps.
            fs: [F_1, ..., F_{L+1}]. On the ith layer, F_i and F_{i+1} are the
                number of input and output features, respectively.
        """
        super().__init__()
        self.n_layers = len(ks)

        self.layers = []
        for i in range(self.n_layers):
            f_in = fs[i]
            f_out = fs[i + 1]
            k = ks[i]
            gfl = GraphFilter(k, f_in, f_out)
            activation = torch.nn.ReLU() if i < self.n_layers - 1 else torch.nn.Identity()
            self.layers += [gfl, activation]
            self.add_module(f"gfl{i}", gfl)
            self.add_module(f"activation{i}", activation)

    def forward(self, x, S):
        # Alternate between graph filters (which need the GSO) and pointwise activations
        for i, layer in enumerate(self.layers):
            x = layer(x, S) if i % 2 == 0 else layer(x)
        return x
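A property worth keeping in mind for the stability and transference experiments below is that graph filters, and hence this GNN, are permutation equivariant: relabeling the nodes of both the signal and the GSO relabels the output the same way. A minimal sketch with a hand-rolled polynomial filter standing in for the $\p{GraphFilter}$ class (single feature, single edge feature):

```python
import numpy as np

# A minimal graph filter y = sum_k h_k S^k x. Relabeling the nodes of both the
# signal x and the GSO S relabels the output identically.
def graph_filter(x, S, taps):
    y = np.zeros_like(x)
    Skx = x.copy()
    for h in taps:
        y += h * Skx
        Skx = S @ Skx  # next power of S applied to x
    return y

np.random.seed(0)
n = 6
S = np.random.rand(n, n)
x = np.random.rand(n)
taps = [0.5, 0.3, 0.2]

P = np.eye(n)[np.random.permutation(n)]  # permutation matrix
y = graph_filter(x, S, taps)
y_perm = graph_filter(P @ x, P @ S @ P.T, taps)
print(np.allclose(y_perm, P @ y))  # -> True (permutation equivariance)
```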



With the wireless network and GNN in place, in the $\p{main.py}$ file we can first generate the wireless networks. The input dimensions are $w_x=200m$, $w_y=100m$, $w_c=50m$. The number of nodes $n$ follows from the density $\rho=\frac{n}{w_x w_y}=0.005$, which gives $n=100$. The wireless network can then be generated as:

import torch
import wireless
from graph import GraphNeuralNetwork

N0 = 1e-6
pmax = 1e-3
train_iterations = 200
test_iterations = 100
batch_size = 100

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Data generation (saves a lot of time)
generator_small = wireless.Generator(100, 200, 100, 50, device=device, N0=N0, batch_size=batch_size)



Then we customize the GNN model and define a class:

class Model(torch.nn.Module):
    def __init__(self, gnn):
        super().__init__()
        self.gnn = gnn

    def forward(self, S):
        batch_size = S.shape[0]
        n = S.shape[1]
        # Constant input signal; the allocation is driven by the GSO
        p0 = torch.ones(batch_size, n, device=device)
        # Absolute value keeps the allocated powers nonnegative
        p = self.gnn(p0, S).abs()
        return torch.squeeze(p)



We specify the objective of this unconstrained learning problem as:

\begin{align}\label{eqn_unconstrained_loss}
\sum\limits_{i=1}^n \mathbb{E}[f_i(\Phi(\bbH,\bbA), \bbH)] - \mu_i\mathbb{E}[\Phi_i(\bbH,\bbA)].
\end{align}

Because the distribution $m(\bbH)$ is unknown, the expectation cannot be computed explicitly, so we approximate it with the average over a batch of samples. Since $n$ is fixed here, we evaluate mean values rather than sums.

mu_unconstrained = 0.01  # importance weight set to 0.01
step_size = 0.001

unconstrained = Model(GraphNeuralNetwork([5, 5, 5, 5, 5], [1, 1, 1, 1, 1, 1]).to(device))  # GNN build-up
optimizer = torch.optim.Adam(unconstrained.parameters(), step_size)

def update_unconstrained(p, c):
    optimizer.zero_grad()
    # Loss: negative mean capacity plus the weighted power cost
    objective = -c.mean() + mu_unconstrained * p.mean()
    objective.backward()
    optimizer.step()



We can then pass the unconstrained GNN model, the update rule and the generator defined above to train and test the GNN.

wireless.train(unconstrained, update_unconstrained, generator_small, train_iterations)
wireless.test(unconstrained, generator_small, test_iterations)


### 3.2 Stability

We verify the GNN trained in 3.1 by testing it on 1,000 other networks with the same parameters, where "other networks" means different positions of the transmitters and receivers. We realize this by setting the input parameter $\p{random}$ of $\p{wireless.Generator}$ to $\p{True}$, which makes the generator produce a new batch of $\p{batch\_size}$ different $100$-node wireless networks on every call. The generator is then instantiated in the $\p{main.py}$ file as:

generator_random = wireless.Generator(100, 200, 100, 50, device=device, N0=N0, batch_size=batch_size, random=True)
wireless.test(unconstrained, generator_random, test_iterations)


### 3.3 Transference

We verify transference by generating a set of larger networks. Keeping $\rho$ the same, the number of nodes in the new setting is $n=400$.

generator_large = wireless.Generator(400, 400, 200, 50, device=device, N0=N0, batch_size=batch_size, random=True)
wireless.test(unconstrained, generator_large, test_iterations)


### 4.1 Constrained training

Compared with unconstrained training, the difference in constrained training lies mainly in the update step. Instead of performing a single SGD step, we carry out gradient ascent on the primal variable $\bbA$ as in equation (26) and gradient descent on the dual variable $\mu$ as in equation (25).

def update_constrained(p, c):
    global mu, optimizer
    optimizer.zero_grad()

    # primal step: optimize the Lagrangian with respect to the model parameters
    (mu.unsqueeze(0) * (p - pmax) - c).mean().backward()
    optimizer.step()
    # dual step: move mu along the constraint violation and project onto mu >= 0
    with torch.no_grad():
        mu = torch.relu(mu + dual_step * torch.mean((p - pmax), 0))
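To see the mechanics in isolation, here is one primal-dual round on a toy scalar problem with illustrative numbers (not the lab's model): maximizing $\log(1+p)$ subject to $p \le p_{max}$. When the constraint is violated, the dual step raises the multiplier, increasing the price of power in the next primal step:

```python
import torch

pmax = 0.5
p = torch.tensor(2.0, requires_grad=True)  # current (infeasible) power
mu = torch.tensor(0.0)
dual_step = 0.001

# Primal step: gradient step on the Lagrangian with respect to p.
lagrangian = mu * (p - pmax) - torch.log(1 + p)
lagrangian.backward()
with torch.no_grad():
    p -= 0.01 * p.grad

# Dual step: projected update on mu; the multiplier grows while p > pmax.
with torch.no_grad():
    mu = torch.relu(mu + dual_step * (p - pmax))

print(mu.item() > 0)  # -> True (constraint violated, so the price increases)
```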


Then we set up the parameters needed in this setting: the primal update stepsize is $0.01$, the dual stepsize is $0.001$, and the multiplier vector $\mu$ is initialized as the all-zeros vector.

primal_step = 0.01
dual_step = 0.001

mu = torch.zeros(generator_small.n, device=device)
constrained = Model(GraphNeuralNetwork([5, 5, 5, 5, 5], [1, 1, 1, 1, 1, 1]).to(device))
optimizer = torch.optim.Adam(constrained.parameters(), primal_step)


The implementation of constrained learning is similar to the steps of unconstrained learning:

wireless.train(constrained, update_constrained, generator_small, train_iterations)
wireless.test(constrained, generator_small, test_iterations)
wireless.test(constrained, generator_random, test_iterations)
wireless.test(constrained, generator_large, test_iterations)