Sometimes stacking existing GNN modules is not enough. For example, we may want to invent a new way of aggregating neighbor information, one that takes node importance or edge weights into account.

This section covers:

  • DGL's message passing APIs
  • Implementing your own GraphSAGE convolution module

Message Passing GNNs

DGL follows the message passing paradigm, and many GNN models can be written in the following general form:

$$m_{u\to v}^{(l)} = M^{(l)}\left(h_v^{(l-1)}, h_u^{(l-1)}, e_{u\to v}^{(l-1)}\right)$$

$$m_v^{(l)} = \sum_{u\in\mathcal{N}(v)} m_{u\to v}^{(l)}$$

$$h_v^{(l)} = U^{(l)}\left(h_v^{(l-1)}, m_v^{(l)}\right)$$

DGL calls $M^{(l)}$ the message function, $\sum$ the reduce function, and $U^{(l)}$ the update function.

Note that $\sum$ here can stand for any aggregation function, not just summation.
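In fact, swapping the built-in reduce function is all it takes to change the aggregation. A minimal sketch, assuming a DGLGraph g that already carries a node feature 'h':

import dgl.function as fn

# Sum, mean and max aggregation differ only in the reduce function.
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum'))
g.update_all(fn.copy_u('h', 'm'), fn.mean('m', 'h_mean'))
g.update_all(fn.copy_u('h', 'm'), fn.max('m', 'h_max'))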

For example, the well-known GraphSAGE uses the following formulas:

$$h_{\mathcal{N}(v)}^{(k)} \leftarrow \mathrm{Average}\left\{h_u^{(k-1)}, \forall u \in \mathcal{N}(v)\right\}$$

$$h_v^{(k)} \leftarrow \mathrm{ReLU}\left(W^{(k)} \cdot \mathrm{CONCAT}\left(h_v^{(k-1)}, h_{\mathcal{N}(v)}^{(k)}\right)\right)$$
As these formulas show, message passing is directional: the message passed from node u to node v is not necessarily the same as the message passed from v to u.
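One practical consequence is that on a directed graph a node only aggregates along its incoming edges. If you want undirected-style aggregation, a common fix (a sketch, assuming an existing DGLGraph g) is to add reverse edges first:

import dgl

# Make every edge bidirectional so messages flow in both directions.
g_bi = dgl.add_reverse_edges(g)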

Although DGL already ships GraphSAGE as the built-in dgl.nn.SAGEConv, you can still implement it yourself:

import dgl.function as fn
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGEConv(nn.Module):
    """Graph convolution module used by the GraphSAGE model.

    Parameters
    ----------
    in_feat : int
        Input feature size.
    out_feat : int
        Output feature size.
    """
    def __init__(self, in_feat, out_feat):
        super(SAGEConv, self).__init__()
        # A linear submodule for projecting the input and neighbor feature to the output.
        self.linear = nn.Linear(in_feat * 2, out_feat)

    def forward(self, g, h):
        """Forward computation

        Parameters
        ----------
        g : Graph
            The input graph.
        h : Tensor
            The input node feature.
        """
        with g.local_scope():
            g.ndata['h'] = h
            # update_all is a message passing API.
            g.update_all(message_func=fn.copy_u('h', 'm'), reduce_func=fn.mean('m', 'h_N'))
            h_N = g.ndata['h_N']
            h_total = torch.cat([h, h_N], dim=1)
            return self.linear(h_total)

The core of this code is the g.update_all call, which performs the aggregation of neighbor features.

  • The message function fn.copy_u('h', 'm') copies the node feature h as the message m sent along each edge
  • The reduce function fn.mean('m', 'h_N') averages all received messages m and stores the result in the new node feature h_N
  • update_all tells DGL to trigger the message and reduce functions on all nodes and edges (an equivalent version written with user-defined functions is sketched right after this list)
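For reference, the same aggregation can be written with user-defined functions instead of the dgl.function built-ins. This is a sketch for illustration only; the built-ins are preferred in practice because DGL fuses them into faster kernels:

def message_func(edges):
    # like fn.copy_u('h', 'm'): copy the source node feature as the message
    return {'m': edges.src['h']}

def reduce_func(nodes):
    # like fn.mean('m', 'h_N'): average the messages in each node's mailbox
    return {'h_N': nodes.mailbox['m'].mean(1)}

g.update_all(message_func, reduce_func)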

You can then stack SAGEConv layers to build a multi-layer GraphSAGE network; a quick shape check on a toy graph follows the code.

class Model(nn.Module):
    def __init__(self, in_feats, h_feats, num_classes):
        super(Model, self).__init__()
        self.conv1 = SAGEConv(in_feats, h_feats)
        self.conv2 = SAGEConv(h_feats, num_classes)

    def forward(self, g, in_feat):
        h = self.conv1(g, in_feat)
        h = F.relu(h)
        h = self.conv2(g, h)
        return h
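A quick shape check before training (a sketch; the toy graph and sizes are chosen arbitrarily and are not part of the original tutorial):

import dgl
import torch

g_toy = dgl.graph(([0, 1, 2], [1, 2, 0]))  # a 3-node directed cycle
net = Model(in_feats=5, h_feats=8, num_classes=3)
logits = net(g_toy, torch.randn(g_toy.num_nodes(), 5))
print(logits.shape)  # torch.Size([3, 3])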

Training

The code below can be taken directly from the previous tutorial:

import dgl.data

dataset = dgl.data.CoraGraphDataset()
g = dataset[0]

def train(g, model):
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    all_logits = []
    best_val_acc = 0
    best_test_acc = 0

    features = g.ndata['feat']
    labels = g.ndata['label']
    train_mask = g.ndata['train_mask']
    val_mask = g.ndata['val_mask']
    test_mask = g.ndata['test_mask']
    for e in range(200):
        # Forward
        logits = model(g, features)

        # Compute prediction
        pred = logits.argmax(1)

        # Compute loss
        # Note that we should only compute the losses of the nodes in the training set,
        # i.e. with train_mask 1.
        loss = F.cross_entropy(logits[train_mask], labels[train_mask])

        # Compute accuracy on training/validation/test
        train_acc = (pred[train_mask] == labels[train_mask]).float().mean()
        val_acc = (pred[val_mask] == labels[val_mask]).float().mean()
        test_acc = (pred[test_mask] == labels[test_mask]).float().mean()

        # Save the best validation accuracy and the corresponding test accuracy.
        if best_val_acc < val_acc:
            best_val_acc = val_acc
            best_test_acc = test_acc

        # Backward
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        all_logits.append(logits.detach())

        if e % 5 == 0:
            print('In epoch {}, loss: {:.3f}, val acc: {:.3f} (best {:.3f}), test acc: {:.3f} (best {:.3f})'.format(
                e, loss, val_acc, best_val_acc, test_acc, best_test_acc))

model = Model(g.ndata['feat'].shape[1], 16, dataset.num_classes)
train(g, model)

Output:

In epoch 0, loss: 1.949, val acc: 0.122 (best 0.122), test acc: 0.130 (best 0.130)
In epoch 5, loss: 1.872, val acc: 0.326 (best 0.326), test acc: 0.347 (best 0.347)
In epoch 10, loss: 1.740, val acc: 0.386 (best 0.386), test acc: 0.424 (best 0.424)
In epoch 15, loss: 1.545, val acc: 0.460 (best 0.460), test acc: 0.495 (best 0.495)
In epoch 20, loss: 1.291, val acc: 0.536 (best 0.536), test acc: 0.575 (best 0.575)
In epoch 25, loss: 0.993, val acc: 0.620 (best 0.620), test acc: 0.653 (best 0.653)
In epoch 30, loss: 0.691, val acc: 0.682 (best 0.682), test acc: 0.690 (best 0.690)
In epoch 35, loss: 0.435, val acc: 0.728 (best 0.728), test acc: 0.721 (best 0.721)
In epoch 40, loss: 0.255, val acc: 0.742 (best 0.742), test acc: 0.747 (best 0.747)
In epoch 45, loss: 0.145, val acc: 0.738 (best 0.742), test acc: 0.751 (best 0.747)
In epoch 50, loss: 0.084, val acc: 0.740 (best 0.742), test acc: 0.756 (best 0.747)
In epoch 55, loss: 0.051, val acc: 0.744 (best 0.746), test acc: 0.759 (best 0.759)
In epoch 60, loss: 0.034, val acc: 0.752 (best 0.752), test acc: 0.762 (best 0.762)
In epoch 65, loss: 0.024, val acc: 0.752 (best 0.752), test acc: 0.765 (best 0.762)
In epoch 70, loss: 0.018, val acc: 0.754 (best 0.754), test acc: 0.769 (best 0.767)
In epoch 75, loss: 0.014, val acc: 0.756 (best 0.756), test acc: 0.772 (best 0.772)
In epoch 80, loss: 0.012, val acc: 0.758 (best 0.758), test acc: 0.770 (best 0.772)
In epoch 85, loss: 0.010, val acc: 0.758 (best 0.758), test acc: 0.769 (best 0.772)
In epoch 90, loss: 0.009, val acc: 0.760 (best 0.760), test acc: 0.772 (best 0.770)
In epoch 95, loss: 0.008, val acc: 0.760 (best 0.760), test acc: 0.773 (best 0.770)
In epoch 100, loss: 0.007, val acc: 0.762 (best 0.762), test acc: 0.770 (best 0.772)
In epoch 105, loss: 0.007, val acc: 0.762 (best 0.762), test acc: 0.769 (best 0.772)
In epoch 110, loss: 0.006, val acc: 0.762 (best 0.762), test acc: 0.769 (best 0.772)
In epoch 115, loss: 0.006, val acc: 0.760 (best 0.762), test acc: 0.770 (best 0.772)
In epoch 120, loss: 0.005, val acc: 0.760 (best 0.762), test acc: 0.769 (best 0.772)
In epoch 125, loss: 0.005, val acc: 0.758 (best 0.762), test acc: 0.769 (best 0.772)
In epoch 130, loss: 0.005, val acc: 0.758 (best 0.762), test acc: 0.769 (best 0.772)
In epoch 135, loss: 0.004, val acc: 0.758 (best 0.762), test acc: 0.768 (best 0.772)
In epoch 140, loss: 0.004, val acc: 0.758 (best 0.762), test acc: 0.768 (best 0.772)
In epoch 145, loss: 0.004, val acc: 0.758 (best 0.762), test acc: 0.768 (best 0.772)
In epoch 150, loss: 0.004, val acc: 0.758 (best 0.762), test acc: 0.768 (best 0.772)
In epoch 155, loss: 0.004, val acc: 0.756 (best 0.762), test acc: 0.769 (best 0.772)
In epoch 160, loss: 0.003, val acc: 0.758 (best 0.762), test acc: 0.771 (best 0.772)
In epoch 165, loss: 0.003, val acc: 0.756 (best 0.762), test acc: 0.772 (best 0.772)
In epoch 170, loss: 0.003, val acc: 0.756 (best 0.762), test acc: 0.773 (best 0.772)
In epoch 175, loss: 0.003, val acc: 0.756 (best 0.762), test acc: 0.772 (best 0.772)
In epoch 180, loss: 0.003, val acc: 0.756 (best 0.762), test acc: 0.772 (best 0.772)
In epoch 185, loss: 0.003, val acc: 0.756 (best 0.762), test acc: 0.772 (best 0.772)
In epoch 190, loss: 0.003, val acc: 0.756 (best 0.762), test acc: 0.772 (best 0.772)
In epoch 195, loss: 0.002, val acc: 0.756 (best 0.762), test acc: 0.772 (best 0.772)

More Customization

DGL provides many built-in message and reduce functions in the dgl.function package. You can use them to build custom convolution modules. For example, the code below defines a WeightedSAGEConv that aggregates neighbor information by weighted average, where the message function also reads per-edge information stored in edata.
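To give a feel for the naming scheme, here are a few of the built-in message functions (a non-exhaustive illustration; u refers to the source node, v to the destination node, and e to the edge):

import dgl.function as fn

msg_copy = fn.copy_u('h', 'm')        # m = source node feature h
msg_mul  = fn.u_mul_e('h', 'w', 'm')  # m = source feature h * edge feature w
msg_add  = fn.u_add_v('h', 'h', 'm')  # m = source h + destination h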

class WeightedSAGEConv(nn.Module):
    """Graph convolution module used by the GraphSAGE model with edge weights.

    Parameters
    ----------
    in_feat : int
        Input feature size.
    out_feat : int
        Output feature size.
    """
    def __init__(self, in_feat, out_feat):
        super(WeightedSAGEConv, self).__init__()
        # A linear submodule for projecting the input and neighbor feature to the output.
        self.linear = nn.Linear(in_feat * 2, out_feat)

    def forward(self, g, h, w):
        """Forward computation

        Parameters
        ----------
        g : Graph
            The input graph.
        h : Tensor
            The input node feature.
        w : Tensor
            The edge weight.
        """
        with g.local_scope():
            g.ndata['h'] = h
            g.edata['w'] = w
            g.update_all(message_func=fn.u_mul_e('h', 'w', 'm'), reduce_func=fn.mean('m', 'h_N'))
            h_N = g.ndata['h_N']
            h_total = torch.cat([h, h_N], dim=1)
            return self.linear(h_total)

Because the current dataset has no edge weights, we manually add an all-ones edge weight; you can set it according to your own use case (one option for data-dependent weights is sketched after the training output below):

class Model(nn.Module):
    def __init__(self, in_feats, h_feats, num_classes):
        super(Model, self).__init__()
        self.conv1 = WeightedSAGEConv(in_feats, h_feats)
        self.conv2 = WeightedSAGEConv(h_feats, num_classes)

    def forward(self, g, in_feat):
        h = self.conv1(g, in_feat, torch.ones(g.num_edges(), 1).to(g.device))
        h = F.relu(h)
        h = self.conv2(g, h, torch.ones(g.num_edges(), 1).to(g.device))
        return h

model = Model(g.ndata['feat'].shape[1], 16, dataset.num_classes)
train(g, model)

Output:

In epoch 0, loss: 1.952, val acc: 0.102 (best 0.102), test acc: 0.082 (best 0.082)
In epoch 5, loss: 1.872, val acc: 0.206 (best 0.208), test acc: 0.212 (best 0.194)
In epoch 10, loss: 1.721, val acc: 0.428 (best 0.542), test acc: 0.449 (best 0.561)
In epoch 15, loss: 1.498, val acc: 0.424 (best 0.542), test acc: 0.439 (best 0.561)
In epoch 20, loss: 1.216, val acc: 0.552 (best 0.552), test acc: 0.560 (best 0.560)
In epoch 25, loss: 0.906, val acc: 0.656 (best 0.656), test acc: 0.655 (best 0.655)
In epoch 30, loss: 0.618, val acc: 0.696 (best 0.696), test acc: 0.717 (best 0.717)
In epoch 35, loss: 0.390, val acc: 0.722 (best 0.722), test acc: 0.741 (best 0.741)
In epoch 40, loss: 0.235, val acc: 0.718 (best 0.722), test acc: 0.748 (best 0.741)
In epoch 45, loss: 0.141, val acc: 0.722 (best 0.722), test acc: 0.755 (best 0.741)
In epoch 50, loss: 0.087, val acc: 0.730 (best 0.730), test acc: 0.757 (best 0.756)
In epoch 55, loss: 0.056, val acc: 0.728 (best 0.730), test acc: 0.761 (best 0.756)
In epoch 60, loss: 0.038, val acc: 0.728 (best 0.730), test acc: 0.758 (best 0.756)
In epoch 65, loss: 0.028, val acc: 0.728 (best 0.730), test acc: 0.756 (best 0.756)
In epoch 70, loss: 0.021, val acc: 0.732 (best 0.732), test acc: 0.756 (best 0.756)
In epoch 75, loss: 0.017, val acc: 0.732 (best 0.732), test acc: 0.756 (best 0.756)
In epoch 80, loss: 0.015, val acc: 0.732 (best 0.732), test acc: 0.753 (best 0.756)
In epoch 85, loss: 0.013, val acc: 0.732 (best 0.732), test acc: 0.753 (best 0.756)
In epoch 90, loss: 0.011, val acc: 0.732 (best 0.734), test acc: 0.754 (best 0.754)
In epoch 95, loss: 0.010, val acc: 0.732 (best 0.734), test acc: 0.754 (best 0.754)
In epoch 100, loss: 0.009, val acc: 0.734 (best 0.734), test acc: 0.753 (best 0.754)
In epoch 105, loss: 0.008, val acc: 0.732 (best 0.734), test acc: 0.754 (best 0.754)
In epoch 110, loss: 0.008, val acc: 0.732 (best 0.734), test acc: 0.754 (best 0.754)
In epoch 115, loss: 0.007, val acc: 0.736 (best 0.736), test acc: 0.753 (best 0.753)
In epoch 120, loss: 0.006, val acc: 0.736 (best 0.736), test acc: 0.754 (best 0.753)
In epoch 125, loss: 0.006, val acc: 0.736 (best 0.736), test acc: 0.755 (best 0.753)
In epoch 130, loss: 0.006, val acc: 0.736 (best 0.736), test acc: 0.756 (best 0.753)
In epoch 135, loss: 0.005, val acc: 0.736 (best 0.736), test acc: 0.756 (best 0.753)
In epoch 140, loss: 0.005, val acc: 0.736 (best 0.736), test acc: 0.756 (best 0.753)
In epoch 145, loss: 0.005, val acc: 0.738 (best 0.738), test acc: 0.756 (best 0.756)
In epoch 150, loss: 0.004, val acc: 0.738 (best 0.738), test acc: 0.757 (best 0.756)
In epoch 155, loss: 0.004, val acc: 0.738 (best 0.738), test acc: 0.758 (best 0.756)
In epoch 160, loss: 0.004, val acc: 0.738 (best 0.738), test acc: 0.758 (best 0.756)
In epoch 165, loss: 0.004, val acc: 0.738 (best 0.738), test acc: 0.757 (best 0.756)
In epoch 170, loss: 0.004, val acc: 0.738 (best 0.738), test acc: 0.757 (best 0.756)
In epoch 175, loss: 0.003, val acc: 0.738 (best 0.738), test acc: 0.758 (best 0.756)
In epoch 180, loss: 0.003, val acc: 0.738 (best 0.738), test acc: 0.758 (best 0.756)
In epoch 185, loss: 0.003, val acc: 0.738 (best 0.738), test acc: 0.758 (best 0.756)
In epoch 190, loss: 0.003, val acc: 0.738 (best 0.738), test acc: 0.758 (best 0.756)
In epoch 195, loss: 0.003, val acc: 0.740 (best 0.740), test acc: 0.758 (best 0.758)
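The all-ones weights above are placeholders. If you want data-dependent weights that are normalized over each node's incoming edges, DGL's edge_softmax is one option. A hedged sketch, where the random scores stand in for whatever raw per-edge values your model computes:

import torch
from dgl.nn.functional import edge_softmax

scores = torch.randn(g.num_edges(), 1)  # hypothetical raw per-edge scores
w = edge_softmax(g, scores)             # per-node incoming weights sum to 1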

