
Directed GraphSAGE with Neo4j

This example applies directed GraphSAGE to a directed graph, where the in-node and out-node neighbourhoods are sampled separately and have separate weights.

Subgraphs are sampled directly from Neo4j, which eliminates the need to store the whole graph structure in NetworkX. Node features, however, still need to be loaded into memory.

[3]:
import pandas as pd
import numpy as np
import os

import stellargraph as sg
from stellargraph.connector.neo4j import Neo4JDirectedGraphSAGENodeGenerator
from stellargraph.layer import DirectedGraphSAGE

from tensorflow.keras import layers, optimizers, losses, metrics, Model
from sklearn import preprocessing, feature_extraction, model_selection

import time
%matplotlib inline

Loading the CORA data from Neo4j

It is assumed that the CORA dataset has already been loaded into Neo4j; a separate notebook demonstrates how to load the CORA dataset into Neo4j.

We still need to load the node features into memory. We use py2neo, which provides tools to connect to Neo4j databases from Python applications; the py2neo documentation can be found at py2neo.org.

[4]:
import py2neo

default_host = os.environ.get("STELLARGRAPH_NEO4J_HOST")

# Create the Neo4j Graph database object; the arguments can be edited to specify location and authentication
neo4j_graphdb = py2neo.Graph(host=default_host, port=None, user=None, password=None)
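
As an optional sanity check (not part of the original notebook), you can run a trivial Cypher query through the same connection to confirm the database is reachable; py2neo's Graph.evaluate returns the first value of the first result record:

# Optional sanity check: count the nodes currently stored in the database.
node_count = neo4j_graphdb.evaluate("MATCH (n) RETURN count(n)")
print(f"Connected to Neo4j; the database contains {node_count} nodes")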
[5]:
def get_node_data_from_neo4j(neo4j_graphdb):
    fetch_node_query = "MATCH (node) RETURN id(node), properties(node)"
    # run the query
    node_records = neo4j_graphdb.run(fetch_node_query)
    # convert the node records into pandas dataframe
    return pd.DataFrame(node_records).rename(columns={0: "id", 1: "attr"})


start = time.time()
node_data = get_node_data_from_neo4j(neo4j_graphdb)
end = time.time()

print(f"{end - start:.2f} s: Loaded node data from neo4j database to memory")
7.74 s: Loaded node data from neo4j database to memory

Extract the node features which will be consumed by GraphSAGE.

[6]:
node_data = node_data.set_index("id")
attribute_df = node_data["attr"].apply(pd.Series)
node_data = attribute_df.drop(labels=["ID"], axis=1)
[7]:
node_features = pd.DataFrame(node_data["features"].values.tolist(), index=node_data.index)
node_features.head(5)
[7]:
0 1 2 3 4 5 6 7 8 9 ... 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432
id
0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 1 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 ... 0 0 1 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0

5 rows × 1433 columns

We aim to train a graph-ML model that will predict the “subject” attribute on the nodes. These subjects are one of 7 categories:

[8]:
labels = np.array(node_data["subject"])
set(labels)
[8]:
{'Case_Based',
 'Genetic_Algorithms',
 'Neural_Networks',
 'Probabilistic_Methods',
 'Reinforcement_Learning',
 'Rule_Learning',
 'Theory'}

Splitting the data

For machine learning we want to take a subset of the nodes for training, and use the rest for testing. We’ll use scikit-learn to do this.

[9]:
train_data, test_data = model_selection.train_test_split(
    node_data, train_size=0.1, test_size=None, stratify=node_data["subject"]
)
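
Since the split is stratified, the class proportions in the training subset should closely match those of the full dataset. A quick optional check (not in the original notebook) using pandas:

# Optional check: compare class proportions before and after the split.
print(node_data["subject"].value_counts(normalize=True).round(3))
print(train_data["subject"].value_counts(normalize=True).round(3))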

Converting to numeric arrays

For our categorical target, we will use one-hot vectors that will be fed into a softmax Keras layer during training.

[10]:
target_encoding = feature_extraction.DictVectorizer(sparse=False)

train_targets = target_encoding.fit_transform(train_data[["subject"]].to_dict("records"))
test_targets = target_encoding.transform(test_data[["subject"]].to_dict("records"))
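
Each one-hot column created by the DictVectorizer is named subject=<category>, which is why the predicted labels later in this notebook carry a subject= prefix. An optional way to inspect the encoding (a sketch, not in the original notebook):

# Optional: inspect the one-hot column names, e.g. "subject=Neural_Networks".
print(target_encoding.feature_names_)
print(train_targets.shape)  # (number of training nodes, number of classes)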

Creating the GraphSAGE model in Keras

Now create a directed StellarGraph object storing the node data. Since the subgraph is sampled directly from Neo4j, we only need to retain the node features and do not need to store any edges.

[11]:
G = sg.StellarDiGraph(nodes={"paper": node_features}, edges={})
[12]:
print(G.info())
StellarDiGraph: Directed multigraph
 Nodes: 2708, Edges: 0

 Node types:
  paper: [2708]
    Features: float32 vector, length 1433
    Edge types: none

 Edge types:

To feed data from the graph to the Keras model we need a data generator. The generators are specialized to the model and the learning task, so we choose the Neo4JDirectedGraphSAGENodeGenerator, as we are predicting node attributes with a DirectedGraphSAGE model while sampling directly from the Neo4j database.

We need two other parameters: the batch_size to use for training, and the number of nodes to sample at each level of the model. Here we choose a two-level model with 10 nodes sampled in the first layer (5 in-nodes and 5 out-nodes), and 4 in the second layer (2 in-nodes and 2 out-nodes).

[13]:
batch_size = 50
in_samples = [5, 2]
out_samples = [5, 2]

A Neo4JDirectedGraphSAGENodeGenerator object is required to send the node features in sampled subgraphs to Keras.

[14]:
generator = Neo4JDirectedGraphSAGENodeGenerator(
    G, batch_size, in_samples, out_samples, neo4j_graphdb=neo4j_graphdb
)
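
Under the hood, the generator issues Cypher queries to sample the in- and out-neighbourhoods directly from the database, instead of walking an in-memory edge list. The connector's actual queries are internal and may differ, but a simplified, hypothetical sketch of sampling the two neighbourhood directions separately could look like this:

# Hypothetical illustration only: the connector's real sampling queries are
# internal and may differ. This samples up to 5 in-neighbours and 5
# out-neighbours of a single node, mirroring directed GraphSAGE's separate
# treatment of the two edge directions.
sample_in_query = """
MATCH (n)<--(in_nb) WHERE id(n) = $node_id
RETURN id(in_nb) AS neighbour ORDER BY rand() LIMIT 5
"""
sample_out_query = """
MATCH (n)-->(out_nb) WHERE id(n) = $node_id
RETURN id(out_nb) AS neighbour ORDER BY rand() LIMIT 5
"""
in_neighbours = neo4j_graphdb.run(sample_in_query, node_id=0).data()
out_neighbours = neo4j_graphdb.run(sample_out_query, node_id=0).data()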

Using the generator.flow() method, we can create iterators over nodes that should be used to train, validate, or evaluate the model. For training we use only the training nodes returned from our splitter and the target values. The shuffle=True argument is given to the flow method to improve training.

[15]:
train_gen = generator.flow(train_data.index, train_targets, shuffle=True)

Now we can specify our machine learning model; we need a few more parameters for this:

  • layer_sizes is a list of hidden feature sizes for each layer in the model. In this example we use 32-dimensional hidden node features at each layer, which corresponds to 12 weights for each head node, 10 for each in-node and 10 for each out-node.
  • The bias and dropout are internal parameters of the model.
[16]:
graphsage_model = DirectedGraphSAGE(
    layer_sizes=[32, 32], generator=generator, bias=False, dropout=0.5,
)

Now we create a model to predict the 7 categories using a Keras softmax layer.

[17]:
x_inp, x_out = graphsage_model.in_out_tensors()
prediction = layers.Dense(units=train_targets.shape[1], activation="softmax")(x_out)

Training the model

Now let’s create the actual Keras model with the graph inputs x_inp provided by the graphsage_model and the outputs being the predictions from the softmax layer.

[18]:
model = Model(inputs=x_inp, outputs=prediction)
model.compile(
    optimizer=optimizers.Adam(lr=0.005),
    loss=losses.categorical_crossentropy,
    metrics=["acc"],
)

Train the model, keeping track of its loss and accuracy on the training set, and its generalisation performance on the test set (we need to create another generator over the test data for this).

[19]:
test_gen = generator.flow(test_data.index, test_targets)
[20]:
history = model.fit(
    train_gen, epochs=20, validation_data=test_gen, verbose=2, shuffle=False
)
Train for 6 steps, validate for 49 steps
Epoch 1/20
6/6 - 19s - loss: 1.9219 - acc: 0.2037 - val_loss: 1.7854 - val_acc: 0.3954
Epoch 2/20
6/6 - 17s - loss: 1.7039 - acc: 0.4593 - val_loss: 1.6917 - val_acc: 0.4180
Epoch 3/20
6/6 - 16s - loss: 1.6037 - acc: 0.5148 - val_loss: 1.6101 - val_acc: 0.4705
Epoch 4/20
6/6 - 15s - loss: 1.4970 - acc: 0.6037 - val_loss: 1.5216 - val_acc: 0.5513
Epoch 5/20
6/6 - 10s - loss: 1.3823 - acc: 0.7333 - val_loss: 1.4318 - val_acc: 0.6304
Epoch 6/20
6/6 - 7s - loss: 1.2758 - acc: 0.7963 - val_loss: 1.3518 - val_acc: 0.6813
Epoch 7/20
6/6 - 7s - loss: 1.1964 - acc: 0.8407 - val_loss: 1.2790 - val_acc: 0.7047
Epoch 8/20
6/6 - 7s - loss: 1.0813 - acc: 0.8556 - val_loss: 1.2154 - val_acc: 0.7182
Epoch 9/20
6/6 - 7s - loss: 0.9830 - acc: 0.9074 - val_loss: 1.1496 - val_acc: 0.7313
Epoch 10/20
6/6 - 7s - loss: 0.9067 - acc: 0.9037 - val_loss: 1.0901 - val_acc: 0.7477
Epoch 11/20
6/6 - 7s - loss: 0.8347 - acc: 0.9185 - val_loss: 1.0489 - val_acc: 0.7506
Epoch 12/20
6/6 - 8s - loss: 0.7650 - acc: 0.9481 - val_loss: 1.0055 - val_acc: 0.7559
Epoch 13/20
6/6 - 7s - loss: 0.6984 - acc: 0.9556 - val_loss: 0.9626 - val_acc: 0.7629
Epoch 14/20
6/6 - 7s - loss: 0.6449 - acc: 0.9704 - val_loss: 0.9329 - val_acc: 0.7715
Epoch 15/20
6/6 - 7s - loss: 0.5915 - acc: 0.9778 - val_loss: 0.9083 - val_acc: 0.7654
Epoch 16/20
6/6 - 7s - loss: 0.5647 - acc: 0.9593 - val_loss: 0.8725 - val_acc: 0.7810
Epoch 17/20
6/6 - 7s - loss: 0.5161 - acc: 0.9630 - val_loss: 0.8562 - val_acc: 0.7769
Epoch 18/20
6/6 - 7s - loss: 0.4708 - acc: 0.9889 - val_loss: 0.8447 - val_acc: 0.7736
Epoch 19/20
6/6 - 7s - loss: 0.4557 - acc: 0.9778 - val_loss: 0.8155 - val_acc: 0.7879
Epoch 20/20
6/6 - 7s - loss: 0.4216 - acc: 0.9778 - val_loss: 0.8091 - val_acc: 0.7826
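
Twenty fixed epochs work well here, but if you prefer to stop training automatically once the validation loss stops improving, a standard Keras EarlyStopping callback can be passed to fit. This is an optional sketch, not part of the run above:

# Optional alternative (not used above): stop early when val_loss plateaus.
from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
# history = model.fit(
#     train_gen, epochs=50, validation_data=test_gen,
#     verbose=2, shuffle=False, callbacks=[early_stopping],
# )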
[21]:
sg.utils.plot_history(history)
[Figure: training history plots of loss and accuracy for the training and validation sets]

Now that the model is trained, we can evaluate it on the test set.

[22]:
test_metrics = model.evaluate(test_gen)
print("\nTest Set Metrics:")
for name, val in zip(model.metrics_names, test_metrics):
    print("\t{}: {:0.4f}".format(name, val))

Test Set Metrics:
        loss: 0.8150
        acc: 0.7826

Making predictions with the model

Now let’s get the predictions themselves for all nodes using another node iterator:

[23]:
all_nodes = node_data.index
all_mapper = generator.flow(all_nodes)
all_predictions = model.predict(all_mapper)

These predictions will be the output of the softmax layer, so to get the final categories we’ll use the inverse_transform method of our target attribute specification to turn these values back into the original categories.

[24]:
node_predictions = target_encoding.inverse_transform(all_predictions)

Let’s have a look at a few:

[25]:
results = pd.DataFrame(node_predictions, index=all_nodes).idxmax(axis=1)
df = pd.DataFrame({"Predicted": results, "True": node_data["subject"]})
df.head(10)
[25]:
Predicted True
id
0 subject=Neural_Networks Neural_Networks
1 subject=Rule_Learning Rule_Learning
2 subject=Reinforcement_Learning Reinforcement_Learning
3 subject=Reinforcement_Learning Reinforcement_Learning
4 subject=Probabilistic_Methods Probabilistic_Methods
5 subject=Probabilistic_Methods Probabilistic_Methods
6 subject=Reinforcement_Learning Theory
7 subject=Neural_Networks Neural_Networks
8 subject=Neural_Networks Neural_Networks
9 subject=Theory Theory
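
The subject= prefix in the predicted column comes from the DictVectorizer feature names. An optional post-processing step (a sketch, not in the original notebook) strips the prefix and measures how often the predictions agree with the true labels, noting that this covers all nodes, including those used for training:

# Optional: strip the DictVectorizer's "subject=" prefix and compare.
predicted_clean = df["Predicted"].str.replace("subject=", "", regex=False)
print(f"Agreement over all nodes: {(predicted_clean == df['True']).mean():.4f}")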

Please refer to directed-graphsage-node-classification.ipynb for node embedding visualization.
