# Interpretability of node classification results

StellarGraph supports inspecting several node classification algorithms to understand or interpret how they arrived at a prediction. This folder contains demos of all of them, explaining how they work and how to use them as part of a TensorFlow Keras data science workflow.

Interpreting a model involves training it and making predictions, and then analysing the predictions and the model to find which neighbours and which features had the largest influence on each prediction.
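All of the demos below use Integrated Gradients for that analysis: attributions are computed by accumulating the model's gradient along a straight-line path from a baseline input to the real input. A minimal NumPy sketch of the idea on a toy differentiable function (this is illustrative only, not StellarGraph's implementation; the function `f`, its gradient, and the all-zeros baseline are assumptions made for the example):

```python
import numpy as np

# Toy differentiable "model": f(x) = sum(w * x**2), with an analytic gradient.
w = np.array([1.0, 2.0, 3.0])

def f(x):
    return np.sum(w * x ** 2)

def grad_f(x):
    return 2 * w * x

def integrated_gradients(x, baseline, steps=50):
    """Approximate IG_i = (x_i - b_i) * integral_0^1 df/dx_i(b + a*(x - b)) da
    using a midpoint Riemann sum over `steps` points on the path."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, -1.0, 2.0])
baseline = np.zeros_like(x)
attrib = integrated_gradients(x, baseline)

print(attrib)  # per-feature attributions, e.g. → [ 1.  2. 12.]
# Completeness property: attributions sum to f(x) - f(baseline).
print(attrib.sum(), f(x) - f(baseline))
```

In the demos themselves, the gradient comes from the trained Keras model rather than an analytic formula, and the attribution is computed over both node features and graph edges, so the per-feature and per-neighbour influences can be ranked.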

## Find algorithms and demos

This table lists interpretability demos, including the algorithms used.

| demo | algorithm(s) |
|---|---|
| GCN (dense) | GCN, Integrated Gradients |
| GCN (sparse) | GCN, Integrated Gradients |
| GAT | GAT, Integrated Gradients |

See the root README or each algorithm’s documentation for the relevant citation(s). See the demo index for more tasks, and a summary of each algorithm. See the node classification demos for more details on the base task.