StellarGraph API

Core

This contains the core objects used by the StellarGraph library.

class stellargraph.core.StellarGraph(nodes=None, edges=None, *, is_directed=False, source_column='source', target_column='target', edge_weight_column='weight', node_type_default='default', edge_type_default='default', dtype='float32', graph=None, node_type_name='label', edge_type_name='label', edge_weight_label=None, node_features=None)[source]

StellarGraph class for graph machine learning.

Summary of a StellarGraph and the terminology used:

  • it stores graph structure, as a collection of nodes and a collection of edges that connect a source node to a target node
  • each node and edge has an associated type
  • each node has a numeric vector of features, and the vectors of all nodes with the same type have the same dimension
  • it is homogeneous if there is only one type of node and one type of edge
  • it is heterogeneous if it is not homogeneous (more than one type of node, or more than one type of edge)
  • it is directed if the direction of an edge starting at its source node and finishing at its target node is important
  • it is undirected if the direction does not matter
  • every StellarGraph can be a multigraph, meaning there can be multiple edges between any two nodes

To create a StellarGraph object, at a minimum pass the nodes and edges as Pandas DataFrames. Each row of the nodes DataFrame represents a node in the graph, where the index is the ID of the node. Each row of the edges DataFrame represents an edge, where the index is the ID of the edge, and the source and target columns store the node ID of the source and target nodes.

For example, suppose we’re modelling a graph that’s a square with a diagonal:

a -- b
| \  |
|  \ |
d -- c

The DataFrames might look like:

nodes = pd.DataFrame([], index=["a", "b", "c", "d"])
edges = pd.DataFrame(
    {"source": ["a", "b", "c", "d", "a"], "target": ["b", "c", "d", "a", "c"]}
)

If this data represents an undirected graph (the ordering of each edge source/target doesn’t matter):

Gs = StellarGraph(nodes, edges)

If this data represents a directed graph (the ordering does matter):

Gs = StellarDiGraph(nodes, edges)

Numeric node features are taken as any columns of the nodes DataFrame. For example, if the graph above has two features x and y associated with each node:

nodes = pd.DataFrame(
    {"x": [-1, 2, -3, 4], "y": [0.4, 0.1, 0.9, 0]}, index=["a", "b", "c", "d"]
)

Edge weights are taken as the optional weight column of the edges DataFrame:

edges = pd.DataFrame({
    "source": ["a", "b", "c", "d", "a"],
    "target": ["b", "c", "d", "a", "c"],
    "weight": [10, 0.5, 1, 3, 13]
})

Heterogeneous graphs, with multiple node or edge types, can be created by passing multiple DataFrames in a dictionary. The dictionary keys are the names/identifiers for the type. For example, if the graph above has node a of type foo, and the rest as type bar, the construction might look like:

foo_nodes = pd.DataFrame({"x": [-1]}, index=["a"])
bar_nodes = pd.DataFrame(
    {"y": [0.4, 0.1, 0.9], "z": [100, 200, 300]}, index=["b", "c", "d"]
)

StellarGraph({"foo": foo_nodes, "bar": bar_nodes}, edges)

Notice the foo node has one feature x, while the bar nodes have two features y and z. A heterogeneous graph can have different features for each type.

Edges of different types work in the same way. For instance, if edges have different types based on their orientation:

horizontal_edges = pd.DataFrame(
    {"source": ["a", "c"], "target": ["b", "d"]}, index=[0, 2]
)
vertical_edges = pd.DataFrame(
    {"source": ["b", "d"], "target": ["c", "a"]}, index=[1, 3]
)
diagonal_edges = pd.DataFrame({"source": ["a"], "target": ["c"]}, index=[4])

StellarGraph(nodes, {"h": horizontal_edges, "v": vertical_edges, "d": diagonal_edges})

A dictionary can be passed for both arguments:

StellarGraph(
    {"foo": foo_nodes, "bar": bar_nodes},
    {"h": horizontal_edges, "v": vertical_edges, "d": diagonal_edges}
)

Note

The IDs of nodes must be unique across all types: for example, it is an error to have a node 0 of type a, and a node 0 of type b. IDs of edges must also be unique across all types.

See also

from_networkx() for construction from a NetworkX graph.

Parameters:
  • nodes (DataFrame or dict of hashable to Pandas DataFrame, optional) – Features for every node in the graph. Any columns in the DataFrame are taken as numeric node features of type dtype. If there is only one type of node, a DataFrame can be passed directly, and the type defaults to the node_type_default parameter. Nodes have an ID taken from the index of the DataFrame, and these IDs must be unique across all types. For nodes with no features, an appropriate DataFrame can be created with pandas.DataFrame([], index=node_ids), where node_ids is a list of the node IDs.
  • edges (DataFrame or dict of hashable to Pandas DataFrame, optional) – An edge list for each type of edge as a Pandas DataFrame containing a source, target and (optionally) weight column (the names of each are taken from the source_column, target_column and edge_weight_column parameters). If there is only one type of edge, a DataFrame can be passed directly, and the type defaults to the edge_type_default parameter. Edges have an ID taken from the index of the DataFrame, and these IDs must be unique across all types.
  • is_directed (bool, optional) – If True, the data represents a directed multigraph, otherwise an undirected multigraph.
  • source_column (str, optional) – The name of the column to use as the source node of edges in the edges edge list argument.
  • target_column (str, optional) – The name of the column to use as the target node of edges in the edges edge list argument.
  • edge_weight_column (str, optional) – The name of the column in each of the edges DataFrames to use as the weight of edges. If the column does not exist in any of them, every edge weight defaults to 1.
  • node_type_default (str, optional) – The default node type to use, if nodes is passed as a DataFrame (not a dict).
  • edge_type_default (str, optional) – The default edge type to use, if edges is passed as a DataFrame (not a dict).
  • dtype (numpy data-type, optional) – The numpy data-type to use for the features extracted from each of the nodes DataFrames.
  • graph – Deprecated, use from_networkx().
  • node_type_name – Deprecated, use from_networkx().
  • edge_type_name – Deprecated, use from_networkx().
  • edge_weight_label – Deprecated, use from_networkx().
  • node_features – Deprecated, use from_networkx().
check_graph_for_ml(features=True)[source]

Checks if all properties required for machine learning training/inference are set up. An error will be raised if the graph is not correctly set up.

create_graph_schema(nodes=None)[source]

Create graph schema in dict of dict format from current graph.

Note the assumption we make that there is only one edge of a particular edge type per node pair.

This means that an edge is uniquely identified by its two endpoint nodes (node0 and node1) and its edge type.

Parameters:nodes (list) – A list of node IDs to use to build schema. This must represent all node types and all edge types in the graph. If not specified, all nodes and edges in the graph are used.
Returns:GraphSchema object.
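
For example, a schema can be built from an existing graph and inspected; a small sketch, where Gs is a StellarGraph constructed as in the examples above:

schema = Gs.create_graph_schema()
print(schema.node_types)
print(schema.edge_types)
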
edges(include_edge_type=False, include_edge_weight=False) → Iterable[Any][source]

Obtains the collection of edges in the graph.

Parameters:
  • include_edge_type (bool) – A flag that indicates whether to return edge types.
  • include_edge_weight (bool) – A flag that indicates whether to return edge weights. Weights are returned in a separate list.
Returns:

The graph edges. If edge weights are included then a tuple of (edges, weights)
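
For instance, a sketch of iterating over edges with weights (Gs is the weighted graph above; this assumes each edge is returned as a (source, target) pair):

edges, weights = Gs.edges(include_edge_weight=True)
for (source, target), weight in zip(edges, weights):
    print(source, target, weight)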

static from_networkx(graph, *, edge_weight_attr='weight', node_type_attr='label', edge_type_attr='label', node_type_default='default', edge_type_default='default', node_features=None, dtype='float32', node_type_name=None, edge_type_name=None, edge_weight_label=None)[source]

Construct a StellarGraph object from a NetworkX graph:

Gs = StellarGraph.from_networkx(nx_graph)

To create a StellarGraph object with node features, supply the features as a numeric feature vector for each node.

To take the feature vectors from a node attribute in the original NetworkX graph, supply the attribute name to the node_features argument:

Gs = StellarGraph.from_networkx(nx_graph, node_features="feature")

where the nx_graph contains nodes that have a “feature” attribute containing the feature vector for the node. All nodes of the same type must have the same size feature vectors.

Alternatively, supply the node features as Pandas DataFrame objects with the index of the DataFrame set to the node IDs. For graphs with a single node type, you can supply the DataFrame object directly to StellarGraph:

node_data = pd.DataFrame(
    [feature_vector_1, feature_vector_2, ...],
    index=[node_id_1, node_id_2, ...])
Gs = StellarGraph.from_networkx(nx_graph, node_features=node_data)

For graphs with multiple node types, provide the node features as Pandas DataFrames for each type separately, as a dictionary by node type. This allows node features to have different sizes for each node type:

node_data = {
    node_type_1: pd.DataFrame(...),
    node_type_2: pd.DataFrame(...),
}
Gs = StellarGraph.from_networkx(nx_graph, node_features=node_data)

You can also supply the node feature vectors as an iterator of node_id and feature vector pairs, for graphs with single and multiple node types:

node_data = zip([node_id_1, node_id_2, ...],
    [feature_vector_1, feature_vector_2, ...])
Gs = StellarGraph.from_networkx(nx_graph, node_features=node_data)
Parameters:
  • graph – The NetworkX graph instance.
  • node_type_attr (str, optional) – This is the name for the node types that StellarGraph uses when processing heterogeneous graphs. StellarGraph will look for this attribute in the nodes of the graph to determine their type.
  • node_type_default (str, optional) – This is the default node type to use for nodes that do not have an explicit type.
  • edge_type_attr (str, optional) – This is the name for the edge types that StellarGraph uses when processing heterogeneous graphs. StellarGraph will look for this attribute in the edges of the graph to determine their type.
  • edge_type_default (str, optional) – This is the default edge type to use for edges that do not have an explicit type.
  • node_features (str, dict, list or DataFrame, optional) – This tells StellarGraph where to find the node feature information required by some graph models. These are expected to be a numeric feature vector for each node in the graph.
  • edge_weight_attr (str, optional) – The name of the attribute to use as the weight of edges.
  • node_type_name – Deprecated, use node_type_attr.
  • edge_type_name – Deprecated, use edge_type_attr.
  • edge_weight_label – Deprecated, use edge_weight_attr.
Returns:

A StellarGraph (if graph is undirected) or StellarDiGraph (if graph is directed) instance representing the data in graph and node_features.

has_node(node: Any) → bool[source]

Indicates whether or not the graph contains the specified node.

Parameters:node (any) – The node.
Returns:True if the node is in the graph, otherwise False.
Return type:bool
in_nodes(node: Any, include_edge_weight=False, edge_types=None) → Iterable[Any][source]

Obtains the collection of neighbouring nodes with edges directed to the given node. For an undirected graph, neighbours are treated as both in-nodes and out-nodes.

Parameters:
  • node (any) – The node in question.
  • include_edge_weight (bool, default False) – If True, each neighbour in the output is a named tuple with fields node (the node ID) and weight (the edge weight)
  • edge_types (list of hashable, optional) – If provided, only traverse the graph via the provided edge types when collecting neighbours.
Returns:

The neighbouring in-nodes.

Return type:

iterable

info(show_attributes=True, sample=None)[source]

Return a string summarizing the current graph, including node and edge type information and their attributes.

Note: This requires processing all nodes and edges and could take a long time for a large graph.

Parameters:
  • show_attributes (bool, default True) – If True, include attributes information
  • sample (int) – To speed up the graph analysis, use only a random sample of this many nodes and edges.
Returns:

An information string.

is_directed() → bool[source]

Indicates whether the graph is directed (True) or undirected (False).

Returns:The graph directedness status.
Return type:bool
neighbors(node: Any, include_edge_weight=False, edge_types=None) → Iterable[Any][source]

Obtains the collection of neighbouring nodes connected to the given node.

Parameters:
  • node (any) – The node in question.
  • include_edge_weight (bool, default False) – If True, each neighbour in the output is a named tuple with fields node (the node ID) and weight (the edge weight)
  • edge_types (list of hashable, optional) – If provided, only traverse the graph via the provided edge types when collecting neighbours.
Returns:

The neighbouring nodes.

Return type:

iterable
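
A small sketch contrasting the neighbourhood methods, using the nodes and edges DataFrames from the square-graph example above (out_nodes is documented below):

# undirected: neighbours in any direction
print(Gs.neighbors("a"))

# directed: in_nodes and out_nodes distinguish edge direction
Gd = StellarDiGraph(nodes, edges)
print(Gd.in_nodes("a"))   # sources of edges pointing at "a"
print(Gd.out_nodes("a"))  # targets of edges leaving "a"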

node_degrees() → Mapping[Any, int][source]

Obtains a map from node to node degree.

Returns:The degree of each node.
node_feature_sizes(node_types=None)[source]

Get the feature sizes for the specified node types.

Parameters:node_types (list, optional) – A list of node types. If None all current node types will be used.
Returns:A dictionary of node type and integer feature size.
node_features(nodes, node_type=None)[source]

Get the numeric feature vectors for the specified node or nodes. If the node type is not specified, the type of every node will be looked up, which can be slow; it is therefore important to supply node_type for this method to be fast.

Parameters:
  • nodes (list or hashable) – Node ID or list of node IDs
  • node_type (hashable) – the type of the nodes.
Returns:

Numpy array containing the node features for the requested nodes.
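
For example, assuming Gs was constructed from the nodes DataFrame with the x and y features above, and het_graph is a hypothetical heterogeneous graph with node type "bar":

# a NumPy array of shape (2, number_of_features)
features = Gs.node_features(["a", "c"])

# supplying node_type avoids a per-node type lookup
bar_features = het_graph.node_features(["b", "c"], node_type="bar")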

node_type(node)[source]

Get the type of the node.

Parameters:node – Node ID
Returns:Node type
node_types

Get a list of all node types in the graph.

Returns:set of types
nodes() → Iterable[Any][source]

Obtains the collection of nodes in the graph.

Returns:The graph nodes.
nodes_of_type(node_type=None)[source]

Get the nodes of the graph with the specified node types.

Parameters:node_type (hashable, optional) – a type of nodes that exist in the graph
Returns:A list of node IDs with type node_type
number_of_edges() → int[source]

Obtains the number of edges in the graph.

Returns:The number of edges.
Return type:int
number_of_nodes() → int[source]

Obtains the number of nodes in the graph.

Returns:The number of nodes.
Return type:int
out_nodes(node: Any, include_edge_weight=False, edge_types=None) → Iterable[Any][source]

Obtains the collection of neighbouring nodes with edges directed from the given node. For an undirected graph, neighbours are treated as both in-nodes and out-nodes.

Parameters:
  • node (any) – The node in question.
  • include_edge_weight (bool, default False) – If True, each neighbour in the output is a named tuple with fields node (the node ID) and weight (the edge weight)
  • edge_types (list of hashable, optional) – If provided, only traverse the graph via the provided edge types when collecting neighbours.
Returns:

The neighbouring out-nodes.

Return type:

iterable

to_adjacency_matrix(nodes: Optional[Iterable] = None, weighted=False)[source]

Obtains a SciPy sparse adjacency matrix of edge weights.

By default (weighted=False), each element of the matrix contains the number of edges between the two vertices (only 0 or 1 in a graph without multi-edges).

Parameters:
  • nodes (iterable) – The optional collection of nodes comprising the subgraph. If specified, then the adjacency matrix is computed for the subgraph; otherwise, it is computed for the full graph.
  • weighted (bool) – If true, use the edge weight column from the graph instead of edge counts (weights from multi-edges are summed).
Returns:

The weighted adjacency matrix.
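
For example (Gs as above; the result is a SciPy sparse matrix, densified here only for inspection of a small graph):

adj = Gs.to_adjacency_matrix(weighted=True)
print(adj.todense())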

to_networkx(node_type_name='label', edge_type_name='label', edge_weight_label='weight', feature_name='feature')[source]

Create a NetworkX MultiGraph or MultiDiGraph instance representing this graph.

Parameters:
  • node_type_name (str) – the name of the attribute to use to store a node’s type (or label).
  • edge_type_name (str) – the name of the attribute to use to store an edge’s type (or label).
  • edge_weight_label (str) – the name of the attribute to use to store an edge’s weight.
  • feature_name (str, optional) – the name of the attribute to use to store a node’s feature vector; if None, feature vectors are not stored within each node.
Returns:

An instance of networkx.MultiDiGraph (if directed) or networkx.MultiGraph (if undirected) containing all the nodes & edges and their types & features in this graph.
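
A sketch of a round trip through NetworkX, relying on the default attribute names used by both methods:

nx_graph = Gs.to_networkx()
Gs2 = StellarGraph.from_networkx(nx_graph, node_features="feature")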

class stellargraph.core.GraphSchema(is_directed, node_types, edge_types, schema)[source]

Class to encapsulate the schema information for a heterogeneous graph.

Typically this should be created from a StellarGraph object, using the create_graph_schema() method.

edge_index(edge_type)[source]

Return edge type index from the type tuple

Parameters:edge_type – Tuple of (node1_type, edge_type, node2_type).
Returns:Numerical edge type index
node_index(name)[source]

Return node type index from the type name

Parameters:name – The name of the node type.
Returns:Numerical node type index
sampling_layout(head_node_types, num_samples)[source]

For a sampling scheme with a list of head node types and the number of samples per hop, return the map from the actual sample index to the adjacency list index.

Parameters:
  • head_node_types – A list of node types of the head nodes.
  • num_samples – A list of integers that are the number of neighbours to sample at each hop.
Returns:

A list containing, for each head node type, a list consisting of tuples of (node_type, sampling_index). The list matches the list given by the method type_adjacency_list(…) and can be used to reformat the samples given by SampledBreadthFirstWalk to that expected by the HinSAGE model.

sampling_tree(head_node_types, n_hops)[source]

Returns a sampling tree for the specified head node types for neighbours up to n_hops away. A unique ID is created for each sampling node.

Parameters:
  • head_node_types – An iterable of the types of the head nodes
  • n_hops – The number of hops away
Returns:

A list of the form [(type_adjacency_index, node_type, [children]), …] where children are (type_adjacency_index, node_type, [children])

type_adjacency_list(head_node_types, n_hops)[source]

Creates a BFS sampling tree as an adjacency list from head node types.

Each list element is a tuple of:

(node_type, [child_1, child_2, ...])

where child_k is an index pointing to the child of the current node.

Note that the children are ordered by edge type.

Parameters:
  • head_node_types – Node types of head nodes.
  • n_hops – How many hops to sample.
Returns:

List of form [ (node_type, [children]), ...]

Data

The data package contains classes and functions to read, process, and query graph data

class stellargraph.data.UniformRandomWalk(graph, graph_schema=None, seed=None)[source]

Performs uniform random walks on the given graph

run(nodes, n, length, seed=None)[source]

Perform a random walk starting from the root nodes.

Parameters:
  • nodes (list) – The root nodes as a list of node IDs
  • n (int) – Total number of random walks per root node
  • length (int) – Maximum length of each random walk
  • seed (int, optional) – Random number generator seed; default is None
Returns:

List of lists of node ids for each of the random walks
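
A minimal usage sketch, where G is a StellarGraph as in the earlier examples:

from stellargraph.data import UniformRandomWalk

walker = UniformRandomWalk(G)
walks = walker.run(nodes=list(G.nodes()), n=2, length=5)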

class stellargraph.data.BiasedRandomWalk(graph, graph_schema=None, seed=None)[source]

Performs biased second order random walks (like those used in Node2Vec algorithm https://snap.stanford.edu/node2vec/) controlled by the values of two parameters p and q.

run(nodes, n, length, p=1.0, q=1.0, seed=None, weighted=False)[source]

Perform a random walk starting from the root nodes.

Parameters:
  • nodes (list) – The root nodes as a list of node IDs
  • n (int) – Total number of random walks per root node
  • length (int) – Maximum length of each random walk
  • p (float, default 1.0) – Defines probability, 1/p, of returning to source node
  • q (float, default 1.0) – Defines probability, 1/q, for moving to a node away from the source node
  • seed (int, optional) – Random number generator seed; default is None
  • weighted (bool, default False) – Indicates whether the walk is unweighted or weighted
Returns:

List of lists of node ids for each of the random walks
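
A sketch mirroring the uniform walker, with the two bias parameters made explicit (G as above):

from stellargraph.data import BiasedRandomWalk

walker = BiasedRandomWalk(G)
# p < 1 makes returning to the previous node more likely; q > 1 keeps walks local
walks = walker.run(nodes=list(G.nodes()), n=2, length=5, p=0.5, q=2.0)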

class stellargraph.data.UniformRandomMetaPathWalk(graph, graph_schema=None, seed=None)[source]

For heterogeneous graphs, it performs uniform random walks based on given metapaths.

run(nodes, n, length, metapaths, seed=None)[source]

Performs metapath-driven uniform random walks on heterogeneous graphs.

Parameters:
  • nodes (list) – The root nodes as a list of node IDs
  • n (int) – Total number of random walks per root node
  • length (int) – Maximum length of each random walk
  • metapaths (list of list) – List of lists of node labels that specify a metapath schema, e.g., [[‘Author’, ‘Paper’, ‘Author’], [‘Author’, ‘Paper’, ‘Venue’, ‘Paper’, ‘Author’]] specifies two metapath schemas of length 3 and 5 respectively.
  • seed (int, optional) – Random number generator seed; default is None
Returns:

List of lists of node ids for each of the random walks generated
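
A sketch assuming a heterogeneous graph het_graph with "Author" and "Paper" node types, and author_ids a list of author node IDs (both hypothetical):

from stellargraph.data import UniformRandomMetaPathWalk

walker = UniformRandomMetaPathWalk(het_graph)
walks = walker.run(
    nodes=author_ids, n=2, length=5, metapaths=[["Author", "Paper", "Author"]]
)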

class stellargraph.data.SampledBreadthFirstWalk(graph, graph_schema=None, seed=None)[source]

Breadth First Walk that generates a sampled number of paths from a starting node. It can be used to extract a random sub-graph starting from a set of initial nodes.

run(nodes, n_size, n=1, seed=None)[source]

Performs a sampled breadth-first walk starting from the root nodes.

Parameters:
  • nodes (list) – A list of root node ids such that from each node n BFWs will be generated up to the given depth d.
  • n_size (list of int) – The number of neighbouring nodes to expand at each depth of the walk. Sampling of neighbours with replacement is always used regardless of the node degree and number of neighbours requested.
  • n (int, default 1) – Number of walks per node id.
  • seed (int, optional) – Random number generator seed; default is None
Returns:

A list of lists such that each list element is a sequence of ids corresponding to a BFW.

class stellargraph.data.SampledHeterogeneousBreadthFirstWalk(graph, graph_schema=None, seed=None)[source]

Breadth First Walk for heterogeneous graphs that generates a sampled number of paths from a starting node. It can be used to extract a random sub-graph starting from a set of initial nodes.

run(nodes, n_size, n=1, seed=None)[source]

Performs a sampled breadth-first walk starting from the root nodes.

Parameters:
  • nodes (list) – A list of root node ids such that from each node n BFWs will be generated with the number of samples per hop specified in n_size.
  • n_size (list of int) – The number of neighbouring nodes to expand at each depth of the walk. Sampling of neighbours with replacement is always used regardless of the node degree and number of neighbours requested.
  • n (int, default 1) – Number of walks per node id.
  • seed (int, optional) – Random number generator seed; default is None
Returns:

A list of lists such that each list element is a sequence of ids corresponding to a sampled Heterogeneous BFW.

class stellargraph.data.TemporalRandomWalk(graph, graph_schema=None, seed=None)[source]

Warning

TemporalRandomWalk is experimental: requires more thorough testing and documentation (see: #827, #828, #832). It may be difficult to use and may have major changes at any time.

Performs temporal random walks on the given graph. The graph should contain numerical edge weights that correspond to the time at which the edge was created. Exact units are not relevant for the algorithm, only the relative differences (e.g. seconds, days, etc).

run(num_cw, cw_size, max_walk_length=80, initial_edge_bias=None, walk_bias=None, p_walk_success_threshold=0.01, seed=None)[source]

Perform a time respecting random walk starting from the root nodes.

Parameters:
  • num_cw (int) – Total number of context windows to generate. For comparable results to most other random walks, this should be a multiple of the number of nodes in the graph.
  • cw_size (int) – Size of context window. Also used as the minimum walk length, since a walk must generate at least 1 context window for it to be useful.
  • max_walk_length (int) – Maximum length of each random walk. Should be greater than or equal to the context window size.
  • initial_edge_bias (str, optional) – Distribution to use when choosing a random initial temporal edge to start from. Available options are: None (default), where the initial edge is picked from a uniform distribution; and “exponential”, which is heavily biased towards more recent edges.
  • walk_bias (str, optional) – Distribution to use when choosing a random neighbour to walk through. Available options are: None (default), where neighbours are picked from a uniform distribution; and “exponential”, an exponentially decaying probability that biases walks towards shorter time gaps.
  • p_walk_success_threshold (float) – Lower bound for the proportion of successful (i.e. longer than minimum length) walks. If the 95th percentile of the estimated proportion is less than the provided threshold, a RuntimeError will be raised. The default value of 0.01 means an error is raised if less than 1% of the attempted random walks are successful. This parameter exists to catch any potential situation where too many unsuccessful walks can cause an infinite or very slow loop.
  • seed (int, optional) – Random number generator seed; default is None
Returns:

List of lists of node ids for each of the random walks
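
A sketch assuming temporal_graph is a StellarGraph whose edge weights hold timestamps (a hypothetical variable):

from stellargraph.data import TemporalRandomWalk

walker = TemporalRandomWalk(temporal_graph)
walks = walker.run(
    num_cw=10 * temporal_graph.number_of_nodes(),  # a multiple of the node count
    cw_size=5,
    max_walk_length=80,
    walk_bias="exponential",
)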

class stellargraph.data.UnsupervisedSampler(G, nodes=None, length=2, number_of_walks=1, seed=None)[source]

The UnsupervisedSampler is responsible for sampling walks in the given graph and returning positive and negative samples w.r.t. those walks, on demand.

The positive samples are all the (target, context) pairs from the walks and the negative samples are contexts generated for each target based on a sampling distribution.

Currently only uniform random walks are performed, other walk strategies (such as second order walks) will be enabled in the future.

Parameters:
  • G (StellarGraph) – A stellargraph with features.
  • nodes (optional, iterable) – If not provided, all nodes in the graph are used.
  • length (int) – An integer giving the length of the walks. Length must be at least 2.
  • number_of_walks (int) – Number of walks from each root node.
run(batch_size)[source]

This method returns a batch_size number of positive and negative samples from the graph. A random walk is generated from each root node and transformed into positive context pairs; the same number of negative pairs are generated from a global node sampling distribution. The resulting list of context pairs is shuffled and converted to batches of size batch_size.

Currently the global node sampling distribution for the negative pairs is the degree distribution to the 3/4 power. This is the same used in node2vec (https://snap.stanford.edu/node2vec/).

Parameters:batch_size (int) – The number of samples to generate for each batch. This must be an even number.
Returns:List of batches, where each batch is a tuple of (list of context pairs, list of labels)
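
A minimal sketch (G as above; note that batch_size must be even):

from stellargraph.data import UnsupervisedSampler

sampler = UnsupervisedSampler(G, nodes=list(G.nodes()), length=5, number_of_walks=2)
batches = sampler.run(batch_size=10)  # list of (context pairs, labels) tuples
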
class stellargraph.data.EdgeSplitter(g, g_master=None)[source]

Class for generating training and test data for link prediction in graphs.

The class requires as input a graph (in networkx format) and a percentage, as a function of the total number of edges in the given graph, of positive and negative edges to sample. For heterogeneous graphs, the caller can also specify the type of edge and an edge property to split on. In the latter case, only a date property can be used, and it must be in the format dd/mm/yyyy; a date to use as a threshold value must also be given, so that only edges dated after the threshold are sampled. This affects only the sampling of positive edges.

Negative edges are sampled at random (for the ‘global’ method) by uniformly selecting two nodes in the graph and then checking whether these nodes are connected. If not, the pair of nodes is considered a negative sample; otherwise, it is discarded and the process repeats. Alternatively (for the ‘local’ method), negative edges are sampled using a depth-first search from a source node (selected uniformly at random from all nodes in the graph), at a distance sampled according to a given set of probabilities.

Positive edges can be sampled so that when they are subsequently removed from the graph, the reduced graph is either guaranteed, or not guaranteed, to remain connected. In the former case, graph connectivity is maintained by first calculating the minimum spanning tree. The edges that belong to the minimum spanning tree are protected from removal, and therefore cannot be sampled for the training set. The edges that do not belong to the minimum spanning tree are then sampled uniformly at random, until the required number of positive edges have been sampled for the training set. In the latter case, when connectedness of the reduced graph is not guaranteed, positive edges are sampled uniformly at random from all the edges in the graph, regardless of whether they belong to the spanning tree (which is not calculated in this case).

Parameters:
  • g – <StellarGraph or networkx object> The graph to sample edges from.
  • g_master – <StellarGraph or networkx object> The graph representing the original dataset and a superset of the graph g. If it is not None, then when positive and negative edges are sampled, care is taken to make sure that a true positive edge is not sampled as a negative edge.
train_test_split(p=0.5, method='global', probs=None, keep_connected=False, edge_label=None, edge_attribute_label=None, edge_attribute_threshold=None, attribute_is_datetime=None, seed=None)[source]

Generates positive and negative edges and a graph that has the same nodes as the original but the positive edges removed. It can be used to generate data from homogeneous and heterogeneous graphs.

For heterogeneous graphs, positive and negative examples can be generated based on specified edge type or edge type and edge property given a threshold value for the latter.

Parameters:
  • p – <float> Percent of edges to be returned. It is calculated as a function of the total number of edges in the original graph. If the graph is heterogeneous, the percentage is calculated as a function of the total number of edges that satisfy the edge_label, edge_attribute_label and edge_attribute_threshold values given.
  • method – <str> How negative edges are sampled. If ‘global’, then nodes are selected uniformly at random. If ‘local’, then the first node is sampled uniformly from all nodes in the graph, but the second node is chosen to be from the former’s local neighbourhood.
  • probs – <list> The probabilities for sampling a node that is k-hops from the source node, e.g., [0.25, 0.75] means that there is a 0.25 probability that the target node will be 1 hop away from the source node and 0.75 that it will be 2 hops away. This only affects sampling of negative edges if method is set to ‘local’.
  • keep_connected – <True or False> If True then when positive edges are removed care is taken that the reduced graph remains connected. If False, positive edges are removed without guaranteeing the connectivity of the reduced graph.
  • edge_label – <str> If splitting based on edge type, then this parameter specifies the key for the type of edges to split on.
  • edge_attribute_label – <str> The label for the edge attribute to split on.
  • edge_attribute_threshold – <str> The threshold value applied to the edge attribute when sampling positive examples.
  • attribute_is_datetime – <boolean> Specifies if edge attribute is datetime or not.
  • seed – <int> seed for random number generator, positive int or 0
Returns:

The reduced graph (positive edges removed) and the edge data as 2 numpy arrays, the first array of dimensionality Nx2 (where N is the number of edges) holding the node ids for the edges and the second of dimensionality Nx1 holding the edge labels, 0 for negative and 1 for positive examples.
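
A typical usage sketch for homogeneous link prediction (G as above):

from stellargraph.data import EdgeSplitter

splitter = EdgeSplitter(G)
G_reduced, edge_ids, edge_labels = splitter.train_test_split(
    p=0.1, method="global", keep_connected=True
)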

stellargraph.data.from_epgm(epgm_location, dataset_name=None, directed=False)[source]

Imports a graph stored in EPGM format to a NetworkX object

Parameters:
  • epgm_location (str) – The directory containing the EPGM data
  • dataset_name (str) – The name of the dataset to import
  • directed (bool) – If True, load as a directed graph, otherwise load as an undirected graph
Returns:

A NetworkX graph containing the data for the EPGM-stored graph.

Generators

The mapper package contains classes and functions to map graph data to neural network inputs

class stellargraph.mapper.FullBatchNodeGenerator(G, name=None, method='gcn', k=1, sparse=True, transform=None, teleport_probability=0.1)[source]

A data generator for use with full-batch models on homogeneous graphs, e.g., GCN, GAT, SGC. The supplied graph G should be a StellarGraph object that is ready for machine learning. Currently the model requires node features to be available for all nodes in the graph.

Use the flow() method supplying the nodes and (optionally) targets to get an object that can be used as a Keras data generator.

This generator will supply the features array and the adjacency matrix to a full-batch Keras graph ML model. There is a choice to supply either a sparse adjacency matrix (the default) or a dense adjacency matrix, with the sparse argument.

For these algorithms the adjacency matrix requires pre-processing and the ‘method’ option should be specified with the correct pre-processing for each algorithm. The options are as follows:

  • method='gcn': Normalizes the adjacency matrix for the GCN algorithm. This implements the linearized convolution of Eq. 8 in [1].
  • method='chebyshev': Implements the approximate spectral convolution operator by implementing the k-th order Chebyshev expansion of Eq. 5 in [1].
  • method='sgc': This replicates the k-th order smoothed adjacency matrix to implement the Simplified Graph Convolutions of Eq. 8 in [2].
  • method='self_loops' or method='gat': Simply sets the diagonal elements of the adjacency matrix to one, effectively adding self-loops to the graph. This is used by the GAT algorithm of [3].
  • method='ppnp': Calculates the personalized PageRank matrix of Eq. 2 in [4].

[1] Kipf and Welling, 2017. [2] Wu et al., 2019. [3] Veličković et al., 2018. [4] Klicpera et al., 2018.

Example:

G_generator = FullBatchNodeGenerator(G)
train_flow = G_generator.flow(node_ids, node_targets)

# Fetch the data from train_flow, and feed into a Keras model:
x_inputs, y_train = train_flow[0]
model.fit(x=x_inputs, y=y_train)

# Alternatively, use the generator itself with model.fit_generator:
model.fit_generator(train_flow, epochs=num_epochs)
For more information, please see the GCN/GAT, PPNP/APPNP and SGC demos:
https://github.com/stellargraph/stellargraph/blob/master/demos/
Parameters:
  • G (StellarGraphBase) – a machine-learning StellarGraph-type graph
  • name (str) – an optional name of the generator
  • method (str) – Method to pre-process adjacency matrix. One of ‘gcn’ (default), ‘chebyshev’, ‘sgc’, ‘self_loops’, or ‘none’.
  • k (None or int) – This is the smoothing order for the ‘sgc’ method or the Chebyshev series order for the ‘chebyshev’ method. In both cases this should be a positive integer.
  • transform (callable) – an optional function to apply to the features and adjacency matrix; the function takes (features, Aadj) as arguments.
  • sparse (bool) – If True (default) a sparse adjacency matrix is used, if False a dense adjacency matrix is used.
  • teleport_probability (float) – teleport probability between 0.0 and 1.0, i.e. the “probability” of returning to the starting node in the propagation step, as in [4].
flow(node_ids, targets=None)[source]

Creates a generator/sequence object for training or evaluation with the supplied node ids and numeric targets.

Parameters:
  • node_ids – an iterable of node ids for the nodes of interest (e.g., training, validation, or test set nodes)
  • targets – a 1D or 2D array of numeric node targets with shape (len(node_ids),) or (len(node_ids), target_size)
Returns:

A NodeSequence object to use with GCN or GAT models in Keras methods fit_generator(), evaluate_generator(), and predict_generator()

class stellargraph.mapper.FullBatchLinkGenerator(G, name=None, method='gcn', k=1, sparse=True, transform=None, teleport_probability=0.1)[source]

A data generator for use with full-batch models on homogeneous graphs, e.g., GCN, GAT, SGC. The supplied graph G should be a StellarGraph object that is ready for machine learning. Currently the model requires node features to be available for all nodes in the graph.

Use the flow() method supplying the links as a list of (src, dst) tuples of node IDs and (optionally) targets.

This generator will supply the features array and the adjacency matrix to a full-batch Keras graph ML model. There is a choice to supply either a sparse adjacency matrix (the default) or a dense adjacency matrix, with the sparse argument.

For these algorithms the adjacency matrix requires pre-processing and the ‘method’ option should be specified with the correct pre-processing for each algorithm. The options are as follows:

  • method='gcn': Normalizes the adjacency matrix for the GCN algorithm. This implements the linearized convolution of Eq. 8 in [1].
  • method='chebyshev': Implements the approximate spectral convolution operator by implementing the k-th order Chebyshev expansion of Eq. 5 in [1].
  • method='sgc': This replicates the k-th order smoothed adjacency matrix to implement the Simplified Graph Convolutions of Eq. 8 in [2].
  • method='self_loops' or method='gat': Simply sets the diagonal elements of the adjacency matrix to one, effectively adding self-loops to the graph. This is used by the GAT algorithm of [3].
  • method='ppnp': Calculates the personalized PageRank matrix of Eq. 2 in [4].

[1] Kipf and Welling, 2017. [2] Wu et al., 2019. [3] Veličković et al., 2018. [4] Klicpera et al., 2018.

Example:

G_generator = FullBatchLinkGenerator(G)
train_flow = G_generator.flow([(1,2), (3,4), (5,6)], [0, 1, 1])

# Fetch the data from train_flow, and feed into a Keras model:
x_inputs, y_train = train_flow[0]
model.fit(x=x_inputs, y=y_train)

# Alternatively, use the generator itself with model.fit_generator:
model.fit_generator(train_flow, epochs=num_epochs)
For more information, please see the GCN, GAT, PPNP/APPNP and SGC demos:
https://github.com/stellargraph/stellargraph/blob/master/demos/
Parameters:
  • G (StellarGraphBase) – a machine-learning StellarGraph-type graph
  • name (str) – an optional name of the generator
  • method (str) – Method to pre-process adjacency matrix. One of ‘gcn’ (default), ‘chebyshev’, ‘sgc’, ‘self_loops’, or ‘none’.
  • k (None or int) – This is the smoothing order for the ‘sgc’ method or the Chebyshev series order for the ‘chebyshev’ method. In both cases this should be a positive integer.
  • transform (callable) – an optional function to apply to the features and adjacency matrix; the function takes (features, Aadj) as arguments.
  • sparse (bool) – If True (default) a sparse adjacency matrix is used, if False a dense adjacency matrix is used.
  • teleport_probability (float) – teleport probability between 0.0 and 1.0, i.e. the “probability” of returning to the starting node in the propagation step, as in [4].
flow(link_ids, targets=None)[source]

Creates a generator/sequence object for training or evaluation with the supplied link ids and numeric targets.

Parameters:
  • link_ids – an iterable of link ids specified as tuples of node ids or an array of shape (N_links, 2) specifying the links.
  • targets – a 1D or 2D array of numeric targets with shape (len(link_ids),) or (len(link_ids), target_size)
Returns:

A NodeSequence object to use with GCN or GAT models in Keras methods fit_generator(), evaluate_generator(), and predict_generator()

class stellargraph.mapper.GraphSAGENodeGenerator(G, batch_size, num_samples, seed=None, name=None)[source]

A data generator for node prediction with Homogeneous GraphSAGE models

At minimum, supply the StellarGraph, the batch size, and the number of node samples for each layer of the GraphSAGE model.

The supplied graph should be a StellarGraph object that is ready for machine learning. Currently the model requires node features for all nodes in the graph.

Use the flow() method supplying the nodes and (optionally) targets to get an object that can be used as a Keras data generator.

Example:

G_generator = GraphSAGENodeGenerator(G, 50, [10,10])
train_data_gen = G_generator.flow(train_node_ids, train_node_labels)
test_data_gen = G_generator.flow(test_node_ids)
Parameters:
  • G (StellarGraph) – The machine-learning ready graph.
  • batch_size (int) – Size of batch to return.
  • num_samples (list) – The number of samples per layer (hop) to take.
  • seed (int) – [Optional] Random seed for the node sampler.
sample_features(head_nodes, batch_num)[source]

Sample neighbours recursively from the head nodes, collect the features of the sampled nodes, and return these as a list of feature arrays for the GraphSAGE algorithm.

Parameters:
  • head_nodes – An iterable of head nodes to perform sampling on.
  • batch_num (int) – Batch number
Returns:

A list of the same length as num_samples of collected features from the sampled nodes of shape: (len(head_nodes), num_sampled_at_layer, feature_size) where num_sampled_at_layer is the cumulative product of num_samples for that layer.

class stellargraph.mapper.DirectedGraphSAGENodeGenerator(G, batch_size, in_samples, out_samples, seed=None, name=None)[source]

A data generator for node prediction with homogeneous GraphSAGE models on directed graphs.

At minimum, supply the StellarDiGraph, the batch size, and the number of node samples (separately for in-nodes and out-nodes) for each layer of the GraphSAGE model.

The supplied graph should be a StellarDiGraph object that is ready for machine learning. Currently the model requires node features for all nodes in the graph.

Use the flow() method supplying the nodes and (optionally) targets to get an object that can be used as a Keras data generator.

Example:

G_generator = DirectedGraphSAGENodeGenerator(G, 50, [10,5], [5,1])
train_data_gen = G_generator.flow(train_node_ids, train_node_labels)
test_data_gen = G_generator.flow(test_node_ids)
Parameters:
  • G (StellarDiGraph) – The machine-learning ready graph.
  • batch_size (int) – Size of batch to return.
  • in_samples (list) – The number of in-node samples per layer (hop) to take.
  • out_samples (list) – The number of out-node samples per layer (hop) to take.
  • seed (int) – [Optional] Random seed for the node sampler.
sample_features(head_nodes, batch_num)[source]

Sample neighbours recursively from the head nodes, collect the features of the sampled nodes, and return these as a list of feature arrays for the GraphSAGE algorithm.

Parameters:
  • head_nodes – An iterable of head nodes to perform sampling on.
  • batch_num (int) – Batch number
Returns:

A list of feature tensors from the sampled nodes at each layer, each of shape (len(head_nodes), num_sampled_at_layer, feature_size), where num_sampled_at_layer is the total number (cumulative product) of nodes sampled at the given number of hops from each head node, given the sequence of in/out directions.

class stellargraph.mapper.DirectedGraphSAGELinkGenerator(G, batch_size, in_samples, out_samples, seed=None, name=None)[source]

A data generator for link prediction with directed Homogeneous GraphSAGE models

At minimum, supply the StellarDiGraph, the batch size, and the number of node samples (separately for in-nodes and out-nodes) for each layer of the GraphSAGE model.

The supplied graph should be a StellarDiGraph object that is ready for machine learning. Currently the model requires node features for all nodes in the graph.

Use the flow() method supplying the nodes and (optionally) targets, or an UnsupervisedSampler instance that generates node samples on demand, to get an object that can be used as a Keras data generator.

Example:

G_generator = DirectedGraphSAGELinkGenerator(G, 50, [10,10], [10,10])
train_data_gen = G_generator.flow(edge_ids)
Parameters:
  • G (StellarGraph) – A machine-learning ready graph.
  • batch_size (int) – Size of batch of links to return.
  • in_samples (list) – The number of in-node samples per layer (hop) to take.
  • out_samples (list) – The number of out-node samples per layer (hop) to take.
  • seed (int or str) – Random seed for the sampling methods.
  • name (str, optional) – Name of the generator.
sample_features(head_links, batch_num)[source]

Sample neighbours recursively from the head links, collect the features of the sampled nodes, and return these as a list of feature arrays for the GraphSAGE algorithm.

Parameters:head_links – An iterable of head links to perform sampling on.
Returns:A list of feature tensors from the sampled nodes at each layer, each of shape (len(head_nodes), num_sampled_at_layer, feature_size), where num_sampled_at_layer is the total number (cumulative product) of nodes sampled at the given number of hops from each head node, given the sequence of in/out directions.
class stellargraph.mapper.ClusterNodeGenerator(G, clusters=1, q=1, lam=0.1, name=None)[source]

A data generator for use with ClusterGCN models on homogeneous graphs, [1].

The supplied graph G should be a StellarGraph object that is ready for machine learning. Currently the model requires node features to be available for all nodes in the graph. Use the flow() method supplying the nodes and (optionally) targets to get an object that can be used as a Keras data generator.

This generator will supply the features array and the adjacency matrix to a mini-batch Keras graph ML model.

[1] W. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, C. Hsieh, 2019.

For more information, please see the ClusterGCN demo:
https://github.com/stellargraph/stellargraph/blob/master/demos/
Parameters:
  • G (StellarGraph) – a machine-learning StellarGraph-type graph
  • clusters (int or list) – If int, then it indicates the number of clusters (the default is 1, i.e. the whole graph is a single cluster). If clusters is greater than 1, then nodes are uniformly at random assigned to a cluster. If list, then it should be a list of lists of node IDs such that each list corresponds to a cluster of nodes in G. The clusters should be non-overlapping.
  • q (float) – The number of clusters to combine for each mini-batch. The default is 1.
  • lam (float) – The mixture coefficient for adjacency matrix normalisation.
  • name (str) – an optional name of the generator
flow(node_ids, targets=None, name=None)[source]

Creates a generator/sequence object for training, evaluation, or prediction with the supplied node ids and numeric targets.

Parameters:
  • node_ids (iterable) – an iterable of node ids for the nodes of interest (e.g., training, validation, or test set nodes)
  • targets (2d array, optional) – a 2D array of numeric node targets with shape (len(node_ids), target_size)
  • name (str, optional) – An optional name for the returned generator object.
Returns:

A ClusterNodeSequence object to use with ClusterGCN in Keras methods fit_generator(), evaluate_generator(), and predict_generator()
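
A minimal sketch; train_node_ids, train_targets, model and num_epochs are assumed to exist:

from stellargraph.mapper import ClusterNodeGenerator

generator = ClusterNodeGenerator(G, clusters=10, q=2)
train_gen = generator.flow(train_node_ids, train_targets, name="train")
model.fit_generator(train_gen, epochs=num_epochs)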

class stellargraph.mapper.GraphSAGELinkGenerator(G, batch_size, num_samples, seed=None, name=None)[source]

A data generator for link prediction with Homogeneous GraphSAGE models

At minimum, supply the StellarGraph, the batch size, and the number of node samples for each layer of the GraphSAGE model.

The supplied graph should be a StellarGraph object that is ready for machine learning. Currently the model requires node features for all nodes in the graph.

Use the flow() method supplying the nodes and (optionally) targets, or an UnsupervisedSampler instance that generates node samples on demand, to get an object that can be used as a Keras data generator.

Example:

G_generator = GraphSAGELinkGenerator(G, 50, [10,10])
train_data_gen = G_generator.flow(edge_ids)
Parameters:
  • G (StellarGraph) – A machine-learning ready graph.
  • batch_size (int) – Size of batch of links to return.
  • num_samples (list) – List of number of neighbour node samples per GraphSAGE layer (hop) to take.
  • seed (int or str) – Random seed for the sampling methods.
sample_features(head_links, batch_num)[source]

Sample neighbours recursively from the head nodes, collect the features of the sampled nodes, and return these as a list of feature arrays for the GraphSAGE algorithm.

Parameters:
  • head_links – An iterable of edges to perform sampling for.
  • batch_num (int) – Batch number
Returns:

A list of the same length as num_samples of collected features from the sampled nodes of shape: (len(head_nodes), num_sampled_at_layer, feature_size) where num_sampled_at_layer is the cumulative product of num_samples for that layer.

class stellargraph.mapper.HinSAGENodeGenerator(G, batch_size, num_samples, head_node_type, schema=None, seed=None, name=None)[source]

Keras-compatible data mapper for Heterogeneous GraphSAGE (HinSAGE)

At minimum, supply the StellarGraph, the batch size, and the number of node samples for each layer of the HinSAGE model.

The supplied graph should be a StellarGraph object that is ready for machine learning. Currently the model requires node features for all nodes in the graph.

Use the flow() method supplying the nodes and (optionally) targets to get an object that can be used as a Keras data generator.

Note that the shuffle argument should be True for training and False for prediction.

Parameters:
  • G (StellarGraph) – The machine-learning ready graph
  • batch_size (int) – Size of batch to return
  • num_samples (list) – The number of samples per layer (hop) to take
  • head_node_type (str) – The node type that will be given to the generator using the flow method; the model will expect this node type.
  • schema (GraphSchema, optional) – Graph schema for G.
  • seed (int, optional) – Random seed for the node sampler

Example:

# head_node_type is required; "paper" is an illustrative node type
G_generator = HinSAGENodeGenerator(G, 50, [10,10], head_node_type="paper")
train_data_gen = G_generator.flow(train_node_ids, train_node_labels)
test_data_gen = G_generator.flow(test_node_ids)
sample_features(head_nodes, batch_num)[source]

Sample neighbours recursively from the head nodes, collect the features of the sampled nodes, and return these as a list of feature arrays for the GraphSAGE algorithm.

Parameters:
  • head_nodes – An iterable of head nodes to perform sampling on.
  • batch_num (int) – Batch number
Returns:

A list of the same length as num_samples of collected features from the sampled nodes of shape: (len(head_nodes), num_sampled_at_layer, feature_size) where num_sampled_at_layer is the cumulative product of num_samples for that layer.

class stellargraph.mapper.HinSAGELinkGenerator(G, batch_size, num_samples, head_node_types, schema=None, seed=None, name=None)[source]

A data generator for link prediction with Heterogeneous HinSAGE models

At minimum, supply the StellarGraph, the batch size, and the number of node samples for each layer of the GraphSAGE model.

The supplied graph should be a StellarGraph object that is ready for machine learning. Currently the model requires node features for all nodes in the graph.

Use the flow() method supplying the nodes and (optionally) targets to get an object that can be used as a Keras data generator.

The generator should be given the (src, dst) node types using the head_node_types argument. Note that:

  • It’s possible to do link prediction on a graph where that link type is completely removed from the graph (e.g., “same_as” links in ER)
Parameters:
  • G (StellarGraph) – A machine-learning ready graph.
  • batch_size (int) – Size of batch of links to return.
  • num_samples (list) – List of number of neighbour node samples per GraphSAGE layer (hop) to take.
  • head_node_types (list) – List of the types (str) of the two head nodes forming the node pair.
  • seed (int or str, optional) – Random seed for the sampling methods.

Example:

# head_node_types is required; "user" and "item" are illustrative types
G_generator = HinSAGELinkGenerator(G, 50, [10,10], head_node_types=["user", "item"])
data_gen = G_generator.flow(edge_ids)
sample_features(head_links, batch_num)[source]

Sample neighbours recursively from the head nodes, collect the features of the sampled nodes, and return these as a list of feature arrays for the GraphSAGE algorithm.

Parameters:
  • head_links (list) – An iterable of edges to perform sampling for.
  • batch_num (int) – Batch number
Returns:

A list of the same length as num_samples of collected features from the sampled nodes of shape: (len(head_nodes), num_sampled_at_layer, feature_size) where num_sampled_at_layer is the cumulative product of num_samples for that layer.

class stellargraph.mapper.Attri2VecNodeGenerator(G, batch_size, name=None)[source]

A node feature generator for node representation prediction with the attri2vec model.

At minimum, supply the StellarGraph and the batch size.

The supplied graph should be a StellarGraph object that is ready for machine learning. Currently the model requires node features for all nodes in the graph.

Use the flow() method supplying the nodes to get an object that can be used as a Keras data generator.

Example:

G_generator = Attri2VecNodeGenerator(G, 50)
data_gen = G_generator.flow(node_ids)
Parameters:
  • G (StellarGraph) – The machine-learning ready graph.
  • batch_size (int) – Size of batch to return.
  • name (str or None) – Name of the generator (optional).
flow(node_ids)[source]

Creates a generator/sequence object for node representation prediction with the supplied node ids.

The node IDs are the nodes to perform inference on: the embeddings calculated for these nodes are passed to the downstream task. These may be all of the nodes in the graph, or a subset.

Parameters:node_ids – an iterable of node IDs.
Returns:A NodeSequence object to use with the Attri2Vec model in the Keras method predict_generator.
flow_from_dataframe(node_ids)[source]

Creates a generator/sequence object for node representation prediction with the supplied node ids.

Parameters:node_ids – a Pandas DataFrame of node_ids.
Returns:A NodeSequence object to use with the Attri2Vec model in the Keras method predict_generator.
sample_features(head_nodes, batch_num)[source]

Sample content features of the head nodes, and return these as a list of feature arrays for the attri2vec algorithm.

Parameters:
  • head_nodes – An iterable of head nodes to perform sampling on.
  • batch_num (int) – Batch number
Returns:

A list of feature arrays, with each element being the feature of a head node.

class stellargraph.mapper.Attri2VecLinkGenerator(G, batch_size, name=None)[source]

A data generator for context node prediction with the attri2vec model.

At minimum, supply the StellarGraph and the batch size.

The supplied graph should be a StellarGraph object that is ready for machine learning. Currently the model requires node features for all nodes in the graph.

Use the flow() method supplying the nodes and targets, or an UnsupervisedSampler instance that generates node samples on demand, to get an object that can be used as a Keras data generator.

Example:

G_generator = Attri2VecLinkGenerator(G, 50)
train_data_gen = G_generator.flow(edge_ids, edge_labels)
Parameters:
  • G (StellarGraph) – A machine-learning ready graph.
  • batch_size (int) – Size of batch of links to return.
  • name (str, optional) – Name of the generator.
sample_features(head_links, batch_num)[source]

Sample content features of the target nodes and the ids of the context nodes and return these as a list of feature arrays for the attri2vec algorithm.

Parameters:
  • head_links – An iterable of edges to perform sampling for.
  • batch_num (int) – Batch number
Returns:

A list of feature arrays, with each element being the feature of a target node and the id of the corresponding context node.

class stellargraph.mapper.RelationalFullBatchNodeGenerator(G, name=None, sparse=True, transform=None)[source]

A data generator for use with full-batch models on relational graphs e.g. RGCN.

The supplied graph G should be a StellarGraph or StellarDiGraph object that is ready for machine learning. Currently the model requires node features to be available for all nodes in the graph. Use the flow() method supplying the nodes and (optionally) targets to get an object that can be used as a Keras data generator.

This generator will supply the features array and the adjacency matrix to a full-batch Keras graph ML model. There is a choice to supply either a list of sparse adjacency matrices (the default) or a list of dense adjacency matrices, with the sparse argument.

For these algorithms the adjacency matrices require pre-processing and the default option is to normalize each row of the adjacency matrix so that it sums to 1. For customization a transformation (callable) can be passed that operates on the node features and adjacency matrix.

Example:

G_generator = RelationalFullBatchNodeGenerator(G)
train_data_gen = G_generator.flow(node_ids, node_targets)

# Fetch the data from train_data_gen, and feed into a Keras model:
# Alternatively, use the generator itself with model.fit_generator:
model.fit_generator(train_data_gen, epochs=num_epochs, ...)
Parameters:
  • G (StellarGraph) – a machine-learning StellarGraph-type graph
  • name (str) – an optional name of the generator
  • transform (callable) – an optional function to apply to the features and adjacency matrix; the function takes (features, Aadj) as arguments.
  • sparse (bool) – If True (default) a list of sparse adjacency matrices is used, if False a list of dense adjacency matrices is used.
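As an illustration of the transform argument, a minimal sketch of a custom callable that L2-normalises each node's feature vector and passes the adjacency matrices through unchanged (the function name and behaviour are illustrative only):

import numpy as np

def l2_feature_transform(features, Aadj):
    # normalise each node's feature vector; leave the adjacency unchanged
    norms = np.maximum(np.linalg.norm(features, axis=1, keepdims=True), 1e-12)
    return features / norms, Aadj

generator = RelationalFullBatchNodeGenerator(G, transform=l2_feature_transform)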
flow(node_ids, targets=None)[source]

Creates a generator/sequence object for training or evaluation with the supplied node ids and numeric targets.

Parameters:
  • node_ids – an iterable of node ids for the nodes of interest (e.g., training, validation, or test set nodes)
  • targets – a 2D array of numeric node targets with shape (len(node_ids), target_size)
Returns:

A NodeSequence object to use with RGCN models in Keras methods fit_generator(), evaluate_generator(), and predict_generator()

class stellargraph.mapper.AdjacencyPowerGenerator(G, num_powers=10)[source]

Warning

AdjacencyPowerGenerator is experimental: lack of unit tests (see: #804). It may be difficult to use and may have major changes at any time.

A data generator for use with the Watch Your Step algorithm [1]. It calculates and returns the first num_powers powers of the adjacency matrix, row by row.

Parameters:
  • G (StellarGraph) – a machine-learning StellarGraph-type graph
  • num_powers (int) – the number of adjacency powers to calculate. Defaults to 10 as this value was found to perform well by the authors of the paper.
flow(batch_size, threads=1)[source]

Creates the tensorflow.data.Dataset object for training node embeddings from powers of the adjacency matrix.

Parameters:
  • batch_size (int) – the number of rows of the adjacency powers to include in each batch.
  • threads (int) – the number of threads to use for pre-processing of batches.
Returns:

A tensorflow.data.Dataset object for training node embeddings from powers of the adjacency matrix.
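For example, a minimal sketch of creating the training dataset (the batch size and thread count are illustrative):

generator = AdjacencyPowerGenerator(G, num_powers=10)
train_dataset = generator.flow(batch_size=10, threads=4)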

class stellargraph.mapper.GraphWaveGenerator(G, scales=(5, 10), degree=20)[source]

Implementation of the GraphWave structural embedding algorithm from the paper: “Learning Structural Node Embeddings via Diffusion Wavelets” (https://arxiv.org/pdf/1710.10321.pdf)

This class is minimally initialised with a StellarGraph object. Calling the flow method will return a tensorflow.data.Dataset that contains the GraphWave embeddings.

This implementation differs from the paper by removing the automatic method of calculating scales. This method was found to not work well in practice, and replicating the results of the paper requires manually specifying much larger scales than those automatically calculated.

Parameters:
  • G (StellarGraph) – the StellarGraph object.
  • scales (iterable of floats) – the wavelet scales to use. Smaller values embed smaller scale structural features, and larger values embed larger structural features.
  • degree – the degree of the Chebyshev polynomial to use. Higher degrees yield more accurate results but at a higher computational cost. According to [1], the default value of 20 is accurate enough for most applications.

[1] D. I. Shuman, P. Vandergheynst, and P. Frossard, “Chebyshev Polynomial Approximation for Distributed Signal Processing,” https://arxiv.org/abs/1105.1891

flow(node_ids, sample_points, batch_size, targets=None, shuffle=False, seed=None, repeat=False, num_parallel_calls=1)[source]

Creates a tensorflow.data.Dataset object of GraphWave embeddings.

The dimension of the embeddings is 2 * len(scales) * len(sample_points).

Parameters:
  • node_ids – an iterable of node ids for the nodes of interest (e.g., training, validation, or test set nodes)
  • sample_points – a 1D array of points at which to sample the characteristic function. This should be of the form: sample_points=np.linspace(0, max_val, number_of_samples) and is graph dependent.
  • batch_size (int) – the number of node embeddings to include in a batch.
  • targets – a 1D or 2D array of numeric node targets with shape (len(node_ids),) or (len(node_ids), target_size)
  • shuffle (bool) – indicates whether to shuffle the dataset after each epoch
  • seed (int,optional) – the random seed to use for shuffling the dataset
  • repeat (bool) – indicates whether iterating through the DataSet will continue infinitely or stop after one full pass.
  • num_parallel_calls (int) – number of threads to use.
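For example, a minimal sketch of computing GraphWave embeddings for all nodes (the sample_points range is graph dependent; the values here are illustrative):

import numpy as np

generator = GraphWaveGenerator(G, scales=(5, 10), degree=20)
embeddings_dataset = generator.flow(
    node_ids=G.nodes(),
    sample_points=np.linspace(0, 100, 25),  # illustrative sampling range
    batch_size=10,
)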

GraphSAGE model

GraphSAGE and compatible aggregator layers

class stellargraph.layer.graphsage.GraphSAGE(layer_sizes, generator=None, aggregator=None, bias=True, dropout=0.0, normalize='l2', activations=None, **kwargs)[source]

Implementation of the GraphSAGE algorithm of Hamilton et al. with Keras layers. see: http://snap.stanford.edu/graphsage/

The model minimally requires specification of the layer sizes as a list of ints corresponding to the feature dimensions for each hidden layer and a generator object.

Different neighbour node aggregators can also be specified with the aggregator argument, which should be the aggregator class, either MeanAggregator, MeanPoolingAggregator, MaxPoolingAggregator, or AttentionalAggregator.

To use this class as a Keras model, the features and graph should be supplied using the GraphSAGENodeGenerator class for node inference models or the GraphSAGELinkGenerator class for link inference models. The .build method should be used to create a Keras model from the GraphSAGE object.

Examples

Creating a two-level GraphSAGE node classification model with hidden node sizes of 8 and 4 and 10 neighbours sampled at each layer, using an existing StellarGraph object G containing the graph and node features:

generator = GraphSAGENodeGenerator(G, batch_size=50, num_samples=[10,10])
graphsage = GraphSAGE(
        layer_sizes=[8, 4],
        activations=["relu","softmax"],
        generator=generator,
    )
x_inp, predictions = graphsage.build()

Note that passing a NodeSequence or LinkSequence object from the generator.flow(…) method as the generator= argument is now deprecated and the base generator object should be passed instead.

For more details, please see the GraphSAGE demo notebooks: demos/node-classification/graphsage/graphsage-cora-node-classification-example.ipynb

Parameters:
  • layer_sizes (list) – Hidden feature dimensions for each layer.
  • generator (GraphSAGENodeGenerator or GraphSAGELinkGenerator) – If specified n_samples and input_dim will be extracted from this object.
  • aggregator (class) – The GraphSAGE aggregator to use; defaults to the MeanAggregator.
  • bias (bool) – If True (default), a bias vector is learnt for each layer.
  • dropout (float) – The dropout supplied to each layer; defaults to no dropout.
  • normalize (str or None) – The normalization used after each layer; defaults to L2 normalization.
  • activations (list) – Activations applied to each layer’s output; defaults to [‘relu’, …, ‘relu’, ‘linear’].
  • kernel_regularizer (str or func) – The regulariser to use for the weights of each layer; defaults to None.
Note::

If a generator is not specified, then additional keyword arguments must be supplied:

  • n_samples (list): The number of samples per layer in the model.
  • input_dim (int): The dimensions of the node features used as input to the model.
  • multiplicity (int): The number of nodes to process at a time. This is 1 for a node inference and 2 for link inference (currently no others are supported).
build()[source]

Builds a GraphSAGE model for node or link/node pair prediction, depending on the generator used to construct the model (whether it is a node or link/node pair generator).

Returns:(x_inp, x_out), where x_inp is a list of Keras input tensors for the specified GraphSAGE model (either node or link/node pair model) and x_out contains model output tensor(s) of shape (batch_size, layer_sizes[-1])
Return type:tuple

link_model()[source]

Builds a GraphSAGE model for link or node pair prediction

Returns:(x_inp, x_out) where x_inp is a list of Keras input tensors for (src, dst) node pairs (where (src, dst) node inputs alternate), and x_out is a list of output tensors for (src, dst) nodes in the node pairs
Return type:tuple
node_model()[source]

Builds a GraphSAGE model for node prediction

Returns:(x_inp, x_out) where x_inp is a list of Keras input tensors for the specified GraphSAGE model and x_out is the Keras tensor for the GraphSAGE model output.
Return type:tuple
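Continuing the example above, a minimal sketch of turning the built tensors into a trainable Keras model (the optimiser, loss, and learning rate are illustrative choices):

from tensorflow.keras import Model, optimizers

model = Model(inputs=x_inp, outputs=predictions)
model.compile(
    optimizer=optimizers.Adam(lr=0.005),
    loss="categorical_crossentropy",
    metrics=["acc"],
)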
class stellargraph.layer.graphsage.DirectedGraphSAGE(layer_sizes, generator=None, aggregator=None, bias=True, dropout=0.0, normalize='l2', activations=None, **kwargs)[source]

Implementation of a directed version of the GraphSAGE algorithm of Hamilton et al. with Keras layers. see: http://snap.stanford.edu/graphsage/

The model minimally requires specification of the layer sizes as a list of ints corresponding to the feature dimensions for each hidden layer and a generator object.

Different neighbour node aggregators can also be specified with the aggregator argument, which should be the aggregator class, either MeanAggregator, MeanPoolingAggregator, MaxPoolingAggregator, or AttentionalAggregator.

Parameters:
  • layer_sizes (list) – Hidden feature dimensions for each layer.
  • generator (DirectedGraphSAGENodeGenerator) – If specified n_samples and input_dim will be extracted from this object.
  • aggregator (class, optional) – The GraphSAGE aggregator to use; defaults to the MeanAggregator.
  • bias (bool, optional) – If True (default), a bias vector is learnt for each layer.
  • dropout (float, optional) – The dropout supplied to each layer; defaults to no dropout.
  • normalize (str, optional) – The normalization used after each layer; defaults to L2 normalization.
  • kernel_regularizer (str or func, optional) – The regulariser to use for the weights of each layer; defaults to None.
Notes::

If a generator is not specified, then additional keyword arguments must be supplied:

  • in_samples (list): The number of in-node samples per layer in the model.
  • out_samples (list): The number of out-node samples per layer in the model.
  • input_dim (int): The dimensions of the node features used as input to the model.
  • multiplicity (int): The number of nodes to process at a time. This is 1 for a node inference and 2 for link inference (currently no others are supported).

Passing a NodeSequence or LinkSequence object from the generator.flow(…) method as the generator= argument is now deprecated and the base generator object should be passed instead.
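For example, a minimal sketch (assuming a DirectedGraphSAGENodeGenerator, as named in the Parameters above) of building a directed node model:

generator = DirectedGraphSAGENodeGenerator(
    G, batch_size=50, in_samples=[5, 5], out_samples=[5, 5]
)
digraphsage = DirectedGraphSAGE(layer_sizes=[32, 32], generator=generator)
x_inp, x_out = digraphsage.build()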

class stellargraph.layer.graphsage.MeanAggregator(output_dim: int = 0, bias: bool = False, act: Union[Callable, AnyStr] = 'relu', **kwargs)[source]

Mean Aggregator for GraphSAGE implemented with Keras base layer

Parameters:
  • output_dim (int) – Output dimension
  • bias (bool) – Optional bias
  • act (Callable or str) – name of the activation function to use (must be a Keras activation function), or alternatively, a TensorFlow operation.
group_aggregate(x_group, group_idx=0)[source]

Mean aggregator for tensors over the neighbourhood for each group.

Parameters:
  • x_group (tf.Tensor) – The input tensor representing the sampled neighbour nodes.
  • group_idx (int, optional) – Group index.
Returns:

A tensor aggregation of the input nodes features.

Return type:

[tf.Tensor]

class stellargraph.layer.graphsage.MeanPoolingAggregator(*args, **kwargs)[source]

Mean Pooling Aggregator for GraphSAGE implemented with Keras base layer

Implements the aggregator of Eq. (3) in Hamilton et al. (2017), with max pooling replaced with mean pooling

Parameters:
  • output_dim (int) – Output dimension
  • bias (bool) – Optional bias
  • act (Callable or str) – name of the activation function to use (must be a Keras activation function), or alternatively, a TensorFlow operation.
group_aggregate(x_group, group_idx=0)[source]

Aggregates the group tensors by mean-pooling of neighbours

Parameters:
  • x_group (tf.Tensor) – The input tensor representing the sampled neighbour nodes.
  • group_idx (int, optional) – Group index.
Returns:

A tensor aggregation of the input nodes features.

Return type:

[tf.Tensor]

class stellargraph.layer.graphsage.MaxPoolingAggregator(*args, **kwargs)[source]

Max Pooling Aggregator for GraphSAGE implemented with Keras base layer

Implements the aggregator of Eq. (3) in Hamilton et al. (2017)

Parameters:
  • output_dim (int) – Output dimension
  • bias (bool) – Optional bias
  • act (Callable or str) – name of the activation function to use (must be a Keras activation function), or alternatively, a TensorFlow operation.
group_aggregate(x_group, group_idx=0)[source]

Aggregates the group tensors by max-pooling of neighbours

Parameters:
  • x_group (tf.Tensor) – The input tensor representing the sampled neighbour nodes.
  • group_idx (int, optional) – Group index.
Returns:

A tensor aggregation of the input nodes features.

Return type:

[tf.Tensor]

class stellargraph.layer.graphsage.AttentionalAggregator(*args, **kwargs)[source]

Attentional Aggregator for GraphSAGE implemented with Keras base layer

Implements the aggregator of Veličković et al. “Graph Attention Networks” ICLR 2018

Parameters:
  • output_dim (int) – Output dimension
  • bias (bool) – Optional bias
  • act (Callable or str) – name of the activation function to use (must be a Keras activation function), or alternatively, a TensorFlow operation.
calculate_group_sizes(input_shape)[source]

Calculates the output size for each input group.

The results are stored in two variables:
  • self.included_weight_groups: if the corresponding entry is True then the input group is valid and should be used.
  • self.weight_sizes: the size of the output from this group.

The AttentionalAggregator is implemented to not use the first (head node) group. This makes the implementation different from other aggregators.

Parameters:input_shape (list of list of int) – Shape of input tensors for self and neighbour features
call(inputs, **kwargs)[source]

Apply aggregator on the input tensors, inputs

Parameters:inputs (List[Tensor]) – Tensors giving self and neighbour features:
  • x[0]: self Tensor (batch_size, head size, feature_size)
  • x[k>0]: group Tensors for neighbourhood (batch_size, head size, neighbours, feature_size)
Returns:Keras Tensor representing the aggregated embeddings in the input.

HinSAGE model

Heterogeneous GraphSAGE and compatible aggregator layers

class stellargraph.layer.hinsage.HinSAGE(layer_sizes, generator=None, aggregator=None, bias=True, dropout=0.0, normalize='l2', activations=None, **kwargs)[source]

Implementation of the GraphSAGE algorithm extended for heterogeneous graphs with Keras layers.

To use this class as a Keras model, the features and graph should be supplied using the HinSAGENodeGenerator class for node inference models or the HinSAGELinkGenerator class for link inference models. The .build method should be used to create a Keras model from the GraphSAGE object.

Currently the class supports node or link prediction models which are built depending on whether a HinSAGENodeGenerator or HinSAGELinkGenerator object is specified. The models are built for a single node or link type. For example if you have nodes of types ‘A’ and ‘B’ you can build a link model for only a single pair of node types, for example (‘A’, ‘B’), which should be specified in the HinSAGELinkGenerator.

If you feed links into the model that do not have these node types (in correct order) an error will be raised.

Examples

Creating a two-level HinSAGE node classification model on nodes of type ‘A’ with hidden node sizes of 8 and 4 and 10 neighbours sampled at each layer, using an existing StellarGraph object G containing the graph and node features:

generator = HinSAGENodeGenerator(
    G, batch_size=50, num_samples=[10,10], head_node_type='A'
    )
hinsage = HinSAGE(
        layer_sizes=[8, 4],
        activations=["relu","softmax"],
        generator=generator,
    )
x_inp, predictions = hinsage.build()

Creating a two-level HinSAGE link classification model on node pairs of type (‘A’, ‘B’) with hidden node sizes of 8 and 4 and 5 neighbours sampled at each layer:

generator = HinSAGELinkGenerator(
    G, batch_size=50, num_samples=[5,5], head_node_types=('A','B')
    )
hinsage = HinSAGE(
        layer_sizes=[8, 4],
        activations=["relu","softmax"],
        generator=generator,
    )
x_inp, predictions = hinsage.build()

Note that passing a NodeSequence or LinkSequence object from the generator.flow(…) method as the generator= argument is now deprecated and the base generator object should be passed instead.

For more details, please see the HinSAGE demo notebooks: demos/node-classification/hinsage/yelp-example.py

Parameters:
  • layer_sizes (list) – Hidden feature dimensions for each layer
  • generator (HinSAGENodeGenerator or HinSAGELinkGenerator) – If specified, required model arguments such as the number of samples will be taken from the generator object. See note below.
  • aggregator (HinSAGEAggregator) – The HinSAGE aggregator to use; defaults to the MeanHinAggregator.
  • bias (bool) – If True (default), a bias vector is learnt for each layer.
  • dropout (float) – The dropout supplied to each layer; defaults to no dropout.
  • normalize (str) – The normalization used after each layer; defaults to L2 normalization.
  • activations (list) – Activations applied to each layer’s output; defaults to [‘relu’, …, ‘relu’, ‘linear’].
  • kernel_regularizer (str or func) – The regulariser to use for the weights of each layer; defaults to None.
Note::

If a generator is not specified, then additional keyword arguments must be supplied:

  • n_samples (list): The number of samples per layer in the model.
  • input_neighbor_tree: A list of (node_type, [children]) tuples that specify the subtree to be created by the HinSAGE model.
  • input_dim (dict): The input dimensions for each node type as a dictionary of the form {node_type: feature_size}.
  • multiplicity (int): The number of nodes to process at a time. This is 1 for a node inference and 2 for link inference (currently no others are supported).
build()[source]

Builds a HinSAGE model for node or link/node pair prediction, depending on the generator used to construct the model (whether it is a node or link/node pair generator).

Returns:(x_inp, x_out), where x_inp is a list of Keras input tensors for the specified HinSAGE model (either node or link/node pair model) and x_out contains model output tensor(s) of shape (batch_size, layer_sizes[-1]).
Return type:tuple
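Continuing the link example above, a minimal sketch of completing a trainable Keras model from the built tensors (the optimiser and loss are illustrative choices for a binary link task):

from tensorflow.keras import Model

model = Model(inputs=x_inp, outputs=predictions)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["acc"])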
class stellargraph.layer.hinsage.MeanHinAggregator(output_dim: int = 0, bias: bool = False, act: Union[Callable, AnyStr] = 'relu', **kwargs)[source]

Mean Aggregator for HinSAGE implemented with Keras base layer

Parameters:
  • output_dim (int) – Output dimension
  • bias (bool) – Use bias in layer or not (Default False)
  • act (Callable or str) – name of the activation function to use (must be a Keras activation function), or alternatively, a TensorFlow operation.
  • kernel_initializer (str or func) – The initialiser to use for the weights; defaults to ‘glorot_uniform’.
  • kernel_regularizer (str or func) – The regulariser to use for the weights; defaults to None.
  • kernel_constraint (str or func) – The constraint to use for the weights; defaults to None.
  • bias_initializer (str or func) – The initialiser to use for the bias; defaults to ‘zeros’.
  • bias_regularizer (str or func) – The regulariser to use for the bias; defaults to None.
  • bias_constraint (str or func) – The constraint to use for the bias; defaults to None.
build(input_shape)[source]

Builds layer

Parameters:input_shape (list of list of int) – Shape of input per neighbour type.
call(x, **kwargs)[source]

Apply MeanAggregation on input tensors, x

Parameters:x

List of Keras Tensors with the following elements

  • x[0]: tensor of self features shape (n_batch, n_head, n_feat)
  • x[1+r]: tensors of neighbour features each of shape (n_batch, n_head, n_neighbour[r], n_feat[r])
Returns:Keras Tensor representing the aggregated embeddings in the input.
compute_output_shape(input_shape)[source]

Computes the output shape of the layer. Assumes that the layer will be built to match the input shape provided.

Parameters:input_shape (tuple of ints) – Shape tuples can include None for free dimensions, instead of an integer.
Returns:An output shape tuple.
get_config()[source]

Gets class configuration for Keras serialization

Attri2Vec model

attri2vec

class stellargraph.layer.attri2vec.Attri2Vec(layer_sizes, generator=None, bias=False, activation='sigmoid', normalize=None, **kwargs)[source]

Implementation of the attri2vec algorithm of Zhang et al. with Keras layers. see: https://arxiv.org/abs/1901.04095.

The model minimally requires specification of the layer sizes as a list of ints corresponding to the feature dimensions for each hidden layer and a generator object.

Parameters:
  • layer_sizes (list) – Hidden feature dimensions for each layer.
  • generator (Sequence) – A NodeSequence or LinkSequence.
  • input_dim (int) – The dimensions of the node features used as input to the model.
  • node_num (int) – The number of nodes in the given graph.
  • bias (bool) – If True a bias vector is learnt for each layer in the attri2vec model, default to False.
  • activation (str) – The activation function of each layer in the attri2vec model, which takes values from “linear”, “relu” and “sigmoid”(default).
  • normalize ("l2" or None) – The normalization used after each layer, default to None.
build()[source]

Builds an Attri2Vec model for node or link/node pair prediction, depending on the generator used to construct the model (whether it is a node or link/node pair generator).

Returns:(x_inp, x_out), where x_inp is a list of Keras input tensors for the specified Attri2Vec model (either node or link/node pair model) and x_out contains model output tensor(s) of shape (batch_size, layer_sizes[-1])
Return type:tuple

link_model()[source]

Builds an Attri2Vec model for context node prediction.

Returns:(x_inp, x_out) where x_inp is a list of Keras input tensors for (src, dst) nodes in the node pairs and x_out is a list of output tensors for (src, dst) nodes in the node pairs
Return type:tuple
node_model()[source]

Builds an Attri2Vec model for node representation prediction.

Returns:(x_inp, x_out) where x_inp is a Keras input tensor for the Attri2Vec model and x_out is the Keras tensor for the Attri2Vec model output.
Return type:tuple
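For example, a minimal sketch of pairing Attri2Vec with a link classification head for unsupervised training, where train_gen is assumed to be the sequence produced by an Attri2VecLinkGenerator flow as described earlier:

from stellargraph.layer import Attri2Vec, link_classification

attri2vec = Attri2Vec(
    layer_sizes=[128], generator=train_gen, bias=False, normalize=None
)
x_inp, x_out = attri2vec.build()
# score each (target, context) pair with an inner-product link head
prediction = link_classification(
    output_dim=1, output_act="sigmoid", edge_embedding_method="ip"
)(x_out)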

GCN model

class stellargraph.layer.gcn.GCN(layer_sizes, generator, bias=True, dropout=0.0, activations=None, kernel_initializer=None, kernel_regularizer=None, kernel_constraint=None, bias_initializer=None, bias_regularizer=None, bias_constraint=None)[source]

A stack of Graph Convolutional layers that implement a graph convolution network model as in https://arxiv.org/abs/1609.02907

The model minimally requires specification of the layer sizes as a list of ints corresponding to the feature dimensions for each hidden layer, activation functions for each hidden layers, and a generator object.

To use this class as a Keras model, the features and pre-processed adjacency matrix should be supplied using either the FullBatchNodeGenerator class for node inference or the FullBatchLinkGenerator class for link inference.

To have the appropriate pre-processing, the generator object should be instantiated with the method=’gcn’ argument.

Note that currently the GCN class is compatible with both sparse and dense adjacency matrices and the FullBatchNodeGenerator will default to sparse.

For more details, please see the GCN demo notebook: demos/node-classification/gcn/gcn-cora-node-classification-example.ipynb

Example

Creating a GCN node classification model from an existing StellarGraph object G:

generator = FullBatchNodeGenerator(G, method="gcn")
gcn = GCN(
        layer_sizes=[32, 4],
        activations=["elu","softmax"],
        generator=generator,
        dropout=0.5
    )
x_inp, predictions = gcn.build()

Notes

  • The inputs are tensors with a batch dimension of 1. These are provided by the FullBatchNodeGenerator object.
  • This assumes that the normalized Laplacian matrix is provided as input to Keras methods. When using the FullBatchNodeGenerator specify the method='gcn' argument to do this pre-processing.
  • The nodes provided to the FullBatchNodeGenerator.flow method are used by the final layer to select the predictions for those nodes in order. However, the intermediate layers before the final layer order the nodes in the same way as the adjacency matrix.
Parameters:
  • layer_sizes (list of int) – Output sizes of GCN layers in the stack.
  • generator (FullBatchNodeGenerator) – The generator instance.
  • bias (bool) – If True, a bias vector is learnt for each layer in the GCN model.
  • dropout (float) – Dropout rate applied to input features of each GCN layer.
  • activations (list of str or func) – Activations applied to each layer’s output; defaults to [‘relu’, …, ‘relu’].
  • kernel_initializer (str or func, optional) – The initialiser to use for the weights of each layer.
  • kernel_regularizer (str or func, optional) – The regulariser to use for the weights of each layer.
  • kernel_constraint (str or func, optional) – The constraint to use for the weights of each layer.
  • bias_initializer (str or func, optional) – The initialiser to use for the bias of each layer.
  • bias_regularizer (str or func, optional) – The regulariser to use for the bias of each layer.
  • bias_constraint (str or func, optional) – The constraint to use for the bias of each layer.
build(multiplicity=None)[source]

Builds a GCN model for node or link prediction

Returns:(x_inp, x_out), where x_inp is a list of Keras/TensorFlow input tensors for the GCN model and x_out is a tensor of the GCN model output.
Return type:tuple
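Continuing the example above, a minimal sketch of training the built model (train_node_ids and train_targets are hypothetical training inputs):

from tensorflow.keras import Model

train_gen = generator.flow(train_node_ids, train_targets)  # hypothetical inputs
model = Model(inputs=x_inp, outputs=predictions)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["acc"])
model.fit_generator(train_gen, epochs=20)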
class stellargraph.layer.gcn.GraphConvolution(units, activation=None, use_bias=True, final_layer=False, input_dim=None, kernel_initializer='glorot_uniform', kernel_regularizer=None, kernel_constraint=None, bias_initializer='zeros', bias_regularizer=None, bias_constraint=None, **kwargs)[source]

Graph Convolution (GCN) Keras layer. The implementation is based on the keras-gcn github repo https://github.com/tkipf/keras-gcn.

Original paper: Semi-Supervised Classification with Graph Convolutional Networks. Thomas N. Kipf, Max Welling, International Conference on Learning Representations (ICLR), 2017 https://github.com/tkipf/gcn

Notes

  • The inputs are tensors with a batch dimension of 1: Keras requires this batch dimension, and for full-batch methods we only have a single “batch”.
  • There are three inputs required, the node features, the output indices (the nodes that are to be selected in the final layer) and the normalized graph Laplacian matrix
  • This class assumes that the normalized Laplacian matrix is passed as input to the Keras methods.
  • The output indices are used when final_layer=True and the returned outputs are the final-layer features for the nodes indexed by output indices.
  • If final_layer=False all the node features are output in the same ordering as given by the adjacency matrix.
Parameters:
  • units (int) – dimensionality of output feature vectors
  • activation (str or func) – nonlinear activation applied to layer’s output to obtain output features
  • use_bias (bool) – toggles an optional bias
  • final_layer (bool) – If False the layer returns output for all nodes, if True it returns the subset specified by the indices passed to it.
  • kernel_initializer (str or func, optional) – The initialiser to use for the weights.
  • kernel_regularizer (str or func, optional) – The regulariser to use for the weights.
  • kernel_constraint (str or func, optional) – The constraint to use for the weights.
  • bias_initializer (str or func, optional) – The initialiser to use for the bias.
  • bias_regularizer (str or func, optional) – The regulariser to use for the bias.
  • bias_constraint (str or func, optional) – The constraint to use for the bias.
build(input_shapes)[source]

Builds the layer

Parameters:input_shapes (list of int) – shapes of the layer’s inputs (node features and adjacency matrix)
call(inputs)[source]

Applies the layer.

Parameters:inputs (list) – a list of 3 input tensors that includes node features (size 1 x N x F), output indices (size 1 x M), and the graph adjacency matrix (size N x N), where N is the number of nodes in the graph, and F is the dimensionality of node features.
Returns:Keras Tensor that represents the output of the layer.
compute_output_shape(input_shapes)[source]

Computes the output shape of the layer.

Parameters:input_shapes (tuple of ints) – Shape tuples can include None for free dimensions, instead of an integer.
Returns:An output shape tuple.
get_config()[source]

Gets class configuration for Keras serialization. Used by keras model serialization.

Returns:A dictionary that contains the config of the layer

Cluster-GCN model

class stellargraph.layer.cluster_gcn.ClusterGCN(layer_sizes, activations, generator, bias=True, dropout=0.0, **kwargs)[source]

A stack of Cluster Graph Convolutional layers that implement a cluster graph convolution network model as in https://arxiv.org/abs/1905.07953

The model minimally requires specification of the layer sizes as a list of ints corresponding to the feature dimensions for each hidden layer, activation functions for each hidden layers, and a generator object.

To use this class as a Keras model, the features and pre-processed adjacency matrix should be supplied using the ClusterNodeGenerator class.

For more details, please see the Cluster-GCN demo notebook: demos/node-classification/clustergcn/cluster-gcn-node-classification.ipynb

Notes

  • The inputs are tensors with a batch dimension of 1. These are provided by the ClusterNodeGenerator object.
  • The nodes provided to the ClusterNodeGenerator.flow method are used by the final layer to select the predictions for those nodes in order. However, the intermediate layers before the final layer order the nodes in the same way as the adjacency matrix.

Examples

Creating a Cluster-GCN node classification model from an existing StellarGraph object G:

generator = ClusterNodeGenerator(G, clusters=10, q=2)
cluster_gcn = ClusterGCN(
                 layer_sizes=[32, 4],
                 activations=["elu","softmax"],
                 generator=generator,
                 dropout=0.5
    )
x_inp, predictions = cluster_gcn.build()
Parameters:
  • layer_sizes (list of int) – list of output sizes of the graph convolutional layers in the stack
  • activations (list of str) – list of activations applied to each layer’s output
  • generator (ClusterNodeGenerator) – an instance of ClusterNodeGenerator class constructed on the graph of interest
  • bias (bool) – toggles an optional bias in graph convolutional layers
  • dropout (float) – dropout rate applied to input features of each graph convolutional layer
  • kernel_regularizer (str) – regularization applied to the kernels of graph convolutional layers
build()[source]

Builds a Cluster-GCN model for node prediction.

Returns:(x_inp, x_out), where x_inp is a list of two input tensors for the Cluster-GCN model (containing node features and normalized adjacency matrix), and x_out is a tensor for the Cluster-GCN model output.
Return type:tuple
class stellargraph.layer.cluster_gcn.ClusterGraphConvolution(units, activation=None, use_bias=True, final_layer=False, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)[source]

Cluster Graph Convolution (GCN) Keras layer. A stack of such layers can be used to create a Cluster-GCN model.

The implementation is based on the GCN Keras layer of keras-gcn github repo https://github.com/tkipf/keras-gcn

Original paper: Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks, W. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, and C. Hsieh, KDD, 2019, https://arxiv.org/abs/1905.07953

Notes

  • The inputs are tensors with a batch dimension of 1: Keras requires this batch dimension.
  • There are three inputs required, the node features, the output indices (the nodes that are to be selected in the final layer) and the normalized graph adjacency matrix.
  • This class assumes that the normalized graph adjacency matrix is passed as input to the Keras methods.
  • The output indices are used when final_layer=True and the returned outputs are the final-layer features for the nodes indexed by output indices.
  • If final_layer=False all the node features are output in the same ordering as given by the adjacency matrix.
Parameters:
  • units (int) – dimensionality of output feature vectors
  • activation (str) – nonlinear activation applied to layer’s output to obtain output features
  • use_bias (bool) – toggles an optional bias
  • final_layer (bool) – If False the layer returns output for all nodes, if True it returns the subset specified by the indices passed to it.
  • kernel_initializer (str) – name of the initializer for kernel parameters (weights)
  • bias_initializer (str) – name of the initializer for bias
  • kernel_regularizer (str) – name of regularizer to be applied to layer kernel. Must be a Keras regularizer.
  • bias_regularizer (str) – name of regularizer to be applied to layer bias. Must be a Keras regularizer.
  • activity_regularizer (str) – not used in the current implementation
  • kernel_constraint (str) – constraint applied to layer’s kernel
  • bias_constraint (str) – constraint applied to layer’s bias
build(input_shapes)[source]

Builds the layer

Parameters:input_shapes (list of int) – shapes of the layer’s inputs (node features and adjacency matrix)
call(inputs)[source]

Applies the layer.

Parameters:inputs (list) – a list of 3 input tensors that includes node features (size 1 x N x F), output indices (size 1 x M), and the graph adjacency matrix (size 1 x N x N), where N is the number of nodes in the graph, and F is the dimensionality of node features.
Returns:Keras Tensor that represents the output of the layer.
compute_output_shape(input_shapes)[source]

Computes the output shape of the layer.

Parameters:input_shapes (tuple of ints) – Shape tuples can include None for free dimensions, instead of an integer.
Returns:An output shape tuple.
get_config()[source]

Gets class configuration for Keras serialization. Used by keras model serialization.

Returns:A dictionary that contains the config of the layer

RGCN model

class stellargraph.layer.rgcn.RGCN(layer_sizes, generator, bias=True, num_bases=0, dropout=0.0, activations=None, **kwargs)[source]

A stack of Relational Graph Convolutional layers that implement a relational graph convolution neural network model as in https://arxiv.org/pdf/1703.06103.pdf

The model minimally requires specification of the layer sizes as a list of ints corresponding to the feature dimensions for each hidden layer, activation functions for each hidden layers, and a generator object.

To use this class as a Keras model, the features and pre-processed adjacency matrix should be supplied using the RelationalFullBatchNodeGenerator class. The generator object should be instantiated as follows:

generator = RelationalFullBatchNodeGenerator(G)

Note that currently the RGCN class is compatible with both sparse and dense adjacency matrices and the RelationalFullBatchNodeGenerator will default to sparse.

For more details, please see the RGCN demo notebook:

demos/node-classification/rgcn/rgcn-aifb-node-classification-example.ipynb

Notes

  • The inputs are tensors with a batch dimension of 1. These are provided by the RelationalFullBatchNodeGenerator object.
  • The nodes provided to the RelationalFullBatchNodeGenerator.flow method are used by the final layer to select the predictions for those nodes in order. However, the intermediate layers before the final layer order the nodes in the same way as the adjacency matrix.

Examples

Creating a RGCN node classification model from an existing StellarGraph object G:

generator = RelationalFullBatchNodeGenerator(G)
rgcn = RGCN(
        layer_sizes=[32, 4],
        activations=["elu","softmax"],
        num_bases=10,
        generator=generator,
        dropout=0.5
    )
x_inp, predictions = rgcn.build()
Parameters:
  • layer_sizes (list of int) – Output sizes of RGCN layers in the stack.
  • generator (RelationalFullBatchNodeGenerator) – The generator instance.
  • num_bases (int) – Specifies the number of basis matrices to use for the weight matrices of the RGCN layer as in the paper. Defaults to 0, which specifies that no basis decomposition is used.
  • bias (bool) – If True, a bias vector is learnt for each layer in the RGCN model.
  • dropout (float) – Dropout rate applied to input features of each RGCN layer.
  • activations (list of str or func) – Activations applied to each layer’s output; defaults to [‘relu’, …, ‘relu’].
  • kernel_regularizer (str or func) – The regulariser to use for the weights of each layer; defaults to None.
build()[source]

Builds an RGCN model for node prediction. Link/node pair prediction will be added in the future.

Returns:(x_inp, x_out), where x_inp is a list of Keras input tensors for the specified RGCN model and x_out contains model output tensor(s) of shape (batch_size, layer_sizes[-1])
Return type:tuple
class stellargraph.layer.rgcn.RelationalGraphConvolution(units, num_relationships, num_bases=0, activation=None, use_bias=True, final_layer=False, **kwargs)[source]

Relational Graph Convolution (RGCN) Keras layer.

Original paper: Modeling Relational Data with Graph Convolutional Networks. Michael Schlichtkrull, Thomas N. Kipf, et al. (2017).

Notes

  • The inputs are tensors with a batch dimension of 1: Keras requires this batch dimension, and for full-batch methods we only have a single “batch”.
  • There are 2 + R inputs required (where R is the number of relationships): the node features, the output indices (the nodes that are to be selected in the final layer) and a normalized adjacency matrix for each relationship
  • The output indices are used when final_layer=True and the returned outputs are the final-layer features for the nodes indexed by output indices.
  • If final_layer=False all the node features are output in the same ordering as given by the adjacency matrix.
Parameters:
  • units (int) – dimensionality of output feature vectors
  • num_relationships (int) – the number of relationships in the graph
  • num_bases (int) – the number of basis matrices to use for parameterizing the weight matrices as described in the paper; defaults to 0. num_bases < 0 triggers the default behaviour of num_bases = 0
  • activation (str or func) – nonlinear activation applied to layer’s output to obtain output features
  • use_bias (bool) – toggles an optional bias
  • final_layer (bool) – If False the layer returns output for all nodes, if True it returns the subset specified by the indices passed to it.
  • kernel_initializer (str or func) – The initialiser to use for the self kernel and also relational kernels if num_bases=0; defaults to ‘glorot_uniform’.
  • kernel_regularizer (str or func) – The regulariser to use for the self kernel and also relational kernels if num_bases=0; defaults to None.
  • kernel_constraint (str or func) – The constraint to use for the self kernel and also relational kernels if num_bases=0; defaults to None.
  • basis_initializer (str or func) – The initialiser to use for the basis matrices; defaults to ‘glorot_uniform’.
  • basis_regularizer (str or func) – The regulariser to use for the basis matrices; defaults to None.
  • basis_constraint (str or func) – The constraint to use for the basis matrices; defaults to None.
  • coefficient_initializer (str or func) – The initialiser to use for the coefficients; defaults to ‘glorot_uniform’.
  • coefficient_regularizer (str or func) – The regulariser to use for the coefficients; defaults to None.
  • coefficient_constraint (str or func) – The constraint to use for the coefficients; defaults to None.
  • bias_initializer (str or func) – The initialiser to use for the bias; defaults to ‘zeros’.
  • bias_regularizer (str or func) – The regulariser to use for the bias; defaults to None.
  • bias_constraint (str or func) – The constraint to use for the bias; defaults to None.
build(input_shapes)[source]

Builds the layer

Parameters:input_shapes (list of int) – shapes of the layer’s inputs (node features, node indices, and adjacency matrices)
call(inputs)[source]

Applies the layer.

Parameters:inputs (list) – a list of 2 + R input tensors that includes node features (size 1 x N x F), output indices (size 1 x M), and a graph adjacency matrix (size N x N) for each relationship. R is the number of relationships in the graph (edge type), N is the number of nodes in the graph, and F is the dimensionality of node features.
Returns:Keras Tensor that represents the output of the layer.
compute_output_shape(input_shapes)[source]

Computes the output shape of the layer.

Parameters:input_shapes (tuple of ints) – Shape tuples can include None for free dimensions, instead of an integer.
Returns:An output shape tuple.
get_config()[source]

Gets class configuration for Keras serialization. Used by keras model serialization.

Returns:A dictionary that contains the config of the layer

PPNP model

class stellargraph.layer.ppnp.PPNP(layer_sizes, generator, activations, bias=True, dropout=0.0, kernel_regularizer=None)[source]

Implementation of Personalized Propagation of Neural Predictions (PPNP) as in https://arxiv.org/abs/1810.05997.

The model minimally requires specification of the fully connected layer sizes as a list of ints corresponding to the feature dimensions for each hidden layer, activation functions for each hidden layers, and a generator object.

To use this class as a Keras model, the features and pre-processed adjacency matrix should be supplied using the FullBatchNodeGenerator class. To have the appropriate pre-processing the generator object should be instantiated as follows:

generator = FullBatchNodeGenerator(G, method="ppnp")

Notes

  • The inputs are tensors with a batch dimension of 1. These are provided by the FullBatchNodeGenerator object.
  • This assumes that the personalized page rank matrix is provided as input to Keras methods. When using the FullBatchNodeGenerator specify the method='ppnp' argument to do this pre-processing.
  • method='ppnp' requires that use_sparse=False and generates a dense personalized page rank matrix
  • The nodes provided to the FullBatchNodeGenerator.flow method are used by the final layer to select the predictions for those nodes in order. However, the intermediate layers before the final layer order the nodes in the same way as the adjacency matrix.
  • The size of the final fully connected layer must be equal to the number of classes to predict.
Parameters:
  • layer_sizes (list of int) – list of output sizes of fully connected layers in the stack
  • activations (list of str) – list of activations applied to each fully connected layer’s output
  • generator (FullBatchNodeGenerator) – an instance of FullBatchNodeGenerator class constructed on the graph of interest
  • bias (bool) – toggles an optional bias in fully connected layers
  • dropout (float) – dropout rate applied to input features of each layer
  • kernel_regularizer (str) – regularization applied to the kernels of fully connected layers
build(multiplicity=None)[source]

Builds a PPNP model for node or link prediction

Returns:(x_inp, x_out), where x_inp is a list of Keras/TensorFlow input tensors for the model and x_out is a tensor of the model output.
Return type:tuple
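Continuing from the generator above, a minimal sketch of constructing and building a PPNP model (n_classes and the layer sizes are illustrative; the final layer size must equal the number of classes, as noted above):

n_classes = 7  # illustrative number of target classes
ppnp = PPNP(
    layer_sizes=[64, 64, n_classes],
    activations=["relu", "relu", "softmax"],
    generator=generator,
    dropout=0.5,
)
x_inp, x_out = ppnp.build()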
class stellargraph.layer.ppnp.PPNPPropagationLayer(units, final_layer=False, **kwargs)[source]

Implementation of Personalized Propagation of Neural Predictions (PPNP) as in https://arxiv.org/abs/1810.05997.

Notes

  • The inputs are tensors with a batch dimension of 1: Keras requires this batch dimension, and for full-batch methods we only have a single “batch”.
  • There are three inputs required, the node features, the output indices (the nodes that are to be selected in the final layer) and the graph personalized page rank matrix
  • This class assumes that the personalized page rank matrix (as specified in the paper) is passed as input to the Keras methods.
  • The output indices are used when final_layer=True and the returned outputs are the final-layer features for the nodes indexed by output indices.
  • If final_layer=False all the node features are output in the same ordering as given by the adjacency matrix.
Parameters:
  • units (int) – dimensionality of output feature vectors
  • final_layer (bool) – If False the layer returns output for all nodes, if True it returns the subset specified by the indices passed to it.
build(input_shapes)[source]

Builds the layer

Parameters:input_shapes (list of int) – shapes of the layer’s inputs (node features and adjacency matrix)
call(inputs)[source]

Applies the layer.

Parameters:inputs (list) – a list of 3 input tensors that includes node features (size 1 x N x F), output indices (size 1 x M), and the graph personalized page rank matrix (size N x N), where N is the number of nodes in the graph, and F is the dimensionality of node features.
Returns:Keras Tensor that represents the output of the layer.
compute_output_shape(input_shapes)[source]

Computes the output shape of the layer.

Parameters:input_shapes (tuple of ints) – Shape tuples can include None for free dimensions, instead of an integer.
Returns:An output shape tuple.
get_config()[source]

Gets class configuration for Keras serialization. Used by keras model serialization.

Returns:A dictionary that contains the config of the layer

APPNP model

class stellargraph.layer.appnp.APPNP(layer_sizes, generator, activations, bias=True, dropout=0.0, teleport_probability=0.1, kernel_regularizer=None, approx_iter=10)[source]

Implementation of Approximate Personalized Propagation of Neural Predictions (APPNP) as in https://arxiv.org/abs/1810.05997.

The model minimally requires specification of the fully connected layer sizes as a list of ints corresponding to the feature dimensions for each hidden layer, activation functions for each hidden layers, and a generator object.

To use this class as a Keras model, the features and pre-processed adjacency matrix should be supplied using either the FullBatchNodeGenerator class for node inference or the FullBatchLinkGenerator class for link inference.

To have the appropriate pre-processing, the generator object should be instantiated with the method=’gcn’ argument.

Example

Building an APPNP node model:

generator = FullBatchNodeGenerator(G, method="gcn")
ppnp = APPNP(
    layer_sizes=[64, 64, 1],
    activations=['relu', 'relu', 'relu'],
    generator=generator,
    dropout=0.5
)
x_in, x_out = ppnp.build()

Notes

  • The inputs are tensors with a batch dimension of 1. These are provided by the FullBatchNodeGenerator object.
  • This assumes that the normalized Laplacian matrix is provided as input to Keras methods. When using the FullBatchNodeGenerator specify the method='gcn' argument to do this pre-processing.
  • The nodes provided to the FullBatchNodeGenerator.flow method are used by the final layer to select the predictions for those nodes in order. However, the intermediate layers before the final layer order the nodes in the same way as the adjacency matrix.
  • The size of the final fully connected layer must be equal to the number of classes to predict.
Parameters:
  • layer_sizes (list of int) – list of output sizes of fully connected layers in the stack
  • activations (list of str) – list of activations applied to each fully connected layer’s output
  • generator (FullBatchNodeGenerator) – an instance of FullBatchNodeGenerator class constructed on the graph of interest
  • bias (bool) – toggles an optional bias in fully connected layers
  • dropout (float) – dropout rate applied to input features of each layer
  • kernel_regularizer (str) – regularization applied to the kernels of fully connected layers
  • teleport_probability (float) – “probability” of returning to the starting node in the propagation step, as described in the paper
  • approx_iter (int) – number of iterations used to approximate PPNP, as described in the paper (K in the paper)
build(multiplicity=None)[source]

Builds an APPNP model for node or link prediction

Returns:(x_inp, x_out), where x_inp is a list of Keras/TensorFlow input tensors for the model and x_out is a tensor of the model output.
Return type:tuple
propagate_model(base_model)[source]

Propagates a trained model using personalised PageRank.

Parameters:base_model (keras Model) – trained model with node features as input, predicted classes as output

Returns:(x_inp, x_out), where x_inp is a list of two Keras input tensors for the APPNP model (containing node features and graph adjacency), and x_out is a Keras tensor for the APPNP model output.
Return type:tuple
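For example, a minimal sketch of propagating an externally trained model (base_model is a hypothetical trained Keras model mapping node features to class predictions; ppnp is the APPNP instance from the example above):

from tensorflow.keras import Model

# base_model: hypothetical trained Keras model (node features in, predictions out)
x_inp, x_out = ppnp.propagate_model(base_model)
model = Model(inputs=x_inp, outputs=x_out)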
class stellargraph.layer.appnp.APPNPPropagationLayer(units, teleport_probability=0.1, final_layer=False, **kwargs)[source]

Implementation of Approximate Personalized Propagation of Neural Predictions (APPNP) as in https://arxiv.org/abs/1810.05997.

Notes

  • The inputs are tensors with a batch dimension of 1: Keras requires this batch dimension, and for full-batch methods we only have a single “batch”.
  • There are three inputs required, the node features, the output indices (the nodes that are to be selected in the final layer) and the normalized graph Laplacian matrix
  • This class assumes that the normalized Laplacian matrix is passed as input to the Keras methods.
  • The output indices are used when final_layer=True and the returned outputs are the final-layer features for the nodes indexed by output indices.
  • If final_layer=False all the node features are output in the same ordering as given by the adjacency matrix.
Parameters:
  • units (int) – dimensionality of output feature vectors
  • final_layer (bool) – If False the layer returns output for all nodes, if True it returns the subset specified by the indices passed to it.
  • teleport_probability (float) – “probability” of returning to the starting node in the propagation step, as described in the paper
build(input_shapes)[source]

Builds the layer

Parameters:input_shapes (list of int) – shapes of the layer’s inputs (node features and adjacency matrix)
call(inputs)[source]

Applies the layer.

Parameters:inputs (list) – a list of input tensors that includes propagated node features (size 1 x N x F), node features (size 1 x N x F), output indices (size 1 x M), and the graph adjacency matrix (size N x N), where N is the number of nodes in the graph, and F is the dimensionality of node features.
Returns:Keras Tensor that represents the output of the layer.
compute_output_shape(input_shapes)[source]

Computes the output shape of the layer.

Parameters:input_shapes (tuple of ints) – Shape tuples can include None for free dimensions, instead of an integer.
Returns:An output shape tuple.
get_config()[source]

Gets class configuration for Keras serialization. Used by keras model serialization.

Returns:A dictionary that contains the config of the layer

GAT model

Definition of Graph Attention Network (GAT) layer, and GAT class that is a stack of GAT layers

class stellargraph.layer.graph_attention.GAT(layer_sizes, generator=None, attn_heads=1, attn_heads_reduction=None, bias=True, in_dropout=0.0, attn_dropout=0.0, normalize=None, activations=None, saliency_map_support=False, **kwargs)[source]

A stack of Graph Attention (GAT) layers with aggregation of multiple attention heads, Eqs 5-6 of the GAT paper https://arxiv.org/abs/1710.10903

To use this class as a Keras model, the features and pre-processed adjacency matrix should be supplied using either the FullBatchNodeGenerator class for node inference or the FullBatchLinkGenerator class for link inference.

To have the appropriate pre-processing, the generator object should be instantiated with the method=’gat’ argument.

Examples

Creating a GAT node classification model from an existing StellarGraph object G:

generator = FullBatchNodeGenerator(G, method="gat")
gat = GAT(
        layer_sizes=[8, 4],
        activations=["elu","softmax"],
        attn_heads=8,
        generator=generator,
        in_dropout=0.5,
        attn_dropout=0.5,
    )
x_inp, predictions = gat.build()

For more details, please see the GAT demo notebook: demos/node-classification/gat/gat-cora-node-classification-example.ipynb

Notes

  • The inputs are tensors with a batch dimension of 1. These are provided by the FullBatchNodeGenerator object.
  • This does not add self loops to the adjacency matrix; you should preprocess the adjacency matrix to add self-loops, using the method='gat' argument of the FullBatchNodeGenerator.
  • The nodes provided to the FullBatchNodeGenerator.flow method are used by the final layer to select the predictions for those nodes in order. However, the intermediate layers before the final layer order the nodes in the same way as the adjacency matrix.
Parameters:
  • layer_sizes (list of int) – list of output sizes of GAT layers in the stack. The length of this list defines the number of GraphAttention layers in the stack.
  • generator (FullBatchNodeGenerator) – an instance of FullBatchNodeGenerator class constructed on the graph of interest
  • attn_heads (int or list of int) –

    number of attention heads in GraphAttention layers. The options are:

    • a single integer: the passed value of attn_heads will be applied to all GraphAttention layers in the stack, except the last layer (for which the number of attn_heads will be set to 1).
    • a list of integers: elements of the list define the number of attention heads in the corresponding layers in the stack.
  • attn_heads_reduction (list of str or None) – reductions applied to output features of each attention head, for all layers in the stack. Valid entries in the list are {‘concat’, ‘average’}. If None is passed, the default reductions are applied: ‘concat’ reduction to all layers in the stack except the final layer, ‘average’ reduction to the last layer (Eqs. 5-6 of the GAT paper).
  • bias (bool) – toggles an optional bias in GAT layers
  • in_dropout (float) – dropout rate applied to input features of each GAT layer
  • attn_dropout (float) – dropout rate applied to attention maps
  • normalize (str or None) – normalization applied to the final output features of the GAT layers stack. Default is None.
  • activations (list of str) – list of activations applied to each layer’s output; defaults to [‘elu’, …, ‘elu’].
  • saliency_map_support (bool) – If calculating saliency maps using the tools in stellargraph.interpretability.saliency_maps this should be True. Otherwise this should be False (default).
  • kernel_regularizer (str or func) – The regulariser to use for the head weights; defaults to None.
  • attn_kernel_regularizer (str or func) – The regulariser to use for the attention weights; defaults to None.
build(multiplicity=None)[source]

Builds a GAT model for node or link prediction

Returns:(x_inp, x_out), where x_inp is a list of Keras/TensorFlow input tensors for the model and x_out is a tensor of the model output.
Return type:tuple
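As an illustration of the per-layer attn_heads option described in the Parameters above, a minimal sketch using a list of head counts, with multiple heads in the first layer and a single head in the final prediction layer (n_classes is illustrative):

n_classes = 4  # illustrative number of target classes
gat = GAT(
    layer_sizes=[8, n_classes],
    attn_heads=[8, 1],
    activations=["elu", "softmax"],
    generator=generator,
)
x_inp, predictions = gat.build()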
class stellargraph.layer.graph_attention.GraphAttention(units, attn_heads=1, attn_heads_reduction='concat', in_dropout_rate=0.0, attn_dropout_rate=0.0, activation='relu', use_bias=True, final_layer=False, saliency_map_support=False, **kwargs)[source]

Graph Attention (GAT) layer. The base implementation is taken from https://github.com/danielegrattarola/keras-gat, with some modifications added for ease of use.

Based on the original paper: Graph Attention Networks. P. Velickovic et al. ICLR 2018 https://arxiv.org/abs/1710.10903

Notes

  • The inputs are tensors with a batch dimension of 1: Keras requires this batch dimension, and for full-batch methods we only have a single “batch”.
  • There are three inputs required, the node features, the output indices (the nodes that are to be selected in the final layer) and the graph adjacency matrix
  • This does not add self loops to the adjacency matrix, you should preprocess the adjacency matrix to add self-loops
  • The output indices are used when final_layer=True and the returned outputs are the final-layer features for the nodes indexed by output indices.
  • If final_layer=False all the node features are output in the same ordering as given by the adjacency matrix.
Parameters:
  • F_out (int) – dimensionality of output feature vectors
  • attn_heads (int or list of int) – number of attention heads
  • attn_heads_reduction (str) – reduction applied to output features of each attention head, ‘concat’ or ‘average’. ‘Average’ should be applied in the final prediction layer of the model (Eq. 6 of the paper).
  • in_dropout_rate (float) – dropout rate applied to features
  • attn_dropout_rate (float) – dropout rate applied to attention coefficients
  • activation (str) – nonlinear activation applied to layer’s output to obtain output features (eq. 4 of the GAT paper)
  • final_layer (bool) – If False the layer returns output for all nodes, if True it returns the subset specified by the indices passed to it.
  • use_bias (bool) – toggles an optional bias
  • saliency_map_support (bool) – If calculating saliency maps using the tools in stellargraph.interpretability.saliency_maps this should be True. Otherwise this should be False (default).
  • kernel_initializer (str or func) – The initialiser to use for the head weights; defaults to ‘glorot_uniform’.
  • kernel_regularizer (str or func) – The regulariser to use for the head weights; defaults to None.
  • kernel_constraint (str or func) – The constraint to use for the head weights; defaults to None.
  • bias_initializer (str or func) – The initialiser to use for the head bias; defaults to ‘zeros’.
  • bias_regularizer (str or func) – The regulariser to use for the head bias; defaults to None.
  • bias_constraint (str or func) – The constraint to use for the head bias; defaults to None.
  • attn_kernel_initializer (str or func) – The initialiser to use for the attention weights; defaults to ‘glorot_uniform’.
  • attn_kernel_regularizer (str or func) – The regulariser to use for the attention weights; defaults to None.
  • attn_kernel_constraint (str or func) – The constraint to use for the attention weights; defaults to None.
build(input_shapes)[source]

Builds the layer

Parameters:input_shapes (list of int) – shapes of the layer’s inputs (node features and adjacency matrix)
call(inputs)[source]

Creates the layer as a Keras graph.

Note that the inputs are tensors with a batch dimension of 1: Keras requires this batch dimension, and for full-batch methods we only have a single “batch”.

There are three inputs required, the node features, the output indices (the nodes that are to be selected in the final layer) and the graph adjacency matrix

Notes

This does not add self loops to the adjacency matrix. The output indices are only used when final_layer=True

Parameters:
  • inputs (list) – list of inputs with 3 items:
  • features (node) –
  • indices (output) –
  • adjacency matrix (graph) –
  • N is the number of nodes in the graph, (where) – F is the dimensionality of node features M is the number of output nodes
compute_output_shape(input_shapes)[source]

Computes the output shape of the layer. Assumes the following inputs:

Parameters:input_shapes (tuple of ints) – Shape tuples can include None for free dimensions, instead of an integer.
Returns:An input shape tuple.
get_config()[source]

Gets class configuration for Keras serialization

class stellargraph.layer.graph_attention.GraphAttentionSparse(units, attn_heads=1, attn_heads_reduction='concat', in_dropout_rate=0.0, attn_dropout_rate=0.0, activation='relu', use_bias=True, final_layer=False, saliency_map_support=False, **kwargs)[source]

Graph Attention (GAT) layer, base implementation taken from https://github.com/danielegrattarola/keras-gat, some modifications added for ease of use.

Based on the original paper: Graph Attention Networks. P. Velickovic et al. ICLR 2018 https://arxiv.org/abs/1710.10903

Notes

  • The inputs are tensors with a batch dimension of 1: Keras requires this batch dimension, and for full-batch methods we only have a single “batch”.
  • There are three inputs required, the node features, the output indices (the nodes that are to be selected in the final layer), and the graph adjacency matrix
  • This does not add self loops to the adjacency matrix, you should preprocess the adjacency matrix to add self-loops
  • The output indices are used when final_layer=True and the returned outputs are the final-layer features for the nodes indexed by output indices.
  • If final_layer=False all the node features are output in the same ordering as given by the adjacency matrix.
Parameters:
  • F_out (int) – dimensionality of output feature vectors
  • attn_heads (int or list of int) – number of attention heads
  • attn_heads_reduction (str) – reduction applied to output features of each attention head, ‘concat’ or ‘average’. ‘Average’ should be applied in the final prediction layer of the model (Eq. 6 of the paper).
  • in_dropout_rate (float) – dropout rate applied to features
  • attn_dropout_rate (float) – dropout rate applied to attention coefficients
  • activation (str) – nonlinear activation applied to layer’s output to obtain output features (eq. 4 of the GAT paper)
  • final_layer (bool) – If False the layer returns output for all nodes, if True it returns the subset specified by the indices passed to it.
  • use_bias (bool) – toggles an optional bias
  • saliency_map_support (bool) – If calculating saliency maps using the tools in stellargraph.interpretability.saliency_maps this should be True. Otherwise this should be False (default).
  • kernel_initializer (str or func) – The initialiser to use for the head weights; defaults to ‘glorot_uniform’.
  • kernel_regularizer (str or func) – The regulariser to use for the head weights; defaults to None.
  • kernel_constraint (str or func) – The constraint to use for the head weights; defaults to None.
  • bias_initializer (str or func) – The initialiser to use for the head bias; defaults to ‘zeros’.
  • bias_regularizer (str or func) – The regulariser to use for the head bias; defaults to None.
  • bias_constraint (str or func) – The constraint to use for the head bias; defaults to None.
  • attn_kernel_initializer (str or func) – The initialiser to use for the attention weights; defaults to ‘glorot_uniform’.
  • attn_kernel_regularizer (str or func) – The regulariser to use for the attention weights; defaults to None.
  • attn_kernel_constraint (str or func) – The constraint to use for the attention weights; defaults to None.
call(inputs, **kwargs)[source]

Creates the layer as a Keras graph

Notes

This does not add self loops to the adjacency matrix. The output indices are only used when final_layer=True

Parameters:
  • inputs (list) – list of inputs with 4 items:
  • features (node) –
  • indices (output) –
  • graph adjacency matrix (sparse) –
  • N is the number of nodes in the graph, (where) – F is the dimensionality of node features M is the number of output nodes

Watch Your Step model

class stellargraph.layer.watch_your_step.WatchYourStep(generator, num_walks=80, embedding_dimension=64, attention_initializer='glorot_uniform', attention_regularizer=None, attention_constraint=None)[source]

Warning

WatchYourStep is experimental: lack of unit tests (see: #804). It may be difficult to use and may have major changes at any time.

Implementation of the node embeddings as in Watch Your Step: Learning Node Embeddings via Graph Attention https://arxiv.org/pdf/1710.09599.pdf.

This model requires specification of the number of random walks starting from each node, and the embedding dimension to use for the node embeddings.

Parameters:
  • generator (AdjacencyPowerGenerator) – the generator
  • num_walks (int) – the number of random walks starting at each node to use when calculating the expected random walks. Defaults to 80 as this value was found to perform well by the authors of the paper.
  • dimension (embedding) – the dimension to use for the node embeddings (must be an even number).
  • attention_initializer (str or func, optional) – The initialiser to use for the attention weights.
  • attention_regularizer (str or func, optional) – The regulariser to use for the attention weights.
  • attention_constraint (str or func, optional) – The constraint to use for the attention weights.
build()[source]

This function builds the layers for a keras model.

Returns:A tuple of (inputs, outputs) to use with a keras model.

Knowledge Graph models

class stellargraph.mapper.knowledge_graph.KGTripleGenerator(G, batch_size)[source]

A data generator for working with triple-based knowledge graph models, like ComplEx.

This requires a StellarGraph that contains all nodes/entities and every edge/relation type that will be trained or predicted upon. The graph does not need to contain the edges/triples that are used for training or prediction.

Parameters:
  • G (StellarGraph) – the graph containing all nodes, and all edge types.
  • batch_size (int) – the size of the batches to generate
flow(edges, negative_samples=None, shuffle=False, seed=None)[source]

Create a Keras Sequence yielding the edges/triples in edges, potentially with some negative edges.

The negative edges are sampled using the “local closed world assumption”, where a source/subject or a target/object is randomly mutated.

Parameters:
  • edges – the edges/triples to feed into a knowledge graph model.
  • negative_samples (int, optional) – the number of negative samples to generate for each positive edge.
Returns:

A Keras sequence that can be passed to the fit and predict method of knowledge-graph models.

class stellargraph.layer.knowledge_graph.ComplEx(generator, k, embeddings_initializer='normal', embeddings_regularizer=None)[source]

Warning

ComplEx is experimental: results from the reference paper have not been reproduced yet (see: #862). It may be difficult to use and may have major changes at any time.

Embedding layers and a ComplEx scoring layers that implement the ComplEx knowledge graph embedding algorithm as in http://jmlr.org/proceedings/papers/v48/trouillon16.pdf

Parameters:
  • generator (KGTripleGenerator) – A generator of triples to feed into the model.
  • k (int) – the dimension of the embedding (that is, a vector in C^k is learnt for each node and each link type)
  • embeddings_initializer (str or func, optional) – The initialiser to use for the embeddings (the default of random normal values matches the paper’s reference implementation).
  • embeddings_regularizer (str or func, optional) – The regularizer to use for the embeddings.
build()[source]

Builds a ComplEx model.

Returns:A tuple of (list of input tensors, tensor for ComplEx model score outputs)
static embeddings(model)[source]

Retrieve the embeddings for nodes/entities and edge types/relations in the given model.

Parameters:model (tensorflow.keras.Model) – a Keras model created using a ComplEx instance.
Returns:the first element is the embeddings for nodes/entities (shape = number of nodes × k), the second element is the embeddings for edge types/relations (shape = number of edge types x k).
Return type:A tuple of numpy complex arrays
class stellargraph.layer.knowledge_graph.ComplExScore(*args, **kwargs)[source]

ComplEx scoring Keras layer.

Original Paper: Complex Embeddings for Simple Link Prediction, Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier and Guillaume Bouchard, ICML 2016. http://jmlr.org/proceedings/papers/v48/trouillon16.pdf

This combines subject, relation and object embeddings into a score of the likelihood of the link.

build(input_shape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters:input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
call(inputs)[source]

Applies the layer.

Parameters:inputs – a list of 6 tensors (each batch size x embedding dimension k), where the three consecutive pairs represent real and imaginary parts of the subject, relation and object embeddings, respectively, that is, inputs == [Re(subject), Im(subject), Re(relation), ...]

Ensembles

Ensembles of graph neural network models, GraphSAGE, GCN, GAT, and HinSAGE, with optional bootstrap sampling of the training data (implemented in the BaggingEnsemble class).

class stellargraph.ensemble.Ensemble(model, n_estimators=3, n_predictions=3)[source]

The Ensemble class can be used to create ensembles of stellargraph’s graph neural network algorithms including GCN, GraphSAGE, GAT, and HinSAGE. Ensembles can be used for training classification and regression problems for node attribute inference and link prediction.

The Ensemble class can be used to create Naive ensembles.

Naive ensembles add model diversity by random initialisation of the models’ weights (before training) to different values. Each model in the ensemble is trained on the same training set of examples.

compile(optimizer, loss=None, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None)[source]

Method for configuring the model for training. It is a wrapper of the keras.models.Model.compile method for all models in the ensemble.

For detailed descriptions of Keras-specific parameters consult the Keras documentation at https://keras.io/models/sequential/

Parameters:
  • optimizer (Keras optimizer or str) – (Keras-specific parameter) The optimizer to use given either as an instance of a keras optimizer or a string naming the optimiser of choice.
  • loss (Keras function or str) – (Keras-specific parameter) The loss function or string indicating the type of loss to use.
  • metrics (list or dict) – (Keras-specific parameter) List of metrics to be evaluated by each model in the ensemble during training and testing. It should be a list for a model with a single output. To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary.
  • loss_weights (None or list) – (Keras-specific parameter) Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a tensor, it is expected to map output names (strings) to scalar coefficients.
  • sample_weight_mode (None, str, list, or dict) – (Keras-specific parameter) If you need to do timestep-wise sample weighting (2D weights), set this to “temporal”. None defaults to sample-wise weights (1D). If the model has multiple outputs, you can use a different sample_weight_mode on each output by passing a dictionary or a list of modes.
  • weighted_metrics (list) – (Keras-specific parameter) List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.
evaluate_generator(generator, test_data=None, test_targets=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)[source]

Evaluates the ensemble on a data (node or link) generator. It makes n_predictions for each data point for each of the n_estimators and returns the mean and standard deviation of the predictions.

For detailed descriptions of Keras-specific parameters consult the Keras documentation at https://keras.io/models/sequential/

Parameters:
  • generator – The generator object that, if test_data is not None, should be one of type GraphSAGENodeGenerator, HinSAGENodeGenerator, FullBatchNodeGenerator, GraphSAGELinkGenerator, or HinSAGELinkGenerator. However, if test_data is None, then generator should be one of type NodeSequence, LinkSequence, or FullBatchSequence.
  • test_data (None or iterable) – If not None, then it is an iterable, e.g. list, that specifies the node IDs to evaluate the model on.
  • test_targets (None or iterable) – If not None, then it is an iterable, e.g. list, that specifies the target values for the test_data.
  • max_queue_size (int) – (Keras-specific parameter) The maximum size for the generator queue.
  • workers (int) – (Keras-specific parameter) The maximum number of workers to use.
  • use_multiprocessing (bool) – (Keras-specific parameter) If True then use process based threading.
  • verbose (int) – (Keras-specific parameter) The verbocity mode that should be 0 or 1 with the former turning verbocity off and the latter on.
Returns:

The mean and standard deviation of the model metrics for the given data.

Return type:

tuple

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, validation_data=None, validation_steps=None, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0, use_early_stopping=False, early_stopping_monitor='val_loss')[source]

This method trains the ensemble on the data specified by the generator. If validation data are given, then the training metrics are evaluated on these data and results printed on screen if verbose level is greater than 0.

The method trains each model in the ensemble in series for the number of epochs specified. Training can also stop early with the best model as evaluated on the validation data, if use_early_stopping is set to True.

For detail descriptions of Keras-specific parameters consult the Keras documentation at https://keras.io/models/sequential/

Parameters:
  • generator – The generator object for training data. It should be one of type NodeSequence, LinkSequence, SparseFullBatchSequence, or FullBatchSequence.
  • steps_per_epoch (None or int) – (Keras-specific parameter) If not None, it specifies the number of steps to yield from the generator before declaring one epoch finished and starting a new epoch.
  • epochs (int) – (Keras-specific parameter) The number of training epochs.
  • verbose (int) – (Keras-specific parameter) The verbocity mode that should be 0 , 1, or 2 meaning silent, progress bar, and one line per epoch respectively.
  • validation_data – A generator for validation data that is optional (None). If not None then, it should be one of type NodeSequence, LinkSequence, SparseFullBatchSequence, or FullBatchSequence.
  • validation_steps (None or int) – (Keras-specific parameter) If validation_generator is not None, then it specifies the number of steps to yield from the generator before stopping at the end of every epoch.
  • class_weight (None or dict) – (Keras-specific parameter) If not None, it should be a dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.
  • max_queue_size (int) – (Keras-specific parameter) The maximum size for the generator queue.
  • workers (int) – (Keras-specific parameter) The maximum number of workers to use.
  • use_multiprocessing (bool) – (Keras-specific parameter) If True then use process based threading.
  • shuffle (bool) – (Keras-specific parameter) If True, then it shuffles the order of batches at the beginning of each training epoch.
  • initial_epoch (int) – (Keras-specific parameter) Epoch at which to start training (useful for resuming a previous training run).
  • use_early_stopping (bool) – If set to True, then early stopping is used when training each model in the ensemble. The default is False.
  • early_stopping_monitor (str) – The quantity to monitor for early stopping, e.g., ‘val_loss’, ‘val_weighted_acc’. It should be a valid Keras metric.
Returns:

It returns a list of Keras History objects each corresponding to one trained model in the ensemble.

Return type:

list

layers(indx=None)[source]

This method returns the layer objects for the model specified by the value of indx.

Parameters:indx (None or int) – The index (starting at 0) of the model to return the layers for. If it is None, then the layers for the 0-th (or first) model are returned.
Returns:The layers for the specified model.
Return type:list
predict_generator(generator, predict_data=None, summarise=False, output_layer=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)[source]

This method generates predictions for the data produced by the given generator or alternatively the data given in parameter predict_data.

For detailed descriptions of Keras-specific parameters consult the Keras documentation at https://keras.io/models/sequential/

Parameters:
  • generator – The generator object that, if predict_data is None, should be one of type GraphSAGENodeGenerator, HinSAGENodeGenerator, FullBatchNodeGenerator, GraphSAGELinkGenerator, or HinSAGELinkGenerator. However, if predict_data is not None, then generator should be one of type NodeSequence, LinkSequence, SparseFullBatchSequence, or FullBatchSequence.
  • predict_data (None or iterable) – If not None, then it is an iterable, e.g. list, that specifies the node IDs to make predictions for. If generator is of type FullBatchNodeGenerator then predict_data should be all the nodes in the graph since full batch approaches such as GCN and GAT can only be used to make predictions for all graph nodes.
  • summarise (bool) – If True, then the mean of the predictions over self.n_estimators and self.n_predictions are returned for each query point. If False, then all predictions are returned.
  • output_layer (None or int) – If not None, then the predictions are the outputs of the layer specified. The default is the model’s output layer.
  • max_queue_size (int) – (Keras-specific parameter) The maximum size for the generator queue.
  • workers (int) – (Keras-specific parameter) The maximum number of workers to use.
  • use_multiprocessing (bool) – (Keras-specific parameter) If True then use process based threading.
  • verbose (int) – (Keras-specific parameter) The verbocity mode that should be 0 or 1 with the former turning verbocity off and the latter on.
Returns:

The predictions. It will have shape MxKxNxF if summarise is set to False, or NxF otherwise. M is the number of estimators in the ensemble; K is the number of predictions per query point; N is the number of query points; and F is the output dimensionality of the specified layer determined by the shape of the output layer.

Return type:

numpy array

class stellargraph.ensemble.BaggingEnsemble(model, n_estimators=3, n_predictions=3)[source]

The BaggingEnsemble class can be used to create ensembles of stellargraph’s graph neural network algorithms including GCN, GraphSAGE, GAT, and HinSAGE. Ensembles can be used for training classification and regression problems for node attribute inference and link prediction.

This class can be used to create Bagging ensembles.

Bagging ensembles add model diversity in two ways: (1) by random initialisation of the models’ weights (before training) to different values; and (2) by bootstrap sampling of the training data for each model. That is, each model in the ensemble is trained on a random subset of the training examples, sampled with replacement from the original training data.

fit_generator(generator, train_data, train_targets, steps_per_epoch=None, epochs=1, verbose=1, validation_data=None, validation_steps=None, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0, bag_size=None, use_early_stopping=False, early_stopping_monitor='val_loss')[source]

This method trains the ensemble on the data given in train_data and train_targets. If validation data are also given, then the training metrics are evaluated on these data and results printed on screen if verbose level is greater than 0.

The method trains each model in the ensemble in series for the number of epochs specified. Training can also stop early with the best model as evaluated on the validation data, if use_early_stopping is enabled.

Each model in the ensemble is trained using a bootstrapped sample of the data (the train data are re-sampled with replacement.) The number of bootstrap samples can be specified via the bag_size parameter; by default, the number of bootstrap samples equals the number of training points.

For detail descriptions of Keras-specific parameters consult the Keras documentation at https://keras.io/models/sequential/

Parameters:
  • generator – The generator object for training data. It should be one of type GraphSAGENodeGenerator, HinSAGENodeGenerator, FullBatchNodeGenerator, GraphSAGELinkGenerator, or HinSAGELinkGenerator.
  • train_data (iterable) – It is an iterable, e.g. list, that specifies the data to train the model with.
  • train_targets (iterable) – It is an iterable, e.g. list, that specifies the target values for the train data.
  • steps_per_epoch (None or int) – (Keras-specific parameter) If not None, it specifies the number of steps to yield from the generator before declaring one epoch finished and starting a new epoch.
  • epochs (int) – (Keras-specific parameter) The number of training epochs.
  • verbose (int) – (Keras-specific parameter) The verbocity mode that should be 0 , 1, or 2 meaning silent, progress bar, and one line per epoch respectively.
  • validation_data – A generator for validation data that is optional (None). If not None then, it should be one of type GraphSAGENodeGenerator, HinSAGENodeGenerator, FullBatchNodeGenerator, GraphSAGELinkGenerator, or HinSAGELinkGenerator.
  • validation_steps (None or int) – (Keras-specific parameter) If validation_generator is not None, then it specifies the number of steps to yield from the generator before stopping at the end of every epoch.
  • class_weight (None or dict) – (Keras-specific parameter) If not None, it should be a dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.
  • max_queue_size (int) – (Keras-specific parameter) The maximum size for the generator queue.
  • workers (int) – (Keras-specific parameter) The maximum number of workers to use.
  • use_multiprocessing (bool) – (Keras-specific parameter) If True then use process based threading.
  • shuffle (bool) – (Keras-specific parameter) If True, then it shuffles the order of batches at the beginning of each training epoch.
  • initial_epoch (int) – (Keras-specific parameter) Epoch at which to start training (useful for resuming a previous training run).
  • bag_size (None or int) – The number of samples in a bootstrap sample. If None and bagging is used, then the number of samples is equal to the number of training points.
  • use_early_stopping (bool) – If set to True, then early stopping is used when training each model in the ensemble. The default is False.
  • early_stopping_monitor (str) – The quantity to monitor for early stopping, e.g., ‘val_loss’, ‘val_weighted_acc’. It should be a valid Keras metric.
Returns:

It returns a list of Keras History objects each corresponding to one trained model in the ensemble.

Return type:

list

Calibration

Calibration for classification, binary and multi-class, models.

stellargraph.calibration.expected_calibration_error(prediction_probabilities, accuracy, confidence)[source]

Helper function for calculating the expected calibration error as defined in the paper On Calibration of Modern Neural Networks, C. Guo, et. al., ICML, 2017

It is assumed that for a validation dataset, the prediction probabilities have been calculated for each point in the dataset and given in the array prediction_probabilities.

Parameters:
  • prediction_probabilities (numpy array) – The predicted probabilities.
  • accuracy (numpy array) – The accuracy such that the i-th entry in the array holds the proportion of correctly classified samples that fall in the i-th bin.
  • confidence (numpy array) – The confidence such that the i-th entry in the array is the average prediction probability over all the samples assigned to this bin.
Returns:

The expected calibration error.

Return type:

float

stellargraph.calibration.plot_reliability_diagram(calibration_data, predictions, ece=None, filename=None)[source]

Helper function for plotting a reliability diagram.

Parameters:
  • calibration_data (list) – The calibration data as a list where each entry in the list is a 2-tuple of type numpy.ndarray. Each entry in the tuple holds the fraction of positives and the mean predicted values for the true and predicted class labels.
  • predictions (np.ndarray) – The probabilistic predictions of the classifier for each sample in the dataset used for diagnosing miscalibration.
  • ece (None or list of float) – If not None, this list stores the expected calibration error for each class.
  • filename (str or None) – If not None, the figure is saved on disk in the given filename.
class stellargraph.calibration.TemperatureCalibration(epochs=1000)[source]

A class for temperature calibration for binary and multi-class classification problems.

For binary classification, Platt Scaling is used for calibration. Platt Scaling was proposed in the paper Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods, J. C. Platt, Advances in large margin classifiers, 10(3): 61-74, 1999.

For multi-class classification, Temperature Calibration is used. It is an extension of Platt Scaling and it was proposed in the paper On Calibration of Modern Neural Networks, C. Guo et. al., ICML, 2017.

In Temperature Calibration, a classifier’s non-probabilistic outputs, i.e., logits, are scaled by a trainable parameter called Temperature. The softmax is applied to the rescaled logits to calculate the probabilistic output. As noted in the cited paper, Temperature Scaling does not change the maximum of the softmax function so the classifier’s prediction remain the same.

fit(x_train, y_train, x_val=None, y_val=None)[source]

Train the calibration model.

For temperature scaling of a multi-class classifier, If validation data is given, then training stops when the validation accuracy starts increasing. Validation data are ignored for Platt scaling

Parameters:
  • x_train (numpy array) – The training data that should be a classifier’s non-probabilistic outputs. For calibrating a binary classifier it should have shape (N,) where N is the number of training samples. For calibrating a multi-class classifier, it should have shape (N, C) where N is the number of samples and C is the number of classes.
  • y_train (numpy array) – The training data class labels. For calibrating a binary classifier it should have shape (N,) where N is the number of training samples. For calibrating a multi-class classifier, it should have shape (N, C) where N is the number of samples and C is the number of classes and the class labels are one-hot encoded.
  • x_val (numpy array or None) – The validation data used only for calibrating multi-class classification models. It should have shape (M, C) where M is the number of validation samples and C is the number of classes and the class labels are one-hot encoded. that should be the classifier’s non-probabilistic outputs.
  • y_val (numpy array or None) – The validation data class labels used only for calibrating multi-class classification models. It should have shape (M, C) where M is the number of validation samples and C is the number of classes and the class labels are one-hot encoded.
plot_training_history()[source]

Helper function for plotting the training history.

predict(x)[source]

This method calibrates the given data using the learned temperature. It scales each logit by the temperature, exponentiates the results, and finally normalizes the scaled values such that their sum is 1.

Parameters:x (numpy.ndarray) – The logits. For binary classification problems, it should have dimensionality (N,) where N is the number of samples to calibrate. For multi-class problems, it should have dimensionality (N, C) where C is the number of classes.
Returns:The calibrated probabilities.
Return type:numpy array
class stellargraph.calibration.IsotonicCalibration[source]

A class for applying Isotonic Calibration to the outputs of a binary or multi-class classifier.

fit(x_train, y_train)[source]

Train a calibration model using the provided data.

Parameters:
  • x_train (numpy array) – The training data that should be the classifier’s probabilistic outputs. It should have shape NxC where N is the number of training samples and C is the number of classes.
  • y_train (numpy array) – The training class labels. For binary problems y_train has shape (N,) when N is the number of samples. For multi-class classification, y_train has shape (N,C) where C is the number of classes and y_train is using one-hot encoding.
predict(x)[source]

This method calibrates the given data assumed the output of a classification model.

For multi-class classification, the probabilities for each class are first scaled using the corresponding isotonic regression model and then normalized to sum to 1.

Parameters:x (numpy array) – The values to calibrate. For binary classification problems it should have shape (N,) where N is the number of samples to calibrate. For multi-class classification problems, it should have shape (N, C) where C is the number of classes.
Returns:The calibrated probabilities. It has shape (N, C) where N is the number of samples and C is the number of classes.
Return type:numpy array

Utilities

This contains the utility objects used by the StellarGraph library.

stellargraph.utils.plot_history(history, individual_figsize=(7, 4), **kwargs)[source]

Plot the training history of one or more models.

This creates a column of plots, with one plot for each metric recorded during training, with the plot showing the metric vs. epoch. If multiple models have been trained (that is, a list of histories is passed in), each metric plot includes multiple train and validation series.

Validation data is optional (it is detected by metrics with names starting with val_).

Parameters:
  • history – the training history, as returned by tf.keras.Model.fit()
  • individual_figsize (tuple of numbers) – the size of the plot for each metric
  • kwargs – additional arguments to pass to matplotlib.pyplot.subplots()

Datasets

stellargraph.datasets contains classes to download sample network datasets.

The default download path of stellargraph-datasets within the user’s home directory can be changed by setting the STELLARGRAPH_DATASETS_PATH environment variable, and each dataset will be downloaded to a subdirectory within this path.

class stellargraph.datasets.datasets.AIFB[source]

The AIFB dataset describes the AIFB research institute in terms of its staff, research group, and publications. First used for machine learning with RDF in Bloehdorn, Stephan and Sure, York, “Kernel Methods for Mining Instance Data in Ontologies”, The Semantic Web (2008), http://dx.doi.org/10.1007/978-3-540-76298-0_5. It contains ~8k entities, ~29k edges, and 45 different relationships or edge types. In (Bloehdorn et al 2007) the dataset was first used to predict the affiliation (i.e., research group) for people in the dataset. The dataset contains 178 members of a research group with 5 different research groups. The goal is to predict which research group a researcher belongs to.

Further details at: https://figshare.com/articles/AIFB_DataSet/745364

base_directory

The full path of the directory containing this dataset.

Type:str
data_directory

The full path of the directory containing the data content files for this dataset.

Type:str
download(ignore_cache: Optional[bool] = False) → None

Download the dataset (if not already downloaded)

Parameters:ignore_cache (bool, optional) – Ignore a cached dataset and force a re-download.
Raises:FileNotFoundError – If the dataset is not successfully downloaded.
class stellargraph.datasets.datasets.BlogCatalog3[source]

This dataset is crawled from a social blog directory website BlogCatalog http://www.blogcatalog.com and contains the friendship network crawled and group memberships.

Further details at: http://socialcomputing.asu.edu/datasets/BlogCatalog3

base_directory

The full path of the directory containing this dataset.

Type:str
data_directory

The full path of the directory containing the data content files for this dataset.

Type:str
download(ignore_cache: Optional[bool] = False) → None

Download the dataset (if not already downloaded)

Parameters:ignore_cache (bool, optional) – Ignore a cached dataset and force a re-download.
Raises:FileNotFoundError – If the dataset is not successfully downloaded.
load()[source]

Load this dataset into an undirected heterogeneous graph, downloading it if required.

The graph has two types of nodes, ‘user’ and ‘group’, and two types of edges, ‘friend’ and ‘belongs’. The ‘friend’ edges connect two ‘user’ nodes and the ‘belongs’ edges connects ‘user’ and ‘group’ nodes.

The node and edge types are not included in the dataset that is a collection of node and group ids along with the list of edges in the graph.

Important note about the node IDs: The dataset uses integers for node ids. However, the integers from 1 to 39 are used as IDs for both users and groups. This would cause a confusion when constructing the graph object. As a result, we convert all IDs to string and append the character ‘u’ to the integer ID for user nodes and the character ‘g’ to the integer ID for group nodes.

Returns:A StellarGraph object.
class stellargraph.datasets.datasets.CiteSeer[source]

The CiteSeer dataset consists of 3312 scientific publications classified into one of six classes. The citation network consists of 4732 links. Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 3703 unique words.

Further details at: https://linqs.soe.ucsc.edu/data

base_directory

The full path of the directory containing this dataset.

Type:str
data_directory

The full path of the directory containing the data content files for this dataset.

Type:str
download(ignore_cache: Optional[bool] = False) → None

Download the dataset (if not already downloaded)

Parameters:ignore_cache (bool, optional) – Ignore a cached dataset and force a re-download.
Raises:FileNotFoundError – If the dataset is not successfully downloaded.
class stellargraph.datasets.datasets.Cora[source]

The Cora dataset consists of 2708 scientific publications classified into one of seven classes. The citation network consists of 5429 links. Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 1433 unique words.

Further details at: https://linqs.soe.ucsc.edu/data

base_directory

The full path of the directory containing this dataset.

Type:str
data_directory

The full path of the directory containing the data content files for this dataset.

Type:str
download(ignore_cache: Optional[bool] = False) → None

Download the dataset (if not already downloaded)

Parameters:ignore_cache (bool, optional) – Ignore a cached dataset and force a re-download.
Raises:FileNotFoundError – If the dataset is not successfully downloaded.
class stellargraph.datasets.datasets.MovieLens[source]

The MovieLens 100K dataset contains 100,000 ratings from 943 users on 1682 movies.

Further details at: https://grouplens.org/datasets/movielens/100k/

base_directory

The full path of the directory containing this dataset.

Type:str
data_directory

The full path of the directory containing the data content files for this dataset.

Type:str
download(ignore_cache: Optional[bool] = False) → None

Download the dataset (if not already downloaded)

Parameters:ignore_cache (bool, optional) – Ignore a cached dataset and force a re-download.
Raises:FileNotFoundError – If the dataset is not successfully downloaded.
class stellargraph.datasets.datasets.PubMedDiabetes[source]

The PubMed Diabetes dataset consists of 19717 scientific publications from PubMed database pertaining to diabetes classified into one of three classes. The citation network consists of 44338 links. Each publication in the dataset is described by a TF/IDF weighted word vector from a dictionary which consists of 500 unique words.

Further details at: https://linqs.soe.ucsc.edu/data

base_directory

The full path of the directory containing this dataset.

Type:str
data_directory

The full path of the directory containing the data content files for this dataset.

Type:str
download(ignore_cache: Optional[bool] = False) → None

Download the dataset (if not already downloaded)

Parameters:ignore_cache (bool, optional) – Ignore a cached dataset and force a re-download.
Raises:FileNotFoundError – If the dataset is not successfully downloaded.

Random

stellargraph.random contains functions to control the randomness behaviour in StellarGraph.

stellargraph.random.set_seed(seed)[source]

Create a new global RandomState using the provided seed. If seed is None, StellarGraph’s global RandomState object simply wraps the global random state for each external module.

When trying to create a reproducible workflow using this function, please note that this seed only controls the randomness of the non-tensorflow part of the library. Randomness within Tensorflow layers is controlled via Tensorflow’s own global random seed, which can be set using tensorflow.random.set_seed.

Parameters:seed (int, optional) – random seed