torch_geometric.explain.algorithm.AttentionExplainer

class AttentionExplainer(reduce: str = 'max')

Bases: ExplainerAlgorithm

An explainer that uses the attention coefficients produced by an attention-based GNN (e.g., GATConv, GATv2Conv, or TransformerConv) as edge explanations. Attention scores across layers and heads will be aggregated according to the reduce argument.

Parameters:

reduce (str, optional) – The method to reduce the attention scores across layers and heads. (default: "max")

forward(model: Module, x: Tensor, edge_index: Tensor, *, target: Tensor, index: Optional[Union[int, Tensor]] = None, **kwargs) → Explanation
forward(model: Module, x: Dict[str, Tensor], edge_index: Dict[Tuple[str, str, str], Tensor], *, target: Tensor, index: Optional[Union[int, Tensor]] = None, **kwargs) → HeteroExplanation

Generate explanations based on attention coefficients.

Return type:

Union[Explanation, HeteroExplanation]

supports() → bool

Checks if the explainer supports the user-defined settings provided in self.explainer_config and self.model_config.

Return type:

bool