Chaim Baskin presents "Graph Representation Learning via Aggregation Enhancement."
On 2023-02-09 at 14:00, room G205, Karlovo náměstí 13, Praha 2.
Graph neural networks (GNNs) have become increasingly popular for their ability
to handle graph-structured data. However, the mechanisms for proper aggregation
and propagation of information within these systems still need to be better
understood. In this talk, I will introduce a novel approach for improving
information aggregation and propagation in GNNs using kernel regression (KR)
methods. We demonstrate that minimizing KR loss leads to mutual information
(MI)
maximization. Based on that, we propose some KR configurations for supervised
and self-supervised graph representation learning. In a supervised setting,
using KR as a regularization term helps to prevent over-smoothing and
over-squashing in deep GNNs. We also introduce a self-supervised algorithm
named
Graph Information Representation Learning (GIRL), based on KR, which
consistently outperforms existing self-supervised methods on various datasets.
Our results highlight the potential for KR to improve the performance of GNNs
and contribute to the advancement of graph representation learning.
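To give a flavor of the idea, kernel regression fits targets from embeddings through a kernel ridge solve, and the residual can serve as a loss on the embeddings. The sketch below is illustrative only, not the talk's actual formulation: the RBF kernel, the bandwidth `gamma`, and the ridge parameter `lam` are assumptions chosen for demonstration.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise RBF (Gaussian) kernel matrix: exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kr_loss(Z, Y, gamma=0.5, lam=1.0):
    """Kernel-regression loss: mean squared residual of predicting
    targets Y from embeddings Z via kernel ridge regression.
    A low loss means Z carries much information about Y."""
    K = rbf_kernel(Z, gamma)
    n = K.shape[0]
    # Solve (K + lam*I) alpha = Y, then predict Y_hat = K @ alpha.
    alpha = np.linalg.solve(K + lam * np.eye(n), Y)
    Y_hat = K @ alpha
    return float(np.mean((Y - Y_hat) ** 2))
```

Used as a regularization term during GNN training, a lower `kr_loss` between node embeddings and chosen targets indicates the embeddings retain more of the relevant information; targets that are smooth functions of the embeddings yield a smaller loss than unrelated ones.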