Publication

Graph Neural Networks Need Cluster-Normalize-Activate Modules

Arseny Skryagin; Felix Divo; Mohammad Amin Ali; Devendra Singh Dhami; Kristian Kersting
In: Amir Globerson; Lester Mackey; Danielle Belgrave; Angela Fan; Ulrich Paquet; Jakub M. Tomczak; Cheng Zhang (Eds.). Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024. Neural Information Processing Systems (NeurIPS), 2024.

Abstract

Graph Neural Networks (GNNs) are non-Euclidean deep learning models for graph-structured data. Despite their successful and diverse applications, oversmoothing prohibits deep architectures, since node features converge to a single fixed point. This severely limits their potential to solve complex tasks. To counteract this tendency, we propose a plug-and-play module consisting of three steps: Cluster → Normalize → Activate (CNA). By applying CNA modules, GNNs search for and form super nodes in each layer, which are normalized and activated individually. We demonstrate in node classification and property prediction tasks that CNA significantly improves accuracy over the state-of-the-art. In particular, CNA reaches 94.18% and 95.75% accuracy on Cora and CiteSeer, respectively. CNA also benefits GNNs in regression tasks, reducing the mean squared error relative to all baselines. At the same time, GNNs with CNA require substantially fewer learnable parameters than competing architectures.
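To make the three steps concrete, here is a minimal, illustrative sketch of a CNA-style module in PyTorch. It is not the paper's implementation: plain k-means, per-cluster standardization, and ReLU stand in for the learnable clustering, normalization, and activation components described in the paper, and the function name `cna` and its parameters are hypothetical.

```python
import torch

def cna(h: torch.Tensor, num_clusters: int = 4, iters: int = 10) -> torch.Tensor:
    """Sketch of Cluster -> Normalize -> Activate on node features h (N, d)."""
    # --- Cluster: naive k-means over the node features ---
    centroids = h[torch.randperm(h.size(0))[:num_clusters]].clone()
    for _ in range(iters):
        assign = torch.cdist(h, centroids).argmin(dim=1)  # cluster id per node
        for c in range(num_clusters):
            mask = assign == c
            if mask.any():
                centroids[c] = h[mask].mean(dim=0)

    # --- Normalize and Activate: each cluster ("super node") individually ---
    out = torch.empty_like(h)
    for c in range(num_clusters):
        mask = assign == c
        if mask.any():
            hc = h[mask]
            hc = (hc - hc.mean(dim=0)) / (hc.std(dim=0, unbiased=False) + 1e-5)
            out[mask] = torch.relu(hc)  # stand-in for a learnable activation
    return out

# Plug-and-play usage after any message-passing layer (hypothetical):
#   h = conv(h, edge_index)
#   h = cna(h, num_clusters=8)
```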