Understanding structured data across varied domains, from social networks and biological systems to knowledge graphs, is becoming increasingly critical.
Traditional deep learning models (e.g., convolutional networks, transformers) are well-suited to regularly ordered data such as images and text, but often fall short when faced with the irregularity and complexity of graph-structured data.
This challenge has led to the development of Graph Neural Networks (GNNs), a class of deep learning models tailored to graph data, emphasising the crucial interdependencies between nodes.
Motivated by the intricate web of multi-faceted relationships found within many datasets, this research focuses on machine learning over relation-rich structures such as causal graphs and hypergraphs.
Causal graphs encapsulate cause-and-effect dynamics and are pivotal in fields that require understanding the underlying mechanisms of observed phenomena, such as epidemiology and economics.
Hypergraphs take this a step further by capturing higher-order relationships, where edges can connect more than two nodes, facilitating a multidimensional analysis of interactions.
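To make the distinction concrete, the following is a minimal sketch (not taken from the source) of how a hypergraph can be represented: hyperedges map to node sets of arbitrary size, and the structure can be summarised as a node-by-hyperedge incidence matrix. The node and hyperedge names are illustrative assumptions.

```python
# Minimal sketch: a hypergraph stored as a hyperedge -> node-set mapping.
# Node/edge names are illustrative, not from the source.

def incidence_matrix(nodes, hyperedges):
    """Build the |V| x |E| incidence matrix: M[i][j] = 1 iff node i is in hyperedge j."""
    index = {v: i for i, v in enumerate(nodes)}
    cols = list(hyperedges.values())
    m = [[0] * len(cols) for _ in nodes]
    for j, members in enumerate(cols):
        for v in members:
            m[index[v]][j] = 1
    return m

nodes = ["a", "b", "c", "d"]
# One hyperedge joins three nodes at once -- impossible with ordinary pairwise edges.
hyperedges = {"e1": {"a", "b", "c"}, "e2": {"c", "d"}}
M = incidence_matrix(nodes, hyperedges)
# M is [[1, 0], [1, 0], [1, 1], [0, 1]]
```

Unlike an adjacency matrix, each column here can have any number of ones, which is what lets a hypergraph encode higher-order interactions directly.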
To Appear In Proceedings of EACL'24
Node-specific message-passing radii in graph machine learning for NLP enhance model flexibility, as validated on several NLP tasks.
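The source does not spell out the method's details, but one plausible reading of node-specific radii can be sketched as follows: instead of every node aggregating over a fixed, global number of hops, each node aggregates neighbour features up to its own hop distance. The graph, features, and per-node radii below are hypothetical, and the aggregation (a plain sum over the receptive field) stands in for whatever learned aggregation the paper uses.

```python
# Hedged sketch of node-specific message-passing radii: each node v aggregates
# (here, by summing) features of all nodes within its own hop radius radii[v].
# This is an illustrative interpretation, not the paper's exact algorithm.
from collections import deque

def neighbors_within(adj, source, radius):
    """BFS: all nodes (including source) within `radius` hops of `source`."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        v, d = frontier.popleft()
        if d == radius:
            continue
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                frontier.append((u, d + 1))
    return seen

def radius_aggregate(adj, feats, radii):
    """For each node, sum features over its personal receptive field."""
    return {v: sum(feats[u] for u in neighbors_within(adj, v, radii[v]))
            for v in adj}

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path graph 0-1-2-3
feats = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
radii = {0: 1, 1: 2, 2: 1, 3: 3}               # per-node radius, not a global depth
out = radius_aggregate(adj, feats, radii)
# out == {0: 3.0, 1: 10.0, 2: 9.0, 3: 10.0}
```

The point of the sketch is that nodes 0 and 2 see only their immediate neighbourhoods while nodes 1 and 3 draw on the whole path, which is the kind of per-node flexibility a fixed-depth GNN cannot express.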