Understanding structured data across varied domains, from social networks and biological systems to knowledge graphs, is becoming increasingly critical.
Traditional deep learning models (e.g., convolutional networks, transformers) are well-suited to ordered data like images and text but often fall short on the irregularity and complexity of graph-structured data.
This challenge has led to the development of Graph Neural Networks (GNNs), a class of deep learning models tailored to graph data, emphasising the crucial interdependencies between nodes.
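For intuition, here is a minimal sketch of the message-passing idea at the heart of GNNs: each node aggregates its neighbours' features and transforms the result with a shared learnable map. This is a generic illustration in NumPy (mean aggregation plus a ReLU), not the exact architecture of any paper listed below.

```python
import numpy as np

def message_passing_layer(adj, features, weight):
    """One round of neighbourhood aggregation: each node averages its
    neighbours' features, then applies a shared linear map and a ReLU.
    `adj` is a dense {0,1} adjacency matrix with self-loops included."""
    deg = adj.sum(axis=1, keepdims=True)                 # node degrees
    aggregated = (adj @ features) / np.maximum(deg, 1)   # mean over neighbours
    return np.maximum(aggregated @ weight, 0)            # linear map + ReLU

# Toy graph: a 4-node path with self-loops, 3-dim inputs, 2-dim outputs.
rng = np.random.default_rng(0)
adj = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
features = rng.normal(size=(4, 3))
weight = rng.normal(size=(3, 2))
print(message_passing_layer(adj, features, weight))
```

Stacking several such layers lets information propagate over multi-hop neighbourhoods, which is what allows GNNs to exploit the interdependencies between nodes.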
Motivated by the intricate web of multi-faceted relationships found within various datasets, my research focuses on machine learning for data-rich structures such as causal graphs and hypergraphs.
Causal graphs encapsulate cause-and-effect dynamics and are pivotal in fields that require understanding the underlying mechanisms of observed phenomena, such as epidemiology and economics.
Hypergraphs take this a step further by capturing higher-order relationships, where edges can connect more than two nodes, facilitating a multidimensional analysis of interactions.
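To make the higher-order structure concrete, here is a small illustrative sketch of how a hypergraph can be represented with a node-by-hyperedge incidence matrix, and how features can be pooled through hyperedges in two stages. This is a generic mean-pooling scheme shown for intuition only, not the specific method of the NeurIPS'19 paper below.

```python
import numpy as np

def hypergraph_convolution(incidence, features):
    """Two-stage aggregation on a hypergraph: pool node features into each
    hyperedge, then scatter hyperedge summaries back to member nodes.
    `incidence` is a (num_nodes x num_edges) {0,1} matrix; column j marks
    the nodes joined by hyperedge j (any number of them, not just two)."""
    edge_size = incidence.sum(axis=0, keepdims=True)       # nodes per hyperedge
    edge_feats = (incidence.T @ features) / np.maximum(edge_size.T, 1)
    node_deg = incidence.sum(axis=1, keepdims=True)        # hyperedges per node
    return (incidence @ edge_feats) / np.maximum(node_deg, 1)

# Toy hypergraph: 5 nodes, 2 hyperedges; edge 0 = {0,1,2}, edge 1 = {2,3,4}.
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]], dtype=float)
X = np.arange(10, dtype=float).reshape(5, 2)
print(hypergraph_convolution(H, X))
```

Because a hyperedge pools all of its members jointly, a single aggregation step captures group-level interactions that ordinary pairwise edges can only approximate.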
In Proceedings of NeurIPS'19 | code | slides
An innovative and effective extension of graph neural networks to hypergraphs, validated by extensive experiments on real-world datasets.
In Proceedings of NeurIPS'20 | code
Frameworks that extend Message Passing Neural Networks to handle multi-relational and recursive structures in real-world learning tasks.
In Proceedings of ICDM'22 | code
A convex approach to training two-layer ReLU-based Graph Neural Networks that ensures global optimality, in a field where theoretical understanding of optimisation has been limited.
To appear in Proceedings of EACL'24
Node-specific message-passing radii for graph machine learning in NLP, enhancing model flexibility and validated on several NLP tasks.
In Proceedings of EMNLP'19 | video
A summary of Graph Neural Network models in NLP, covering a broad range of tasks such as relation extraction and question answering.