Thiviyan Thanapalasingam

ORCID: 0000-0002-0170-9105
Research Areas
  • Semantic Web and Ontologies
  • Advanced Graph Neural Networks
  • Biomedical Text Mining and Ontologies
  • Topic Modeling
  • Data Quality and Management
  • Recommender Systems and Techniques
  • Mental Health Research Topics
  • Complex Network Analysis Techniques
  • Scientific Computing and Data Management
  • Identity, Memory, and Therapy
  • Machine Learning in Materials Science
  • Web Data Mining and Analysis
  • Bioinformatics and Genomic Networks
  • Natural Language Processing Techniques
  • Advanced Biosensing Techniques and Applications
  • Privacy-Preserving Technologies in Data
  • Explainable Artificial Intelligence (XAI)
  • Artificial Intelligence in Healthcare and Education
  • Advanced Text Analysis Techniques
  • Graph Theory and Algorithms
  • Computational Drug Discovery Methods
  • Functional Brain Connectivity Studies
  • Wikis in Education and Collaboration
  • Cell Image Analysis Techniques

University of Amsterdam
2022-2023

Vrije Universiteit Amsterdam
2022

The Open University
2018-2019

Open Knowledge (United Kingdom)
2019

Ontologies of research areas are important tools for characterizing, exploring, and analyzing the research landscape. Some fields are comprehensively described by large-scale taxonomies, e.g., MeSH in Biology and PhySH in Physics. Conversely, current Computer Science taxonomies are coarse-grained and tend to evolve slowly. For instance, the ACM classification scheme contains only about 2K topics and its last version dates back to 2012. In this paper, we introduce the Computer Science Ontology (CSO), a large-scale, automatically generated ontology of research areas,...

10.1162/dint_a_00055 article EN Data Intelligence 2019-12-12
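
A minimal sketch of how an ontology of research areas like the one described above can be explored programmatically with rdflib. The file name and the cso:superTopicOf property IRI are assumptions based on the publicly distributed CSO dumps, not details taken from the paper:

```python
# Sketch: load a local Turtle dump of a research-area ontology and list
# sub-topics of a broader topic via SPARQL. File name and property IRI are
# assumptions, not part of the paper itself.
from rdflib import Graph

g = Graph()
g.parse("CSO.ttl", format="turtle")  # hypothetical local dump of the ontology

query = """
PREFIX cso: <https://cso.kmi.open.ac.uk/schema/cso#>
SELECT ?narrower WHERE {
    ?broader cso:superTopicOf ?narrower .
    FILTER(CONTAINS(STR(?broader), "machine_learning"))
}
LIMIT 10
"""
for row in g.query(query):
    print(row.narrower)
```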

In this article, we describe a reproduction of the Relational Graph Convolutional Network (RGCN). Using our reproduction, we explain the intuition behind the model. Our results empirically validate the correctness of our implementations using benchmark Knowledge Graph datasets on node classification and link prediction tasks. Our explanation provides a friendly understanding of the different components of the RGCN for both users and researchers extending the approach. Furthermore, we introduce two new configurations that are more parameter...

10.7717/peerj-cs.1073 article EN cc-by PeerJ Computer Science 2022-11-02
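
A minimal sketch of the RGCN message-passing step that the article explains, assuming dense per-relation adjacency matrices; the reproduced implementation uses sparse operations and basis/block weight decompositions, which are omitted here:

```python
# Sketch of one RGCN layer: per-relation weighted neighbourhood aggregation
# plus a self-loop transform, followed by a nonlinearity.
import torch
import torch.nn as nn

class SimpleRGCNLayer(nn.Module):
    def __init__(self, num_relations: int, in_dim: int, out_dim: int):
        super().__init__()
        # One weight matrix per relation, plus a separate self-loop weight.
        self.rel_weights = nn.Parameter(torch.randn(num_relations, in_dim, out_dim) * 0.01)
        self.self_weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   [num_nodes, in_dim] node features
        # adj: [num_relations, num_nodes, num_nodes] 0/1 adjacency per relation
        out = self.self_weight(h)
        for r in range(adj.shape[0]):
            deg = adj[r].sum(dim=1, keepdim=True).clamp(min=1)  # per-node normaliser
            out = out + (adj[r] / deg) @ h @ self.rel_weights[r]
        return torch.relu(out)

# Toy usage: 4 nodes, 2 relations, 8-dim input features.
h = torch.randn(4, 8)
adj = torch.randint(0, 2, (2, 4, 4)).float()
layer = SimpleRGCNLayer(num_relations=2, in_dim=8, out_dim=16)
print(layer(h, adj).shape)  # torch.Size([4, 16])
```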

Language Models (LMs) have proven to be useful in various downstream applications, such as summarisation, translation, question answering and text classification. LMs are becoming increasingly important tools in Artificial Intelligence, because of the vast quantity of information they can store. In this work, we present ProP (Prompting as Probing), which utilizes GPT-3, a large Language Model originally proposed by OpenAI in 2020, to perform the task of Knowledge Base Construction (KBC). ProP implements a multi-step approach...

10.48550/arxiv.2208.11057 preprint EN other-oa arXiv (Cornell University) 2022-01-01
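
An illustrative sketch of prompting a language model for knowledge base construction in the spirit of the work above: given a subject and relation, ask the model for object entities. The `complete` function, the few-shot prompt, and the parsing are hypothetical simplifications, not ProP's actual pipeline:

```python
# Sketch: few-shot prompting of an LLM to extract object entities for a
# (subject, relation) pair, a simplified stand-in for a multi-step KBC pipeline.
def complete(prompt: str) -> str:
    """Placeholder for an LLM completion call (e.g. a GPT-3-style API)."""
    raise NotImplementedError

FEW_SHOT = (
    "Subject: France\n"
    "Relation: shares-border-with\n"
    "Objects: Spain, Belgium, Germany\n\n"
)

def predict_objects(subject: str, relation: str) -> list[str]:
    prompt = FEW_SHOT + f"Subject: {subject}\nRelation: {relation}\nObjects:"
    answer = complete(prompt)
    # Split the completion into candidate object entities and normalise them.
    return [obj.strip() for obj in answer.split(",") if obj.strip()]
```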

Knowledge Graph Embedding (KGE) models are used to learn continuous representations of entities and relations. A key task in the literature is predicting missing links between entities. However, Knowledge Graphs are not just sets of links but also have semantics underlying their structure. Semantics is crucial in several downstream tasks, such as query answering or reasoning. We introduce the subgraph inference task, where a model has to generate likely and semantically valid subgraphs. We propose IntelliGraphs, a set of five new...

10.48550/arxiv.2307.06698 preprint EN cc-by arXiv (Cornell University) 2023-01-01
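
A minimal sketch of the two ingredients the abstract contrasts, assuming a DistMult-style embedding model: a subgraph (set of triples) scored by summing per-triple scores, and a toy semantic constraint of the kind that separates valid from invalid subgraphs. This is not the IntelliGraphs benchmark code itself:

```python
# Sketch: DistMult-style triple scoring, subgraph scoring, and a toy logical
# constraint; embeddings are random placeholders rather than trained values.
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 10, 3, 16
E = rng.normal(size=(num_entities, dim))   # entity embeddings
R = rng.normal(size=(num_relations, dim))  # relation embeddings

def triple_score(s: int, r: int, o: int) -> float:
    # DistMult: <e_s, w_r, e_o> = sum_i e_s[i] * w_r[i] * e_o[i]
    return float(np.sum(E[s] * R[r] * E[o]))

def subgraph_score(triples: list[tuple[int, int, int]]) -> float:
    return sum(triple_score(s, r, o) for s, r, o in triples)

def is_semantically_valid(triples: list[tuple[int, int, int]]) -> bool:
    # Toy constraint: relation 0 may never be reflexive (subject != object).
    return all(not (r == 0 and s == o) for s, r, o in triples)

candidate = [(0, 0, 1), (1, 2, 3)]
print(subgraph_score(candidate), is_semantically_valid(candidate))
```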

Graph neural networks (GNNs) learn the representation of nodes in a graph by aggregating neighborhood information in various ways. As these networks grow in depth, their receptive field grows exponentially due to the increase in neighborhood sizes, resulting in high memory costs. Graph sampling addresses these issues by training GNNs on a small ratio of the nodes in the graph. This way, GNNs can scale to much larger graphs. Most sampling methods focus on fixed heuristics, which may not generalize to different graph structures or tasks. We introduce GRAPES, an adaptive sampling method that learns to identify sets...

10.48550/arxiv.2310.03399 preprint EN other-oa arXiv (Cornell University) 2023-01-01
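
A simplified sketch of layer-wise node sampling for GNN training as motivated above: score the neighbours of the current node set and keep a fixed-size subset so memory stays bounded as depth grows. The scoring function here is a hypothetical stand-in for the learned sampler in GRAPES:

```python
# Sketch: expand one GNN layer by keeping only the `budget` highest-scoring
# neighbours of the current frontier; scores stand in for a learned sampler.
import numpy as np

def sample_next_layer(adj: dict[int, list[int]], frontier: set[int],
                      scores: dict[int, float], budget: int) -> set[int]:
    # Collect all candidate neighbours of the current frontier.
    candidates = {n for node in frontier for n in adj.get(node, [])} - frontier
    # Keep only the highest-scoring candidates, bounding memory per layer.
    ranked = sorted(candidates, key=lambda n: scores.get(n, 0.0), reverse=True)
    return set(ranked[:budget])

adj = {0: [1, 2, 3], 1: [4, 5], 2: [5, 6], 3: [7]}
scores = {n: float(np.random.rand()) for n in range(8)}  # placeholder scores
print(sample_next_layer(adj, frontier={0}, scores=scores, budget=2))
```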

We study the problem of combining neural networks with symbolic reasoning. Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference, limiting the scalability of PNL solutions. We introduce Approximate Neurosymbolic Inference (A-NeSI): a new framework that uses neural networks for scalable approximate inference. A-NeSI 1) performs approximate inference in polynomial time without changing the semantics of probabilistic logics; 2) is trained using data generated by...

10.48550/arxiv.2212.12393 preprint EN other-oa arXiv (Cornell University) 2022-01-01
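
A worked toy example of the kind of probabilistic-logic query that exact inference must enumerate and that the framework above approximates with a neural network: the probability that two independently distributed digits sum to a target is a weighted count over all satisfying pairs, and this enumeration grows exponentially with the number of digits. The digit-sum setting is a standard illustration, not code from the paper:

```python
# Sketch: exact weighted model counting for "digit1 + digit2 == target",
# given independent probability vectors over the two digits.
import itertools
import numpy as np

def prob_sum_equals(p1: np.ndarray, p2: np.ndarray, target: int) -> float:
    # p1, p2: length-10 probability vectors over digits 0..9 (e.g. softmax outputs).
    return sum(p1[a] * p2[b]
               for a, b in itertools.product(range(10), repeat=2)
               if a + b == target)

p1 = np.full(10, 0.1)  # uniform belief over the first digit
p2 = np.full(10, 0.1)  # uniform belief over the second digit
print(prob_sum_equals(p1, p2, target=9))  # 10 satisfying pairs -> 0.1
```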
Coming Soon ...