- Advanced Graph Neural Networks
- Topic Modeling
- Domain Adaptation and Few-Shot Learning
- Data Quality and Management
- Multimodal Machine Learning Applications
- Semantic Web and Ontologies
- Natural Language Processing Techniques
- Intelligent Tutoring Systems and Adaptive Learning
- Artificial Intelligence in Healthcare
- Text and Document Classification Technologies
- Machine Learning in Healthcare
- Machine Learning in Materials Science
- Epigenetics and DNA Methylation
- Rough Sets and Fuzzy Logic
- Multi-Criteria Decision Making
- DNA and Biological Computing
- Neural Networks and Applications
- Cognitive Computing and Networks
- Algorithms and Data Compression
- Complex Network Analysis Techniques
- Computational Drug Discovery Methods
Zhejiang University
2021-2024
Zhejiang University of Science and Technology
2022-2024
Nanjing University
2018-2019
Nanjing University of Information Science and Technology
2018
We study the problem of embedding-based entity alignment between knowledge graphs (KGs). Previous works mainly focus on the relational structures of entities. Some further incorporate another type of features, such as attributes, for refinement. However, a vast number of features are still unexplored or not equally treated together, which impairs the accuracy and robustness of alignment. In this paper, we propose a novel framework that unifies multiple views of entities to learn embeddings for entity alignment. Specifically, we embed entities based on their names,...
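The multi-view idea above can be illustrated with a minimal sketch: each entity gets one embedding per view (e.g., name, relation, attribute), the normalized views are combined into a unified embedding, and alignment is read off by cosine nearest neighbor. The view count, dimensions, and toy data below are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

def combine_views(views):
    # views: list of (n_entities, dim) matrices, one per view
    # (e.g., name / relation / attribute); average after normalization
    return normalize(sum(normalize(v) for v in views))

# toy per-view embeddings for two KGs whose entities correspond 1:1
n, d = 5, 8
base = rng.normal(size=(n, d))
kg1_views = [base + 0.05 * rng.normal(size=(n, d)) for _ in range(3)]
kg2_views = [base + 0.05 * rng.normal(size=(n, d)) for _ in range(3)]

e1, e2 = combine_views(kg1_views), combine_views(kg2_views)
pred = np.argmax(e1 @ e2.T, axis=1)  # nearest neighbor by cosine similarity
print(pred)  # with low noise, entity i should align to entity i
```

Combining views makes the alignment robust to one noisy view, since the others still pull corresponding entities together.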
Knowledge Graphs (KGs) play a pivotal role in advancing various AI applications, with the semantic web community's exploration into multi-modal dimensions unlocking new avenues for innovation. In this survey, we carefully review over 300 articles, focusing on KG-aware research along two principal aspects: KG-driven Multi-Modal (KG4MM) learning, where KGs support multi-modal tasks, and Multi-Modal Knowledge Graph (MM4KG), which extends KG studies into the MMKG realm. We begin by defining MMKGs, then explore their construction progress. Our...
We study the problem of knowledge graph (KG) embedding. A widely-established assumption in this field is that similar entities are likely to have similar relational roles. However, existing related methods derive KG embeddings mainly based on triple-level learning, which lacks the capability of capturing long-term dependencies among entities. Moreover, triple-level learning is insufficient for the propagation of semantic information among entities, especially in the case of cross-KG embedding. In this paper, we propose recurrent skipping networks (RSNs), which employ a...
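The skipping mechanism can be sketched as follows: an RNN reads a relational path of alternating entities and relations, and at each relation position the subject entity's embedding is "skipped" forward into the output state, so the object prediction sees the subject directly. The vanilla-RNN cell, the weight shapes, and the mixing matrices `S1`/`S2` here are simplifying assumptions, not the exact RSN architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# weights of a minimal vanilla RNN cell plus two skipping matrices
W_h, W_x = rng.normal(size=(d, d)), rng.normal(size=(d, d))
S1, S2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def rsn_forward(path):
    """path: alternating entity/relation embeddings [e1, r1, e2, r2, ...].
    At relation positions, the subject entity (previous input) is mixed
    into the output state so it can directly inform object prediction."""
    h = np.zeros(d)
    outputs = []
    for t, x in enumerate(path):
        h = np.tanh(W_h @ h + W_x @ x)   # ordinary recurrent update
        if t % 2 == 1:                   # x is a relation: skip the subject in
            outputs.append(S1 @ h + S2 @ path[t - 1])
        else:
            outputs.append(h)
    return outputs

path = [rng.normal(size=d) for _ in range(4)]  # e1, r1, e2, r2
outs = rsn_forward(path)
print(len(outs), outs[1].shape)
```

The skip connection is what lets semantic information travel along long paths rather than being confined to a single triple.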
Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images. However, current MMEA algorithms rely on KG-level modality fusion strategies for multi-modal entity representation, which ignores the variations of modality preferences across entities, thus compromising robustness against noise in modalities such as blurry images and relations. This paper introduces MEAformer, a multi-modal transformer approach for meta modality hybrid,...
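The contrast with KG-level fusion can be shown with a small sketch: instead of one fixed weight per modality shared by all entities, each entity's modality weights are computed from its own modality features, so a noisy modality can be down-weighted for that entity alone. The scoring vector and dimensions below are illustrative assumptions, not MEAformer's transformer internals.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def entity_level_fusion(modal_embs, score_w):
    """modal_embs: (n_modalities, dim) features of ONE entity.
    Weights come from the entity's own features, so e.g. a blurry
    image can be down-weighted for this entity specifically."""
    scores = modal_embs @ score_w          # one scalar score per modality
    weights = softmax(scores)
    return weights, weights @ modal_embs   # weighted fusion

d = 6
score_w = rng.normal(size=d)
entity = rng.normal(size=(3, d))  # graph / image / attribute views
w, fused = entity_level_fusion(entity, score_w)
print(w.round(3), fused.shape)
```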
Recent advancements in large language models (LLMs) have significantly improved various natural language processing (NLP) tasks. Typically, LLMs are trained to predict the next token, aligning well with many NLP tasks. However, in knowledge graph (KG) scenarios, entities are the fundamental units, and identifying an entity requires at least several tokens. This leads to a granularity mismatch between KGs and natural languages. To address this issue, we propose K-ON, which integrates KG knowledge into the LLM by employing multiple head layers for...
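The granularity mismatch and the multiple-head idea can be sketched numerically: with k output heads on one hidden state, the model scores all k tokens of an entity's name in a single step, and an entity-level score is the sum of per-position token log-probabilities. The toy vocabulary, entity token IDs, and random heads below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_softmax(x):
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

d, vocab, k = 8, 20, 3
hidden = rng.normal(size=d)             # stand-in for the LLM's last hidden state
heads = rng.normal(size=(k, vocab, d))  # one output head per token position

# entities as fixed-length token-id sequences (padded/truncated to k tokens)
entities = {"Q1": [4, 7, 2], "Q2": [1, 1, 9]}

def score_entity(tokens):
    """Entity-level score: sum of per-position token log-probs,
    each produced by its own head in a single forward step."""
    return sum(log_softmax(heads[i] @ hidden)[t] for i, t in enumerate(tokens))

scores = {e: score_entity(toks) for e, toks in entities.items()}
print(max(scores, key=scores.get))
```

Scoring whole entities in one step avoids autoregressively decoding token by token for every candidate.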
Multi-modal knowledge graph completion (MMKGC) aims to discover unobserved knowledge from given multi-modal knowledge graphs (MMKGs), collaboratively leveraging the structural information of the triples and the multi-modal information of the entities to overcome inherent incompleteness. Existing MMKGC methods usually extract multi-modal features with pre-trained models and employ fusion modules to integrate those features for entities. This often results in coarse handling of entity information, overlooking nuanced, fine-grained semantic details and their complex interactions. To tackle this...
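The coarse-vs-fine distinction can be sketched concretely: rather than collapsing each modality to one vector before fusing, keep several "tokens" per modality (e.g., image patches, description tokens) and let them interact through cross-attention before pooling. This is a generic fine-grained fusion sketch under assumed shapes, not the specific model proposed here.

```python
import numpy as np

rng = np.random.default_rng(4)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fine_grained_fusion(img_tokens, txt_tokens):
    """Keep several 'tokens' per modality and let every image token
    attend over all text tokens (cross-attention) before pooling."""
    attn = softmax(img_tokens @ txt_tokens.T / np.sqrt(img_tokens.shape[1]))
    attended = attn @ txt_tokens                 # (n_img, d)
    return (img_tokens + attended).mean(axis=0)  # fused entity embedding

d = 8
img = rng.normal(size=(4, d))   # e.g., image patch features
txt = rng.normal(size=(5, d))   # e.g., attribute/description token features
fused = fine_grained_fusion(img, txt)
print(fused.shape)
```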
The incompleteness of large knowledge graphs (KGs) has motivated many researchers to propose methods that automatically find missing edges in KGs. A promising approach for KG completion (link prediction) is embedding a KG into a continuous vector space. There are different methods in the literature that learn such representations (latent features of a KG). The benchmark dataset FB15k has been widely employed to evaluate these methods. However, it has been noted that FB15k contains pairs of triples in which the two triples of a pair represent the same relationship in reverse directions....
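Such reverse-relation pairs can be detected mechanically: for two relations, check what fraction of one relation's (head, tail) pairs appear reversed under the other. The toy triples and the 0.8 threshold below are assumptions for illustration; the relation names are invented, not actual FB15k identifiers.

```python
from collections import defaultdict

# toy triples: 'directed_by' and 'films_directed' mirror each other
triples = [
    ("a", "directed_by", "b"), ("c", "directed_by", "d"),
    ("b", "films_directed", "a"), ("d", "films_directed", "c"),
    ("a", "born_in", "x"),
]

pairs_by_rel = defaultdict(set)
for h, r, t in triples:
    pairs_by_rel[r].add((h, t))

def reverse_overlap(r1, r2):
    """Fraction of r1's (h, t) pairs that appear reversed under r2."""
    p1 = pairs_by_rel[r1]
    return len({(t, h) for h, t in p1} & pairs_by_rel[r2]) / len(p1)

for r1 in pairs_by_rel:
    for r2 in pairs_by_rel:
        if r1 < r2 and reverse_overlap(r1, r2) >= 0.8:
            print(r1, "<->", r2)  # candidate reverse-relation pair
```

When such a pair is split between training and test sets, a model can "predict" test triples by memorizing their reverses, which inflates benchmark scores.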
The advancement of Multi-modal Pre-training highlights the necessity for a robust Multi-Modal Knowledge Graph (MMKG) representation learning framework. Such a framework is crucial for integrating structured knowledge into multi-modal Large Language Models (LLMs) at scale, aiming to alleviate issues like misconceptions and multi-modal hallucinations. In this work, to evaluate models' ability to accurately embed entities within MMKGs, we focus on two widely researched tasks: Multi-modal Knowledge Graph Completion (MKGC) and Multi-modal Entity Alignment (MMEA)....
Embedding-based methods have attracted increasing attention in recent entity alignment (EA) studies. Despite the great promise they can offer, there are still several limitations. The most notable one is that they identify the aligned entities based on cosine similarity, ignoring the semantics underlying the embeddings themselves. Furthermore, these methods are shortsighted, heuristically selecting the closest entity as the target and allowing multiple entities to match the same candidate. To address these limitations, we model entity alignment as a sequential decision-making...
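The many-to-one problem described above is easy to demonstrate: independent per-entity argmax lets two source entities claim the same target, whereas deciding pairs sequentially (best pair first, removing matched entities) enforces one-to-one alignment. This greedy sequential matcher is a simplification for illustration, not the paper's decision-making model.

```python
import numpy as np

sim = np.array([  # similarity between 3 source and 3 target entities
    [0.90, 0.80, 0.10],
    [0.85, 0.70, 0.20],
    [0.10, 0.20, 0.60],
])

# independent argmax: source 0 AND source 1 both grab target 0 (a conflict)
greedy = sim.argmax(axis=1)

def sequential_match(sim):
    """Decide pairs one at a time, best pair first, removing matched
    entities so no target is used twice."""
    sim = sim.copy()
    matches = {}
    for _ in range(sim.shape[0]):
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        matches[i] = j
        sim[i, :], sim[:, j] = -np.inf, -np.inf
    return matches

print(greedy.tolist(), sequential_match(sim))
```

Here the sequential matcher assigns source 1 to target 1 once target 0 is taken, resolving the conflict the greedy rule creates.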
Multi-modal knowledge graphs (MMKGs) store structured world knowledge containing rich multi-modal descriptive information. To overcome their inherent incompleteness, multi-modal knowledge graph completion (MMKGC) aims to discover unobserved knowledge from given MMKGs, leveraging both the structural information of the triples and the multi-modal information of the entities. Existing MMKGC methods usually extract multi-modal features with pre-trained models and employ a fusion module to integrate those features into triple prediction. However, this often results in coarse handling of multi-modal data, overlooking the nuanced,...
Knowledge graph (KG) completion aims at filling in the missing facts in a KG, where a fact is typically represented as a triple in the form of (head, relation, tail). Traditional KG completion methods compel two-thirds of a triple to be provided (e.g., head and relation) to predict the remaining one. In this paper, we propose a new method that extends multi-layer recurrent neural networks (RNNs) to model triples as sequences. It obtains state-of-the-art performance on the common entity prediction task, i.e., given head (or tail) and relation, predicting the tail (or head),...
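Treating a triple as a short sequence is what lets one sequence model predict either end: each triple yields a forward sequence predicting the tail and, with an added inverse relation, a reverse sequence predicting the head. A minimal sketch of that data construction, with invented toy triples:

```python
# a triple as a short sequence lets one model predict either end:
# (h, r) -> t  and  (t, r_inv) -> h, where r_inv is an added inverse relation
triples = [("paris", "capital_of", "france"),
           ("berlin", "capital_of", "germany")]

def to_sequences(triples):
    seqs = []
    for h, r, t in triples:
        seqs.append(([h, r], t))           # forward: predict the tail
        seqs.append(([t, r + "_inv"], h))  # reverse: predict the head
    return seqs

for context, target in to_sequences(triples):
    print(context, "->", target)
```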
We consider the problem of learning knowledge graph (KG) embeddings for entity alignment (EA). Current methods use embedding models mainly focusing on triple-level learning, which lacks the ability of capturing long-term dependencies existing in KGs. Consequently, embedding-based EA methods heavily rely on the amount of prior (known) alignment, because identity information cannot be efficiently propagated from one KG to another. In this paper, we propose RSN4EA (recurrent skipping networks for EA), which leverages biased random...
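The biased random walks can be sketched on a tiny merged graph: entities of both KGs are connected through seed alignment links, and the walk prefers those cross-KG links with some probability so that sampled paths mix entities from both graphs. The graph, the `seed` edge label, and the bias value below are illustrative assumptions, a simplification of the actual sampling scheme.

```python
import random

random.seed(42)

# tiny merged graph: entities from two KGs with cross-KG 'seed' links
edges = {
    "a1": [("r1", "b1"), ("seed", "a2")],
    "b1": [("r2", "a1")],
    "a2": [("r3", "b2"), ("seed", "a1")],
    "b2": [("r4", "a2")],
}

def biased_walk(start, length, cross_bias=0.8):
    """Random walk that follows a cross-KG 'seed' edge with probability
    cross_bias when one is available, so paths span both KGs."""
    path, node = [start], start
    for _ in range(length):
        nbrs = edges[node]
        cross = [e for e in nbrs if e[0] == "seed"]
        pool = cross if cross and random.random() < cross_bias else nbrs
        rel, node = random.choice(pool)
        path += [rel, node]
    return path

print(biased_walk("a1", 3))
```

Feeding such cross-KG paths to a sequence model is what lets identity information propagate between the two graphs beyond single triples.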
Graph neural network (GNN)-based methods have demonstrated remarkable performance in various knowledge graph (KG) tasks. However, most existing approaches rely on observing all entities during training, posing a challenge in real-world knowledge graphs where new entities emerge frequently. To address this limitation, we introduce the Decentralized Attention Network (DAN). DAN leverages neighbor context as the query vector to score the neighbors of an entity, thereby distributing the entity semantics only among its...
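The decentralized idea can be sketched in a few lines: the attention query is built from the aggregated neighbor context rather than the entity's own embedding, so an entity never seen during training can still be represented from its observed neighbors. Mean-pooling as the context aggregator and the dot-product scoring are simplifying assumptions, not DAN's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dan_embed(neighbor_embs):
    """Represent an entity purely from its neighbors: the query is the
    mean neighbor context, not the entity's own (possibly unseen) vector."""
    query = neighbor_embs.mean(axis=0)        # decentralized query
    weights = softmax(neighbor_embs @ query)  # score each neighbor
    return weights @ neighbor_embs            # attended representation

nbrs = rng.normal(size=(4, 6))  # embeddings of 4 observed neighbors
emb = dan_embed(nbrs)           # works even if the entity itself is new
print(emb.shape)
```

Because nothing entity-specific is looked up, the same procedure applies unchanged to entities that emerge after training.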
The objective of Entity Alignment (EA) is to identify equivalent entity pairs from multiple Knowledge Graphs (KGs) and create a more comprehensive and unified KG. The majority of EA methods have primarily focused on the structural modality of KGs, lacking exploration of multi-modal information. A few methods have made good attempts in this field. Still, they have two shortcomings: (1) inconsistent and inefficient modality modeling that designs complex and distinct models for each modality; (2) ineffective modality fusion due to the heterogeneous nature...