- Advanced Graph Neural Networks
- Topic Modeling
- Multimodal Machine Learning Applications
- Natural Language Processing Techniques
- Domain Adaptation and Few-Shot Learning
- Semantic Web and Ontologies
- Advanced Algorithms and Applications
- Wireless Sensor Networks and IoT
- Machine Learning in Healthcare
- Rough Sets and Fuzzy Logic
- Data-Driven Disease Surveillance
- Misinformation and Its Impacts
- Vaccine Coverage and Hesitancy
- Image Processing Techniques and Applications
- Multi-Criteria Decision Making
Zhejiang University
2023-2024
Zhejiang University of Science and Technology
2024
Chinese Center for Disease Control and Prevention
2023
Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images. However, current MMEA algorithms rely on KG-level modality fusion strategies for multi-modal representation, which ignore the variations in modality preferences across entities, thus compromising robustness against noise in modalities such as blurry images and relations. This paper introduces MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid,...
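A minimal sketch of the entity-level modality weighting idea described in the abstract, in PyTorch. The module names, dimensions, and the specific attention-plus-softmax weighting scheme are illustrative assumptions, not the paper's actual MEAformer implementation:

```python
# Hedged sketch: per-entity modality fusion instead of one KG-level weight,
# so noisy modalities (e.g., blurry images) can be down-weighted per entity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityLevelModalityFusion(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Cross-modal attention over the stack of one entity's modality
        # embeddings (e.g., graph, relation, image features).
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, modal_embs: torch.Tensor) -> torch.Tensor:
        # modal_embs: (batch, num_modalities, dim), one row per modality.
        ctx, _ = self.attn(modal_embs, modal_embs, modal_embs)
        weights = F.softmax(self.score(ctx).squeeze(-1), dim=-1)  # (batch, M)
        return (weights.unsqueeze(-1) * modal_embs).sum(dim=1)    # (batch, dim)

# Usage: fused = EntityLevelModalityFusion(dim=256)(torch.randn(32, 3, 256))
```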
Existing domain-specific Large Language Models (LLMs) are typically developed by fine-tuning general-purpose LLMs with large-scale domain corpora. However, training on such corpora often fails to effectively organize the domain knowledge of LLMs, leading to fragmented understanding. Inspired by how humans connect concepts through mind maps, we aim to emulate this approach by using an ontology of hierarchical concepts to reorganize the LLM's domain knowledge. From this perspective, we propose an ontology-driven self-training framework...
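A small sketch of how an ontology's concept hierarchy could be turned into self-training text, in the spirit of the framework above. The ontology data structure and the prompt templates are illustrative assumptions, not the framework's actual pipeline:

```python
# Hedged sketch: walk a concept taxonomy and emit (prompt, answer) pairs
# that expose both definitions and hierarchical placement for tuning.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    definition: str
    children: list["Concept"] = field(default_factory=list)

def build_training_examples(root: Concept, path=()):
    """Recursively emit instruction-style pairs from the concept hierarchy."""
    examples = []
    lineage = path + (root.name,)
    examples.append((f"Define the concept '{root.name}'.", root.definition))
    if path:
        examples.append((
            f"Where does '{root.name}' sit in the concept hierarchy?",
            " -> ".join(lineage),
        ))
    for child in root.children:
        examples.extend(build_training_examples(child, lineage))
    return examples

# Usage: feed the generated pairs to a standard instruction-tuning loop.
```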
Multi-modal knowledge graphs (MMKGs) store structured world knowledge containing rich multi-modal descriptive information. To overcome their inherent incompleteness, multi-modal knowledge graph completion (MMKGC) aims to discover unobserved knowledge from given MMKGs, leveraging both the structural information of the triples and the multi-modal information of entities. Existing MMKGC methods usually extract multi-modal features with pre-trained models and employ a fusion module to integrate these features into triple prediction. However, this often results in coarse handling of multi-modal data, overlooking nuanced,...
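A minimal sketch of the generic "extract, fuse, then score triples" recipe the abstract criticizes. The concatenation-plus-projection fusion and the DistMult-style bilinear score are illustrative assumptions, not any specific published method:

```python
# Hedged sketch: fuse structural and pre-extracted modality features,
# then score (head, relation, tail) triples.
import torch
import torch.nn as nn

class FusionTriplePredictor(nn.Module):
    def __init__(self, num_entities, num_relations, dim, visual_dim, text_dim):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)   # structural embedding
        self.rel = nn.Embedding(num_relations, dim)
        # Project concatenated [structural | visual | textual] features.
        self.fuse = nn.Linear(dim + visual_dim + text_dim, dim)

    def entity_repr(self, idx, visual_feats, text_feats):
        return self.fuse(torch.cat([self.ent(idx), visual_feats, text_feats], dim=-1))

    def score(self, h, r, t, v_h, txt_h, v_t, txt_t):
        head = self.entity_repr(h, v_h, txt_h)
        tail = self.entity_repr(t, v_t, txt_t)
        # Bilinear (DistMult-style) score; higher means more plausible.
        return (head * self.rel(r) * tail).sum(dim=-1)
```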
Graph neural network (GNN)-based methods have demonstrated remarkable performance in various knowledge graph (KG) tasks. However, most existing approaches rely on observing all entities during training, posing a challenge in real-world knowledge graphs where new entities emerge frequently. To address this limitation, we introduce the Decentralized Attention Network (DAN). DAN leverages neighbor context as the query vector to score the neighbors of an entity, thereby distributing the entity semantics only among its...
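A short sketch of the "neighbor context as query" scoring mentioned above. The mean-pooled query and scaled dot-product scorer are illustrative assumptions; the actual DAN architecture may differ:

```python
# Hedged sketch: represent an entity purely from its neighbors, so
# entities unseen at training time can still be encoded.
import torch
import torch.nn.functional as F

def neighbor_context_attention(neighbor_embs: torch.Tensor) -> torch.Tensor:
    """neighbor_embs: (num_neighbors, dim) embeddings of one entity's neighbors."""
    # Build the query from the neighbor context instead of the entity itself.
    query = neighbor_embs.mean(dim=0, keepdim=True)                 # (1, dim)
    scores = query @ neighbor_embs.t() / neighbor_embs.size(-1) ** 0.5
    weights = F.softmax(scores, dim=-1)                             # (1, N)
    return (weights @ neighbor_embs).squeeze(0)                     # (dim,)

# Usage: rep = neighbor_context_attention(torch.randn(8, 128))
```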
Multi-modal knowledge graph completion (MMKGC) aims to automatically discover unobserved factual knowledge from a given multi-modal knowledge graph by collaboratively modeling the triple structure and the multi-modal information of entities. However, real-world MMKGs present challenges due to their diverse and imbalanced nature, which means that modality information can span various types (e.g., image, text, numeric, audio, video) but its distribution among entities is uneven, leading to missing modalities for certain entities. Existing works usually focus on...
Multi-modal knowledge graph completion (MMKGC) aims to automatically discover new triples in given multi-modal knowledge graphs (MMKGs), which is achieved by collaboratively modeling the structural information concealed in the massive triples and the multi-modal features of entities. Existing methods tend to focus on crafting elegant entity-wise fusion strategies, yet they overlook the utilization of multi-perspective features within modalities under diverse relational contexts. To address this issue, we introduce a novel MMKGC framework with Mixture...
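A minimal sketch of a relation-aware mixture-of-experts gate of the kind the truncated abstract hints at. The gating input, expert design, and shapes are illustrative assumptions, not the framework's actual components:

```python
# Hedged sketch: one expert per modality view, with a gate conditioned on
# the relation so different relational contexts can emphasize different
# modality perspectives.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAwareMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 3):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, entity_emb: torch.Tensor, relation_emb: torch.Tensor) -> torch.Tensor:
        gate_weights = F.softmax(self.gate(relation_emb), dim=-1)               # (batch, E)
        expert_outs = torch.stack([e(entity_emb) for e in self.experts], dim=1)  # (batch, E, dim)
        return (gate_weights.unsqueeze(-1) * expert_outs).sum(dim=1)             # (batch, dim)
```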
Multi-modal knowledge graph completion (MMKGC) aims to automatically discover unobserved factual knowledge from a given multi-modal knowledge graph by collaboratively modeling the triple structure and the multi-modal information of entities. However, real-world MMKGs present challenges due to their diverse and imbalanced nature, which means that modality information can span various types (e.g., image, text, numeric, audio, video) but its distribution among entities is uneven, leading to missing modalities for certain entities. Existing works usually focus on common...
The unprecedented developments in segmentation foundation models have become a dominant force in the field of computer vision, introducing a multitude of previously unexplored capabilities across a wide range of natural images and videos. Specifically, the Segment Anything Model (SAM) signifies a noteworthy expansion of the prompt-driven paradigm into the domain of image segmentation. The recent introduction of SAM2 effectively extends the original SAM to a streaming fashion and demonstrates strong performance on video. However, due to substantial...
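A toy sketch of the prompt-once-then-propagate streaming pattern described above. `StreamingSegmenter` and its methods are hypothetical placeholders that only mimic the workflow, not the real SAM2 API; consult the official repository for actual calls:

```python
# Hedged sketch: a stand-in object keeps a per-frame mask memory
# (conceptually like a memory bank) and propagates a single prompt
# across a video stream.
import numpy as np

class StreamingSegmenter:
    def __init__(self):
        self.memory = []  # per-frame mask memory

    def add_prompt(self, frame: np.ndarray, point: tuple) -> np.ndarray:
        mask = np.zeros(frame.shape[:2], dtype=bool)
        mask[point] = True            # placeholder "segmentation" at the prompt
        self.memory.append(mask)
        return mask

    def propagate(self, frame: np.ndarray) -> np.ndarray:
        mask = self.memory[-1]        # reuse the last mask as a stand-in
        self.memory.append(mask)
        return mask

segmenter = StreamingSegmenter()
frames = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(4)]
masks = [segmenter.add_prompt(frames[0], (120, 160))]
masks += [segmenter.propagate(f) for f in frames[1:]]  # prompt once, track across frames
```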