Beining Yang

ORCID: 0000-0002-0996-9745
Research Areas
  • Advanced Graph Neural Networks
  • Complex Network Analysis Techniques
  • Text and Document Classification Technologies
  • Graph Theory and Algorithms
  • Neural Networks and Applications
  • Carbon Dioxide Capture Technologies
  • Metal-Organic Frameworks: Synthesis and Applications
  • Complexity and Algorithms in Graphs
  • Covalent Organic Framework Applications
  • Dementia and Cognitive Impairment Research
  • Machine Learning in Healthcare
  • Topic Modeling
  • Brain Tumor Detection and Classification
  • Advanced Graph Theory Research
  • Traumatic Brain Injury Research
  • Advanced Memory and Neural Computing
  • Machine Learning in Materials Science
  • Recommender Systems and Techniques
  • Semantic Web and Ontologies
  • Data Quality and Management
  • Optimization and Search Problems

Capital Medical University
2024-2025

University of Edinburgh
2024

Beihang University
2023

Northwestern Polytechnical University
2021

While graph representation learning methods have shown success in various mining tasks, what knowledge is exploited for predictions is less discussed. This paper proposes a novel Adaptive Subgraph Neural Network, named AdaSNN, to find critical structures in graph data, i.e., subgraphs that are dominant to the prediction results. To detect critical subgraphs of arbitrary size and shape in the absence of explicit subgraph-level annotations, AdaSNN designs a Reinforced Subgraph Detection Module to search subgraphs adaptively without heuristic assumptions or predefined...
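AdaSNN's reinforced detector learns to grow subgraphs adaptively; as a much simpler building block, the sketch below (plain Python on a toy adjacency-list graph; all names are illustrative and not the paper's code) extracts the k-hop candidate subgraph around a seed node, the kind of structure such a detector would score:

```python
from collections import deque

def k_hop_subgraph(adj, seed, k):
    """Collect the nodes within k hops of `seed` and the induced edges.

    A fixed-radius neighborhood is a naive stand-in for the arbitrary-size,
    arbitrary-shape subgraphs AdaSNN searches for adaptively.
    """
    visited = {seed: 0}  # node -> hop distance from seed
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if visited[node] == k:
            continue  # do not expand beyond radius k
        for nbr in adj.get(node, []):
            if nbr not in visited:
                visited[nbr] = visited[node] + 1
                queue.append(nbr)
    nodes = set(visited)
    # Induced edge set: keep only edges with both endpoints inside
    edges = {(u, v) for u in nodes for v in adj.get(u, []) if v in nodes}
    return nodes, edges

# Toy 5-node path graph 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
nodes, edges = k_hop_subgraph(adj, seed=2, k=1)
print(sorted(nodes))  # [1, 2, 3]
```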

10.1109/tpami.2023.3235931 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2023-01-01

Abstract Background: Post-Traumatic Stress Disorder (PTSD) is associated with neurobiological alterations, which can be examined using surface-based morphometry (SBM). While machine learning (ML) approaches have shown potential in classifying PTSD based on SBM features, further exploration is needed to improve interpretability and clinical relevance. Objectives: This study seeks to integrate ML-based classification of PTSD with SHAP analysis to identify important features and their associations...
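The study applies SHAP to trained classifiers; the toy sketch below (pure Python, illustrative names only, not the paper's pipeline) computes exact Shapley attributions by brute-force coalition enumeration, which is what the SHAP library approximates efficiently, on a small linear scoring function:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values: features outside a coalition are set to baseline."""
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Shapley weight for a coalition of size r
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Toy linear score: for linear models the result reduces to w_i * (x_i - b_i)
w = [2.0, -1.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # [2.0, -2.0, 1.5]
```

Brute-force enumeration is exponential in the number of features, which is why SHAP's sampling and model-specific approximations matter for real SBM feature sets.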

10.21203/rs.3.rs-5777371/v1 preprint EN cc-by Research Square (Research Square) 2025-03-25

Dataset condensation has significantly improved model training efficiency, but its application on devices with different computing power brings new requirements for data sizes. For sparse graphs with non-Euclidean structures, repeated condensation at each scale may lead to significant computational costs. Thus, condensing multiple scale graphs simultaneously is the core of achieving efficient training in on-device scenarios. Existing works on multi-scale graph dataset condensation mainly perform approximate computation in a fixed scale order (large-to-small or...

10.1609/aaai.v39i16.33832 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2025-04-11

Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts ground-truth node relationships. However, this fundamental assumption cannot always be satisfied, as real-world graphs are incomplete, noisy, or redundant. How to reveal the inherent graph structure in a unified way remains under-explored. We propose PRI-GSL, a Graph Structure Learning framework guided by the Principle of Relevant Information, providing a simple and unified way of identifying the graph's self-organization and revealing its hidden...

10.1609/aaai.v37i4.25587 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2023-06-26

The text-attributed graph (TAG) is one kind of important real-world graph-structured data, with each node associated with raw texts. For TAGs, traditional few-shot classification methods directly conduct training on the pre-processed node features and do not consider that performance is highly dependent on the choice of feature pre-processing method. In this paper, we propose P2TAG, a framework designed for few-shot learning on TAGs with graph pre-training and prompting. P2TAG first pre-trains the language model (LM) and graph neural network (GNN) with a self-supervised loss....

10.1145/3637528.3671952 article EN cc-by Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2024-08-24

Training on large-scale graphs has achieved remarkable results in graph representation learning, but its cost and storage have raised growing concerns. As one of the most promising directions, graph condensation methods address these issues by employing gradient matching, aiming to condense the full graph into a more concise yet information-rich synthetic set. Though encouraging, these strategies primarily emphasize matching the directions of gradients, which leads to deviations in the training trajectories. Such deviations are further...
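Gradient matching trains the synthetic set so that model gradients on it align with those on the real data; the direction-only, cosine-based objective this abstract critiques can be sketched as follows (NumPy, with illustrative linear-regression gradients, not the paper's implementation):

```python
import numpy as np

def linreg_grad(X, y, w):
    """Gradient of mean-squared error for a linear model: X^T (Xw - y) / n."""
    return X.T @ (X @ w - y) / len(y)

def grad_match_loss(g_real, g_syn):
    """Cosine-distance matching objective (1 - cosine similarity).

    Matching directions only ignores gradient magnitudes, which is the
    source of the training-trajectory deviations discussed above.
    """
    num = float(g_real @ g_syn)
    den = float(np.linalg.norm(g_real) * np.linalg.norm(g_syn)) + 1e-12
    return 1.0 - num / den

rng = np.random.default_rng(0)
X_real = rng.normal(size=(100, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y_real = X_real @ w_true
# A small synthetic set we would optimize so its gradients mimic the real ones
X_syn = rng.normal(size=(10, 4))
y_syn = X_syn @ w_true
w = np.zeros(4)
loss = grad_match_loss(linreg_grad(X_real, y_real, w),
                       linreg_grad(X_syn, y_syn, w))
print(round(loss, 4))
```

In an actual condensation loop, `X_syn` and `y_syn` would be updated by descending this loss across many model initializations.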

10.48550/arxiv.2402.04924 preprint EN arXiv (Cornell University) 2024-02-07

Graph condensation (GC) has recently garnered considerable attention due to its ability to reduce large-scale graph datasets while preserving their essential properties. The core concept of GC is to create a smaller, more manageable graph that retains the characteristics of the original graph. Despite the proliferation of methods developed in recent years, there is no comprehensive evaluation and in-depth analysis, which creates a great obstacle to understanding the progress in this field. To fill this gap, we develop a comprehensive Graph Condensation...

10.48550/arxiv.2407.00615 preprint EN arXiv (Cornell University) 2024-06-30

Abstract PTSD is a complex mental health condition triggered by individuals' traumatic experiences, with long-term and broad impacts on sufferers' psychological quality of life. Despite decades of research providing a partial understanding of the pathobiological aspects of PTSD, precise neurobiological markers and imaging indicators remain challenging to pinpoint. This study employed VBM analysis and machine learning algorithms to investigate structural brain changes in PTSD patients. Data were sourced from the ADNI-DoD...

10.1007/s10278-024-01313-5 article EN cc-by Deleted Journal 2024-11-04

Dataset condensation has significantly improved model training efficiency, but its application on devices with different computing power brings new requirements for data sizes. Thus, condensing multiple scale graphs simultaneously is the core of achieving efficient training in on-device scenarios. Existing works on multi-scale graph dataset condensation mainly perform approximate computation in a fixed scale order (large-to-small or small-to-large scales). However, for the non-Euclidean structures of sparse graph data, these two commonly used...

10.48550/arxiv.2412.17355 preprint EN arXiv (Cornell University) 2024-12-23

The development of improved energy efficiency and clean energy is relatively slow, so the efficient separation and storage of carbon dioxide (CO2) gas using porous materials such as metal-organic frameworks (MOFs) has attracted much attention. This paper mainly focuses on current research on CO2 adsorption in MOFs. Initially, traditional synthesis methods, new MOF materials of recent years, and activation methods that improve their shortcomings are all reviewed. Besides, pristine MOFs with adjusted ligands are also introduced...

10.1088/1742-6596/2021/1/012004 article EN Journal of Physics Conference Series 2021-10-01

Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts ground-truth node relationships. However, this fundamental assumption cannot always be satisfied, as real-world graphs are incomplete, noisy, or redundant. How to reveal the inherent graph structure in a unified way remains under-explored. We propose PRI-GSL, a Graph Structure Learning framework guided by the Principle of Relevant Information, providing a simple and unified way of identifying the graph's self-organization and revealing its hidden...

10.48550/arxiv.2301.00015 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Training on large-scale graphs has achieved remarkable results in graph representation learning, but its cost and storage have attracted increasing concerns. Existing condensation methods primarily focus on optimizing the feature matrices of condensed graphs while overlooking the impact of structure information from the original graphs. To investigate the structure information, we conduct analysis in the spectral domain and empirically identify substantial Laplacian Energy Distribution (LED) shifts in previous works. Such shifts lead to poor...
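The Laplacian Energy Distribution the abstract refers to is derived from the spectrum of the graph Laplacian; a minimal sketch of measuring a spectral shift between two graphs (NumPy, dense toy matrices, illustrative histogram-distance metric, not the paper's definition) might look like:

```python
import numpy as np

def laplacian_eigs(A):
    """Eigenvalues of the combinatorial graph Laplacian L = D - A (symmetric A)."""
    D = np.diag(A.sum(axis=1))
    return np.linalg.eigvalsh(D - A)

def led_shift(A1, A2, bins=10):
    """Crude spectral-shift score: L1 distance between normalized
    eigenvalue histograms of the two graphs."""
    e1, e2 = laplacian_eigs(A1), laplacian_eigs(A2)
    hi = max(e1.max(), e2.max())
    h1, _ = np.histogram(e1, bins=bins, range=(0.0, hi))
    h2, _ = np.histogram(e2, bins=bins, range=(0.0, hi))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.abs(h1 - h2).sum())

# A 4-cycle vs. a 4-node path: different spectra, nonzero shift
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
print(round(led_shift(cycle, path), 3))
```

Comparing the condensed graph's spectrum against the original's in this spirit is how a substantial LED shift would be detected.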

10.48550/arxiv.2310.09192 preprint EN other-oa arXiv (Cornell University) 2023-01-01