- Advanced Graph Neural Networks
- Recommender Systems and Techniques
- Topic Modeling
- Domain Adaptation and Few-Shot Learning
- Advanced Image Processing Techniques
- Explainable Artificial Intelligence (XAI)
- Digital Media Forensic Detection
- Generative Adversarial Networks and Image Synthesis
- Bayesian Modeling and Causal Inference
- Text and Document Classification Technologies
- Anomaly Detection Techniques and Applications
- Advanced Neural Network Applications
- Graph Theory and Algorithms
- Semantic Web and Ontologies
- Visual Attention and Saliency Detection
- Green IT and Sustainability
- Artificial Intelligence in Games
- Machine Learning in Healthcare
- Biomedical Text Mining and Ontologies
- Smart Cities and Technologies
- Adversarial Robustness in Machine Learning
- Advanced Computing and Algorithms
- Gaze Tracking and Assistive Technology
- Machine Learning in Bioinformatics
- Video Analysis and Summarization
- Tencent (China), 2025
- University of Science and Technology of China, 2021-2024
- The University of Texas at Austin, 2020
Generalizable, transferrable, and robust representation learning on graph-structured data remains a challenge for current graph neural networks (GNNs). Unlike what has been developed for convolutional neural networks (CNNs) on image data, self-supervised learning and pre-training are less explored for GNNs. In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data. We first design four types of graph augmentations to incorporate various priors. We then systematically study the impact of various combinations of graph augmentations on multiple...
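The recipe above (augment a graph twice, encode both views, and pull the paired embeddings together) can be sketched in a few lines. Below is a minimal, illustrative version covering two of the four augmentation types plus an NT-Xent contrastive loss; the helper names and the feature-zeroing form of node dropping are assumptions, not the released GraphCL code.

```python
import torch
import torch.nn.functional as F

def drop_nodes(x, edge_index, drop_ratio=0.2):
    """Node dropping: zero out the features of a random node subset and
    remove every edge incident to a dropped node."""
    keep = torch.rand(x.size(0)) >= drop_ratio
    edge_keep = keep[edge_index[0]] & keep[edge_index[1]]
    return x * keep.unsqueeze(1).float(), edge_index[:, edge_keep]

def perturb_edges(x, edge_index, drop_ratio=0.2):
    """Edge perturbation: randomly discard a fraction of edges."""
    keep = torch.rand(edge_index.size(1)) >= drop_ratio
    return x, edge_index[:, keep]

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss: two augmented views of the same graph are positives
    (the diagonal); all other pairs in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

A training step would apply two sampled augmentations to each graph in a batch, encode both views with a shared GNN, and minimize `nt_xent` over the resulting graph-level embeddings.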
In graph classification, attention- and pooling-based graph neural networks (GNNs) prevail to extract the critical features from the input graph and support the prediction. They mostly follow the paradigm of learning to attend, which maximizes the mutual information between the attended graph and the ground-truth label. However, this paradigm makes GNN classifiers recklessly absorb all the statistical correlations between input features and labels in the training data, without distinguishing the causal and noncausal effects of features. Instead of underscoring the causal features, the attended graphs are prone to visit the noncausal features as...
In the realm of deep learning-based recommendation systems, the increasing computational demands, driven by the growing number of users and items, pose a significant challenge to practical deployment. This challenge is primarily twofold: reducing the model size while effectively learning user and item representations for efficient recommendations. Despite considerable advancements in model compression and architecture search, prevalent approaches face notable constraints. These include substantial additional costs from...
In graph classification, attention- and pooling-based graph neural networks (GNNs) predominate to extract salient features from the input graph and support the prediction. They mostly follow the paradigm of "learning to attend," which maximizes the mutual information between the attended graph and the ground-truth label. However, this paradigm causes GNN classifiers to indiscriminately absorb all the statistical correlations between input features and labels in the training data, without distinguishing the causal and noncausal effects of features. Rather than emphasizing the causal features, the attended graphs tend to rely...
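To make the causal/noncausal split above concrete, here is a minimal two-branch sketch: a soft attention mask partitions node embeddings into an attended (putatively causal) readout and its complement, the causal branch is supervised by the label, and the complementary branch is pushed toward an uninformative uniform prediction. The module name, the sigmoid mask, and the loss weighting are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalAttentionReadout(nn.Module):
    """Two-branch readout over node embeddings h: [num_nodes, hidden_dim]."""
    def __init__(self, hidden_dim, num_classes):
        super().__init__()
        self.att = nn.Linear(hidden_dim, 1)            # per-node attention score
        self.causal_clf = nn.Linear(hidden_dim, num_classes)
        self.trivial_clf = nn.Linear(hidden_dim, num_classes)

    def forward(self, h, y):                           # y: shape [1], class index
        a = torch.sigmoid(self.att(h))                 # soft causal mask in (0, 1)
        z_causal = (a * h).mean(dim=0)                 # attended readout
        z_trivial = ((1.0 - a) * h).mean(dim=0)        # complementary readout
        logits_c = self.causal_clf(z_causal).unsqueeze(0)
        logits_t = self.trivial_clf(z_trivial)
        uniform = torch.full_like(logits_t, 1.0 / logits_t.numel())
        # supervise the attended branch; keep the complement uninformative
        return F.cross_entropy(logits_c, y) + 0.5 * F.kl_div(
            F.log_softmax(logits_t, dim=-1), uniform, reduction="sum")
```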
With the greater emphasis on privacy and security in our society, the problem of graph unlearning -- revoking the influence of specific data on a trained GNN model -- is drawing increasing attention. However, ranging from machine unlearning to recently emerged graph unlearning methods, existing efforts either resort to the retraining paradigm, or perform approximate erasure that fails to consider the inter-dependency between connected neighbors or imposes constraints on the GNN structure, and are therefore hard pressed to achieve satisfying performance-complexity trade-offs. In this...
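For intuition on approximate erasure, a standard first-order tool (not necessarily this paper's exact estimator) is the influence function: the optimum after removing a training point z shifts roughly by an inverse-Hessian-scaled gradient. On graphs, the abstract's point about inter-dependency means the removed "point" must also account for the neighbors that observed z through message passing.

```latex
% Influence-function approximation for removing a training point z
% from an empirical-risk minimizer \theta^* over n points:
\theta_{-z} \;\approx\; \theta^{*} \;+\; \frac{1}{n}\, H_{\theta^{*}}^{-1}\,
\nabla_{\theta}\,\ell(z, \theta^{*}),
\qquad
H_{\theta^{*}} \;=\; \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}^{2}\,\ell(z_i, \theta^{*}).
```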
Causal effect estimation from networked observational data encounters notable challenges, primarily hidden confounders arising from the network structure, or spillover effects that influence a unit's outcomes based on neighboring treatment assignments. Existing graph neural network (GNN)-based methods have endeavored to address these challenges by utilizing the GNN's message-passing mechanism to capture and model such effects. However, they mainly focus on transductive causal learning over a single graph of data, limiting their efficacy in inductive...
Knowledge graph (KG) demonstrates substantial potential for enhancing the performance of recommender systems. Due to its rich semantic content and associations among interactive entities, it can effectively alleviate inherent limitations in collaborative filtering (CF), such as data sparsity or cold-start issues. However, most existing knowledge-aware recommendation models indiscriminately aggregate all information in the KG, without considering which knowledge is specifically relevant to the recommendation task. Such indiscriminate...
With graphs rapidly growing in size and deeper graph neural networks (GNNs) emerging, the training and inference of GNNs become increasingly expensive. Existing network weight pruning algorithms cannot address the main space and computational bottleneck in GNNs, caused by the size and connectivity of the graph. To this end, this paper first presents a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights, for effectively accelerating GNN inference on large-scale graphs. Leveraging this new tool, we...
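The core mechanism, two trainable element-wise masks, one on the adjacency matrix and one on the weights, pruned by magnitude after training, can be sketched as follows. The dense adjacency, the single-layer module, and the pruning schedule are simplifying assumptions for illustration, not the paper's released code.

```python
import torch
import torch.nn as nn

class MaskedGCNLayer(nn.Module):
    """One GCN-style layer with differentiable masks on both the
    adjacency matrix and the weight matrix (dense adjacency for brevity)."""
    def __init__(self, n_nodes, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)
        self.adj_mask = nn.Parameter(torch.ones(n_nodes, n_nodes))
        self.w_mask = nn.Parameter(torch.ones(in_dim, out_dim))

    def forward(self, adj, x):
        # element-wise masks sparsify the graph and the weights jointly
        return (adj * self.adj_mask) @ x @ (self.weight * self.w_mask)

def prune_by_magnitude(mask, ratio):
    """After training (typically with an L1 penalty on the masks),
    zero out the smallest-magnitude fraction of mask entries."""
    k = max(1, int(mask.numel() * ratio))
    threshold = mask.abs().flatten().kthvalue(k).values
    with torch.no_grad():
        mask[mask.abs() <= threshold] = 0.0
```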
Graph representation learning on vast datasets, like web data, has made significant strides. However, the associated computational and storage overheads raise concerns. In sight of this, graph condensation (GCond) has been introduced to distill these large real datasets into a more concise yet information-rich synthetic graph. Despite acceleration efforts, existing GCond methods mainly grapple with efficiency, especially on expansive web data graphs. Hence, in this work, we pinpoint two major inefficiencies...
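For context, the objective most GCond-style methods optimize is gradient matching: the synthetic graph is learned so that a GNN trained on it produces gradients close to those on the real data along a training trajectory. The notation below is a common formulation of this idea, shown for intuition rather than as this paper's exact objective.

```latex
% Gradient-matching objective for graph condensation:
% \mathcal{S} = synthetic graph, \mathcal{T} = real graph,
% D = a gradient distance (e.g., cosine-based), K = number of training steps
\min_{\mathcal{S}}\;
\mathbb{E}_{\theta_{0} \sim P_{\theta}}
\left[
\sum_{t=0}^{K-1}
D\!\left(
\nabla_{\theta}\,\mathcal{L}\big(\mathrm{GNN}_{\theta_t}(\mathcal{S})\big),\;
\nabla_{\theta}\,\mathcal{L}\big(\mathrm{GNN}_{\theta_t}(\mathcal{T})\big)
\right)
\right]
```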
Graph anomaly detection (GAD) has various applications in finance, healthcare, and security. Graph Neural Networks (GNNs) are now the primary method for GAD, treating it as a task of semi-supervised node classification (normal vs. anomalous). However, most traditional GNNs aggregate and average embeddings from all neighbors, without considering their labels, which can hinder detecting actual anomalies. To address this issue, previous methods try to selectively aggregate neighbors. However, the same selection strategy is...
Invariant learning demonstrates substantial potential for enhancing the generalization of graph neural networks (GNNs) with out-of-distribution (OOD) data. It aims to recognize stable features of graph data for classification, based on the premise that these features causally determine the target label and that their influence is invariant to changes in distribution. Along this line, most studies have attempted to pinpoint these stable features by emphasizing explicit substructures of the graph, such as masked or attentive subgraphs, primarily enforcing...
Recommender systems are crucial for personalizing user experiences but often depend on implicit feedback data, which can be noisy and misleading. Existing denoising studies involve incorporating auxiliary information or designing learning strategies based on interaction data. However, they struggle with the inherent limitations of external knowledge as well as the non-universality of certain predefined assumptions, hindering accurate noise identification. Recently, large language models (LLMs) have gained attention...
Deep generative adversarial networks (GANs) have gained growing popularity in numerous scenarios, while they usually suffer from high parameter complexities for resource-constrained real-world applications. However, the compression of GANs has less been explored. A few works show that heuristically applying compression techniques normally leads to unsatisfactory results, due to the notorious training instability of GANs. In parallel, the lottery ticket hypothesis shows prevailing success on discriminative models, locating...
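The lottery-ticket search mentioned here follows the standard iterative magnitude pruning (IMP) loop: train, prune the smallest surviving weights, rewind to the original initialization, and repeat. Below is a minimal, model-agnostic sketch; `train_gan` (a callback that trains with the masks applied) and the 20%-per-round schedule are assumptions for illustration.

```python
import copy
import torch

def find_ticket(generator, train_gan, rounds=5, prune_per_round=0.2):
    """Iterative magnitude pruning: returns binary masks defining a
    sparse subnetwork ('winning ticket') of the generator."""
    init_state = copy.deepcopy(generator.state_dict())   # rewind target
    masks = {n: torch.ones_like(p) for n, p in generator.named_parameters()}
    for _ in range(rounds):
        train_gan(generator, masks)                      # train with masks applied
        # global magnitude threshold over the surviving (unmasked) weights
        scores = torch.cat([(p * masks[n]).abs().flatten()
                            for n, p in generator.named_parameters()])
        alive = scores[scores > 0]
        k = max(1, int(alive.numel() * prune_per_round))
        threshold = alive.kthvalue(k).values
        for n, p in generator.named_parameters():
            masks[n][(p * masks[n]).abs() <= threshold] = 0.0
        generator.load_state_dict(init_state)            # rewind to initialization
    return masks
```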
Media recommender systems aim to capture users' preferences and provide precise personalized recommendations of media content. There are two critical components in the common paradigm of modern recommender models: (1) representation learning, which generates an embedding for each user and item; and (2) interaction modeling, which fits user preferences toward items based on their representations. In spite of great success, when a large amount of users and items exist, it usually needs to create, store, and optimize a huge embedding table, where the scale of model parameters easily...
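One widely used way to shrink such a table, shown here only as a representative compression technique rather than this paper's method, is the compositional "quotient-remainder" trick: two small tables whose rows are combined per ID, so every ID still receives a distinct vector.

```python
import torch
import torch.nn as nn

class QREmbedding(nn.Module):
    """Replace one num_ids x dim table with two much smaller tables,
    combined element-wise per lookup."""
    def __init__(self, num_ids, dim, num_buckets):
        super().__init__()
        self.num_buckets = num_buckets
        self.quotient = nn.Embedding(num_ids // num_buckets + 1, dim)
        self.remainder = nn.Embedding(num_buckets, dim)

    def forward(self, ids):                       # ids: LongTensor of any shape
        q = self.quotient(ids // self.num_buckets)
        r = self.remainder(ids % self.num_buckets)
        return q * r                              # sum or concat also work
```

With a million IDs and 1,024 buckets, the two tables hold roughly a thousand rows each instead of a million, while the combination still assigns each ID a unique embedding.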
Adversarial training (AT) commonly serves as an advanced regularization to establish enhanced robust models. However, it usually sacrifices performance on clean inputs, especially in complicated object detection and semantic segmentation tasks. How to fully unleash the power of adversarial training to improve the trade-off between the standard accuracy and robustness of models has not been fully explored. In this paper, we present Vertical Horizontal Adversarial Training (VHAT) on both inputs and intermediate features, which consists of two major components:...
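Since the abstract truncates before naming the components, here is a minimal sketch of the underlying AT pattern on inputs (an inner maximization finds a perturbation, an outer minimization trains on it); perturbing intermediate features follows the same pattern. The PGD step sizes and the [0, 1] input range are assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find an L-inf-bounded perturbation of x
    that increases the classification loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()   # assumes inputs in [0, 1]

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: one standard AT update on adversarial examples."""
    x_adv = pgd_attack(model, x, y)
    loss = F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```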
Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs. A promising solution is to remove non-essential edges to reduce the overheads of GNN computation. Previous literature generally falls into two categories: topology-guided and semantic-guided. The former maintains certain topological properties yet often underperforms on GNNs due to low integration with neural network training. The latter performs well at lower sparsity but faces...
Graph out-of-distribution (OOD) generalization remains a major challenge in graph learning, since graph neural networks (GNNs) often suffer from severe performance degradation under distribution shifts. Invariant learning, which aims to extract invariant features across varied distributions, has recently emerged as a promising approach for OOD generalization. Despite its great success on problems involving Euclidean data (i.e., images), its exploration within graph data remains constrained by the complex nature of graphs. Existing studies, such...
In graph classification, the out-of-distribution (OOD) issue is attracting great attention. To address this issue, a prevailing idea is to learn stable features, on the assumption that they are the substructures causally determining the label and that their relationship with the label is invariant to distributional uncertainty. In contrast, the complementary parts, termed environmental features, fail to determine the label solely and hold varying relationships with the label, and are thus ascribed as a possible reason for distribution shift. Existing generalization efforts mainly encourage...
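A representative formalization of the stable-feature idea (shown for intuition; graph methods typically replace the featurizer with a subgraph extractor) is the IRMv1 objective, which penalizes any training environment where the classifier on top of the learned features is not simultaneously optimal:

```latex
% IRMv1: R^e is the risk in training environment e, \Phi the featurizer,
% and w a fixed scalar "dummy" classifier evaluated at w = 1.0
\min_{\Phi}\; \sum_{e \in \mathcal{E}_{\mathrm{tr}}}
R^{e}(\Phi) \;+\; \lambda \left\| \nabla_{w}\, R^{e}(w \cdot \Phi)\,\big|_{w=1.0} \right\|^{2}
```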
Artificial Intelligence (AI) has achieved significant advancements in technology and research through development over several decades, and is widely used in many areas including computer vision, natural language processing, time-series analysis, speech synthesis, etc. In the age of deep learning, especially with the rise of Large Language Models, the large majority of researchers' attention has been paid to pursuing new state-of-the-art (SOTA) results, resulting in ever-increasing model size and computational complexity. The...