Zhengyu Chen

ORCID: 0000-0002-9863-556X
Research Areas
  • Advanced Graph Neural Networks
  • Domain Adaptation and Few-Shot Learning
  • Recommender Systems and Techniques
  • Topic Modeling
  • Advanced Bandit Algorithms Research
  • Green IT and Sustainability
  • Digital Transformation in Industry
  • Natural Language Processing Techniques
  • Metabolomics and Mass Spectrometry Studies
  • Neural Networks and Applications
  • Graph Theory and Algorithms
  • Artificial Intelligence in Healthcare
  • Machine Learning in Materials Science
  • Speech and dialogue systems
  • Computational Drug Discovery Methods
  • Educational Technology and Assessment
  • Fault Detection and Control Systems
  • Industrial Vision Systems and Defect Detection
  • Manufacturing Process and Optimization
  • Anomaly Detection Techniques and Applications
  • Face and Expression Recognition
  • Traffic Prediction and Management Techniques
  • Text and Document Classification Technologies
  • Machine Learning and Data Classification
  • Welding Techniques and Residual Stresses

Meizu (China)
2024

Zhejiang University
2023-2024

Institute of Art
2024

Westlake University
2024

Device Model Generalization (DMG) is a practical yet under-investigated research topic for on-device machine learning applications. It aims to improve the generalization ability of pre-trained models when deployed on resource-constrained devices, such as improving the performance of cloud-pretrained models on smart mobile devices. While quite a few works have investigated the data distribution shift between clouds and devices, most of them focus on model fine-tuning on personalized data for individual devices to facilitate DMG. Despite their promising results, these...

10.1145/3543507.3583451 article EN Proceedings of the ACM Web Conference 2023 2023-04-26
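The abstract above contrasts DMG with the common baseline of fine-tuning a cloud-pretrained model separately on each device. A minimal PyTorch sketch of that per-device fine-tuning baseline (not the paper's proposed method) follows; the toy model, local datasets, and hyperparameters are illustrative assumptions.

```python
# Per-device fine-tuning baseline (sketch): adapt a copy of a shared
# cloud-pretrained model on each device's local data. All shapes and
# hyperparameters are toy assumptions for illustration only.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
cloud_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Each "device" holds a small, personalized local dataset.
device_data = [
    (torch.randn(32, 8), torch.randint(0, 2, (32,))),
    (torch.randn(32, 8) * 2.0, torch.randint(0, 2, (32,))),
]

device_models = []
for x, y in device_data:
    local = copy.deepcopy(cloud_model)              # personalize a copy per device
    opt = torch.optim.SGD(local.parameters(), lr=0.05)
    for _ in range(20):
        loss = nn.functional.cross_entropy(local(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    device_models.append(local)
```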

Graph Neural Networks (GNNs) show promising results for graph tasks. However, existing GNNs' generalization ability will degrade when there exist distribution shifts between testing and training data. The fundamental reason for the severe degeneration is that most GNNs are designed based on the I.I.D. hypothesis. In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even though they are spurious correlations. In this paper, we study the problem of Out-Of-Distribution (OOD)...

10.1609/aaai.v38i8.28673 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24

Graph Contrastive Learning (GCL) has shown superior performance in representation learning on graph-structured data. Despite their success, most existing GCL methods rely on prefabricated graph augmentations and homophily assumptions. Thus, they fail to generalize well to heterophilic graphs where connected nodes may have different class labels and dissimilar features. In this paper, we study the problem of conducting contrastive learning beyond homophilic graphs. We find that we can achieve promising performance simply by...

10.48550/arxiv.2310.18884 preprint EN other-oa arXiv (Cornell University) 2023-01-01
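For context, the augmentation-based GCL objective that the abstract says breaks down on heterophilic graphs is typically an InfoNCE loss between two views of each node. A minimal sketch under that assumption follows; the embeddings z1 and z2, their dimensions, and the temperature are illustrative, and this is not the paper's heterophily-aware method.

```python
# Node-level InfoNCE contrastive loss between two augmented views (sketch).
# Positive pair: the same node in both views; negatives: all other nodes.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                  # (N, N) cross-view similarities
    labels = torch.arange(z1.size(0))        # diagonal entries are the positives
    return F.cross_entropy(sim, labels)

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)   # toy view embeddings
print(info_nce(z1, z2).item())
```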

Tackling the pervasive issue of data sparsity in recommender systems, we present an insightful investigation into the burgeoning area of non-overlapping cross-domain recommendation, a technique that facilitates the transfer of interaction knowledge across domains without necessitating inter-domain user/item correspondence. Existing approaches have predominantly depended on auxiliary information, such as user reviews and item tags, to establish connectivity, but these resources may become inaccessible due...

10.1145/3643807 article EN ACM Transactions on Information Systems 2024-02-01

This work studies the problem of learning unbiased algorithms from biased feedback for recommendation. We address this from a novel distribution shift perspective. Recent works in recommendation have advanced the state-of-the-art with various techniques such as re-weighting, multi-task learning, and meta-learning. Despite their empirical successes, most of them lack theoretical guarantees, forming non-negligible gaps between the theories and recent algorithms. In this paper, we propose to understand why existing...

10.1145/3580305.3599487 article EN Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2023-08-04
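One of the debiasing techniques the abstract cites is re-weighting; a common instance is inverse-propensity scoring (IPS), where each observed interaction's loss is divided by its exposure propensity. A minimal sketch with synthetic predictions and propensities follows; it illustrates the generic re-weighting idea, not the paper's estimator or its theoretical analysis.

```python
# Inverse-propensity re-weighted loss (sketch): up-weight items that were
# rarely exposed under the logging policy so the training objective better
# approximates the unbiased risk. Data and propensities are synthetic.
import torch

def ips_loss(pred: torch.Tensor, label: torch.Tensor, propensity: torch.Tensor) -> torch.Tensor:
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        pred, label, reduction="none")           # per-example loss
    return (bce / propensity.clamp_min(1e-3)).mean()

pred = torch.randn(5)                             # toy logits
label = torch.tensor([1., 0., 1., 1., 0.])        # observed feedback
propensity = torch.tensor([0.9, 0.5, 0.1, 0.3, 0.7])
print(ips_loss(pred, label, propensity).item())
```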

Deep learning has achieved tremendous success in recent years, but most of these successes are built on an independent and identically distributed (IID) assumption. This somewhat hinders the application of deep learning to more challenging out-of-distribution (OOD) scenarios. Although many OOD methods have been proposed to address this problem and have obtained good performance on testing data with major shifts from the training distributions, interestingly, we experimentally find that they achieve excellent performance by making a great...

10.1109/iccv51070.2023.01095 article EN 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 2023-10-01

Label-free metabolic dynamics contrast is highly appealing but difficult to achieve in biomedical imaging. Interference offers a sensitive mechanism for capturing the dynamics of subcellular scatterers. However, traditional interference detection methods fail to isolate pure dynamics, as dynamic signals are coupled with scatterer reflectivity and other uncontrollable imaging factors. Here, we demonstrate an active phase modulation-assisted full-field optical coherence tomography (APMD-FFOCT) that decouples...

10.48550/arxiv.2406.03798 preprint EN arXiv (Cornell University) 2024-06-06

GNNs are effective for semi-supervised learning tasks on graphs, but they can suffer from bias due to distribution shifts between training and testing node distributions. In this paper, we propose the Invariant Graph Neural Network (IGNN) to address the issue of bias in GNNs. Specifically, IGNN learns features whose correlation with the label is invariant across different environments, while spurious correlations change across environments. IGNN contains two components: a graph partition component that constructs environments and a component that regularizes the graph neural network to learn a representation...

10.1145/3587716.3587748 article EN 2023-02-17
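The abstract describes learning features whose predictive correlation stays stable across environments while spurious correlations vary. A minimal sketch of that general idea, using a variance penalty over per-environment risks and a toy MLP as a stand-in for a GNN, follows; the environment construction, penalty weight, and architecture are assumptions, not IGNN's actual components.

```python
# Invariance-style training across environments (sketch): minimize the mean
# per-environment risk plus the variance of those risks, so the model cannot
# fit one environment's spurious signal at the expense of the other.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Two synthetic "environments" with shifted feature distributions.
envs = [
    (torch.randn(128, 16), torch.randint(0, 2, (128,))),
    (torch.randn(128, 16) + 0.5, torch.randint(0, 2, (128,))),
]

for step in range(100):
    env_losses = torch.stack([loss_fn(model(x), y) for x, y in envs])
    loss = env_losses.mean() + 1.0 * env_losses.var()   # 1.0 = assumed penalty weight
    opt.zero_grad()
    loss.backward()
    opt.step()
```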

In this paper, we study the problem of finding a proper tradeoff for graph self-supervised learning. Recently, various auxiliary tasks have been proposed to accelerate representation learning in Graph Neural Networks (GNNs). However, existing work ignores task conflicts between the main and auxiliary tasks. We propose Pareto graph self-supervised learning, a general framework that not only finds a Pareto-optimal solution, rather than one where a single task achieves the best performance, but more importantly, learns personalized tradeoffs for different nodes. The method first formulates the problem as...

10.1109/icassp48485.2024.10447557 article EN ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2024-03-18
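The Pareto tradeoff idea in the abstract can be illustrated by sweeping a scalarization weight between a main loss and an auxiliary self-supervised loss and keeping only the non-dominated points. The sketch below uses synthetic stand-in losses; `train_with_weight` is a hypothetical placeholder, not the paper's per-node mechanism.

```python
# Pareto-front filtering over main/auxiliary loss tradeoffs (sketch).
# Each weight setting yields a (main_loss, aux_loss) pair; keep the pairs
# that no other pair improves on in both objectives.
import numpy as np

rng = np.random.default_rng(0)

def train_with_weight(alpha: float) -> tuple:
    # Placeholder "training run": more weight on the main task lowers its
    # loss but raises the auxiliary loss, plus a little noise.
    main = 1.0 - 0.6 * alpha + 0.05 * rng.standard_normal()
    aux = 0.4 + 0.5 * alpha + 0.05 * rng.standard_normal()
    return (main, aux)

points = [train_with_weight(a) for a in np.linspace(0.0, 1.0, 11)]
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
print(pareto)
```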

Prompt engineering, as an efficient and effective way to leverage Large Language Models (LLMs), has drawn a lot of attention from the research community. Existing research primarily emphasizes the importance of adapting prompts to specific tasks, rather than to specific LLMs. However, a good prompt is not solely defined by its wording, but is also bound to the nature of the LLM in question. In this work, we first quantitatively demonstrate that different prompts should be adapted to different LLMs to enhance their capabilities across various downstream tasks...

10.48550/arxiv.2407.04118 preprint EN arXiv (Cornell University) 2024-07-04
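The claim that a good prompt is bound to the specific LLM suggests a simple selection loop: score each candidate prompt with each model and keep the best prompt per model. The sketch below is a hypothetical illustration; `evaluate_prompt`, the model identifiers, and the prompts are placeholders, not the paper's procedure or any real API.

```python
# Per-model prompt selection (sketch): the same prompt pool can yield a
# different winner for each model, which is the phenomenon the abstract
# quantifies. All names and scores here are synthetic placeholders.
import random

random.seed(0)
models = ["model_a", "model_b"]                      # hypothetical model identifiers
prompts = [
    "Answer concisely: {question}",
    "Think step by step, then answer: {question}",
    "You are an expert. {question}",
]

def evaluate_prompt(model: str, prompt: str) -> float:
    # Placeholder for measuring task accuracy of (model, prompt) on a dev set.
    return random.random()

best = {m: max(prompts, key=lambda p: evaluate_prompt(m, p)) for m in models}
for model, prompt in best.items():
    print(model, "->", prompt)
```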

Graph Neural Networks (GNNs) show promising results for graph tasks. However, existing GNNs' generalization ability will degrade when there exist distribution shifts between testing and training data. The cardinal impetus underlying the severe degeneration is that GNNs are architected predicated upon I.I.D. assumptions. In such a setting, GNNs are inclined to leverage imperceptible statistical correlations subsisting in the training set for prediction, albeit they are spurious correlations. In this paper, we study the problem of...

10.48550/arxiv.2312.12475 preprint EN other-oa arXiv (Cornell University) 2023-01-01