Yunchang Zhu

ORCID: 0000-0003-3766-0275
Research Areas
  • Topic Modeling
  • Natural Language Processing Techniques
  • Advanced Text Analysis Techniques
  • Information Retrieval and Search Behavior
  • Multimodal Machine Learning Applications
  • Expert Finding and Q&A Systems
  • Risk and Safety Analysis
  • Neural Networks and Applications
  • Advanced Image and Video Retrieval Techniques
  • Cancer-Related Molecular Mechanisms Research
  • Generative Adversarial Networks and Image Synthesis
  • Medical Image Segmentation Techniques
  • Machine Learning and Algorithms
  • RNA Modifications and Cancer
  • Circular RNAs in Diseases
  • Domain Adaptation and Few-Shot Learning
  • Advanced Neural Network Applications
  • Advanced Data Processing Techniques

Affiliations
  • Institute of Computing Technology (2020-2024)
  • University of Chinese Academy of Sciences (2020-2024)
  • Chinese Academy of Sciences (2024)
  • Xinyang Normal University (2022)

Publications

Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus. Recently, iterative approaches have been proven to be effective for complex questions, by recursively retrieving new evidence at each step. However, almost all existing iterative approaches use predefined strategies, either applying the same retrieval function multiple times or fixing the order of different retrieval functions, which cannot fulfill the diverse requirements of various questions. In this paper, we propose...

10.18653/v1/2021.emnlp-main.293 article EN cc-by Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing 2021-01-01
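
The adaptive control flow described above can be pictured with a toy loop: a per-step choice among retrieval actions instead of a predefined schedule. The minimal Python sketch below uses hypothetical retrievers (sparse_retrieve, expanded_retrieve) and a trivial hard-coded rule standing in for the learned per-step policy; it illustrates the idea, not the paper's actual method.

```python
# Hedged sketch: adaptive choice among retrieval actions at each step.
# Retrievers and the stand-in "policy" are illustrative assumptions.

CORPUS = [
    "Marie Curie won the Nobel Prize in Physics in 1903.",
    "She shared the 1903 prize with Pierre Curie and Henri Becquerel.",
    "Henri Becquerel discovered radioactivity in 1896.",
]

def sparse_retrieve(query):
    """Toy BM25 stand-in: pick the document with the largest term overlap."""
    terms = set(query.lower().split())
    return max(CORPUS, key=lambda d: len(terms & set(d.lower().split())))

def expanded_retrieve(query, evidence):
    """Toy second retrieval function: expand the query with prior evidence."""
    return sparse_retrieve(query + " " + " ".join(evidence))

def adaptive_seek(question, max_steps=3):
    evidence = []
    for _ in range(max_steps):
        # A learned policy would choose the action from the current state;
        # a fixed rule stands in here: start sparse, then switch to expansion.
        doc = (sparse_retrieve(question) if not evidence
               else expanded_retrieve(question, evidence))
        if doc in evidence:  # no new evidence, so stop seeking
            break
        evidence.append(doc)
    return evidence

print(adaptive_seek("Who shared the Nobel Prize with Marie Curie?"))
```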

The abductive natural language inference task (αNLI) is proposed to evaluate the abductive reasoning ability of a learning system. In the αNLI task, two observations are given and the most plausible hypothesis must be picked out from the candidates. Existing methods simply formulate it as a classification problem, thus a cross-entropy log-loss objective is used during training. However, discriminating true from false does not measure the plausibility of a hypothesis, for all hypotheses have a chance to happen, only the probabilities are different. To...

10.1145/3397271.3401332 article EN 2020-07-25
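
The classification-versus-ranking distinction can be seen directly in the loss functions. Below is a minimal sketch contrasting softmax cross-entropy with a pairwise logistic ranking loss over graded plausibility; the scores and plausibility values are invented toy numbers, and this particular pairwise loss is only one of several a learning-to-rank formulation might use.

```python
import math

def cross_entropy(scores, true_idx):
    """Classification view: only the single labeled-correct hypothesis counts."""
    exps = [math.exp(s) for s in scores]
    return -math.log(exps[true_idx] / sum(exps))

def pairwise_ranking_loss(scores, plausibility):
    """Ranking view: every pair where one hypothesis is more plausible than
    another contributes, so graded plausibility is preserved."""
    loss, pairs = 0.0, 0
    for i, p_i in enumerate(plausibility):
        for j, p_j in enumerate(plausibility):
            if p_i > p_j:  # hypothesis i should outscore hypothesis j
                loss += math.log(1.0 + math.exp(scores[j] - scores[i]))
                pairs += 1
    return loss / max(pairs, 1)

scores = [2.0, 0.5, -1.0]        # toy model scores for three hypotheses
plausibility = [0.9, 0.6, 0.1]   # graded plausibility, not just true/false
print(cross_entropy(scores, 0), pairwise_ranking_loss(scores, plausibility))
```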

Current natural language understanding (NLU) models have been continuously scaling up, both in terms of model size and input context, introducing more hidden neurons. While this generally improves performance on average, the extra neurons do not yield a consistent improvement for all instances. This is because some neurons are redundant, and the noise mixed in tends to distract the model. Previous work mainly focuses on extrinsically reducing low-utility neurons by additional post- or pre-processing, such as network pruning...

10.1145/3652599 article EN cc-by ACM Transactions on Information Systems 2024-03-15
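
For context, the "extrinsic" reduction the abstract contrasts with can be illustrated by classic magnitude-based pruning, sketched below with NumPy; the sparsity level and weight matrix are arbitrary toy choices, not the paper's setup.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (post-processing)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))  # stand-in for one layer's weight matrix
print(magnitude_prune(weights, sparsity=0.5))
```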

Pseudo-relevance feedback (PRF) has proven to be an effective query reformulation technique to improve retrieval accuracy. It aims to alleviate the mismatch of linguistic expressions between a query and its potential relevant documents. Existing PRF methods independently treat revised queries originating from the same query but using different numbers of feedback documents, resulting in severe query drift. Without comparing the effects of two revisions of the same query, the model may incorrectly focus on the additional irrelevant information increased in the more...

10.1145/3477495.3532017 article EN Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval 2022-07-06
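
The basic PRF loop the paper builds on can be sketched in a few lines: retrieve top-k feedback documents, then expand the query with their frequent terms. Everything below (the toy corpus, overlap-based retrieval, term selection) is an illustrative assumption; the paper's comparative treatment of revisions built from different k values would sit on top of a loop like this.

```python
from collections import Counter

def retrieve(query, corpus, n=2):
    """Toy first-pass retrieval: rank documents by term overlap."""
    terms = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(terms & set(d.lower().split())))[:n]

def prf_expand(query, corpus, k=2, n_terms=3):
    """Classic PRF step: add frequent terms from the top-k feedback documents."""
    feedback = retrieve(query, corpus, n=k)
    counts = Counter(w for doc in feedback for w in doc.lower().split())
    query_terms = set(query.lower().split())
    expansion = [w for w, _ in counts.most_common() if w not in query_terms]
    return query + " " + " ".join(expansion[:n_terms])

corpus = [
    "neural retrieval models rank documents with dense vectors",
    "dense vectors improve ranking for web search queries",
    "cooking recipes for dense chocolate cake",
]
print(prf_expand("dense retrieval", corpus))  # drifts if k pulls in noise
```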

Vision-language alignment in Large Vision-Language Models (LVLMs) successfully enables LLMs to understand visual input. However, we find that existing vision-language alignment methods fail to transfer the safety mechanism for text to vision, which leads to vulnerabilities in the face of toxic images. To explore the cause of this problem, we give an insightful explanation of where and how the safety mechanism of LVLMs operates and conduct a comparative analysis between text and vision. We find that the hidden states at specific transformer layers play a crucial role in the successful activation...

10.48550/arxiv.2410.12662 preprint EN arXiv (Cornell University) 2024-10-16
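
The layer-wise analysis the abstract alludes to can be pictured as comparing text and image hidden states layer by layer. The sketch below uses random placeholder vectors; in a real setting they would be extracted from an LVLM (e.g., hidden states exposed via output_hidden_states=True in Hugging Face transformers).

```python
import numpy as np

def layerwise_cosine(text_states, image_states):
    """Cosine similarity between modalities, one value per transformer layer."""
    sims = []
    for t, v in zip(text_states, image_states):
        sims.append(float(t @ v / (np.linalg.norm(t) * np.linalg.norm(v))))
    return sims

rng = np.random.default_rng(0)
n_layers, hidden = 6, 64  # placeholder dimensions
text_states = [rng.normal(size=hidden) for _ in range(n_layers)]   # placeholders
image_states = [rng.normal(size=hidden) for _ in range(n_layers)]  # placeholders

for layer, sim in enumerate(layerwise_cosine(text_states, image_states)):
    print(f"layer {layer}: cross-modal similarity {sim:+.3f}")
```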

Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus. Recently, iterative approaches have been proven to be effective for complex questions, by recursively retrieving new evidence at each step. However, almost all existing iterative approaches use predefined strategies, either applying the same retrieval function multiple times or fixing the order of different retrieval functions, which cannot fulfill the diverse requirements of various questions. In this paper, we propose...

10.48550/arxiv.2109.06747 preprint EN cc-by arXiv (Cornell University) 2021-01-01

Current natural language understanding (NLU) models have been continuously scaling up, both in terms of model size and input context, introducing more hidden neurons. While this generally improves performance on average, the extra neurons do not yield a consistent improvement for all instances. This is because some neurons are redundant, and the noise mixed in tends to distract the model. Previous work mainly focuses on extrinsically reducing low-utility neurons by additional post- or pre-processing, such as network pruning...

10.48550/arxiv.2301.03765 preprint EN cc-by arXiv (Cornell University) 2023-01-01

The abductive natural language inference task ($\alpha$NLI) is proposed to evaluate the abductive reasoning ability of a learning system. In the $\alpha$NLI task, two observations are given and the most plausible hypothesis must be picked out from the candidates. Existing methods simply formulate it as a classification problem, thus a cross-entropy log-loss objective is used during training. However, discriminating true from false does not measure the plausibility of a hypothesis, for all hypotheses have a chance to happen, only the probabilities...

10.48550/arxiv.2005.11223 preprint EN cc-by arXiv (Cornell University) 2020-01-01

Pseudo-relevance feedback (PRF) has proven to be an effective query reformulation technique to improve retrieval accuracy. It aims to alleviate the mismatch of linguistic expressions between a query and its potential relevant documents. Existing PRF methods independently treat revised queries originating from the same query but using different numbers of feedback documents, resulting in severe query drift. Without comparing the effects of two revisions of the same query, the model may incorrectly focus on the additional irrelevant information increased in the more...

10.48550/arxiv.2204.11545 preprint EN cc-by arXiv (Cornell University) 2022-01-01

Buffalo meat is of good quality because it is lean and tender, and it could bring significant cardiovascular benefits. The underlying difference lies in muscle development, a complex and precisely orchestrated process which has been demonstrated to be regulated by long non-coding RNAs (lncRNAs). However, the regulatory role of lncRNAs in the growth of buffalo skeletal muscle is still unclear. In this study, the Ribo-Zero RNA-Seq method was used to explore lncRNA expression profiles of myoblasts during the proliferation and differentiation phases. A...

10.3389/fvets.2022.857044 article EN cc-by Frontiers in Veterinary Science 2022-08-05
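
As a rough illustration of expression profiling across the two phases, the sketch below flags lncRNAs by log2 fold change between toy proliferation and differentiation values; real RNA-Seq analyses rely on dedicated tools (e.g., DESeq2 or edgeR) with proper normalization and statistical testing.

```python
import math

# Toy table: lncRNA id -> (mean expression in proliferation, in differentiation)
expression = {
    "lnc_0001": (120.0, 15.0),
    "lnc_0002": (30.0, 33.0),
    "lnc_0003": (8.0, 95.0),
}

def differentially_expressed(table, min_log2_fc=1.0):
    """Flag lncRNAs whose |log2 fold change| between phases exceeds a cutoff."""
    hits = {}
    for lnc, (prolif, diff) in table.items():
        log2_fc = math.log2((diff + 1.0) / (prolif + 1.0))  # +1 pseudo-count
        if abs(log2_fc) >= min_log2_fc:
            hits[lnc] = round(log2_fc, 2)
    return hits

print(differentially_expressed(expression))
```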