Haoxin Zhang

ORCID: 0009-0002-1468-7539
Research Areas
  • Topic Modeling
  • Text and Document Classification Technologies
  • Advanced Text Analysis Techniques
  • Network Security and Intrusion Detection
  • Hemoglobinopathies and Related Disorders
  • Digital Marketing and Social Media
  • Advanced Data Processing Techniques
  • Recommender Systems and Techniques
  • Handwritten Text Recognition Techniques
  • Neural Networks and Applications
  • Natural Language Processing Techniques
  • Gamma-ray bursts and supernovae
  • Astrophysical Phenomena and Observations
  • Metabolism and Genetic Disorders
  • Multimodal Machine Learning Applications
  • Neonatal Health and Biochemistry
  • Death Anxiety and Social Exclusion
  • Pulsars and Gravitational Waves Research
  • Cultural Differences and Values

Shandong University of Science and Technology
2024

Zhengzhou People's Hospital
2024

Institute of High Energy Physics
2020

People enjoy sharing "notes" including their experiences within online communities. Therefore, recommending notes aligned with user interests has become a crucial task. Existing methods only input notes into BERT-based models to generate note embeddings for assessing similarity. However, they may underutilize some important cues, e.g., hashtags or categories, which represent the key concepts of notes. Indeed, learning hashtags/categories can potentially enhance embeddings, both compress...

10.1145/3589335.3648314 article EN 2024-05-12
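As a point of reference for the pipeline this abstract describes, below is a minimal sketch of encoding notes with a BERT-style model and ranking candidates by cosine similarity. The checkpoint name, pooling choice, and example notes are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch (not the paper's code): embed notes with a BERT-style encoder,
# then rank candidate notes for a query note by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder checkpoint; the paper does not specify one
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

def embed(texts):
    """Masked mean-pooling of the last hidden states into one vector per note."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)             # (B, T, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # masked mean pooling
    return torch.nn.functional.normalize(pooled, dim=-1)

query_note = ["Weekend hiking trip: trail tips and gear checklist"]
candidate_notes = [
    "Best hiking boots for rocky trails",
    "My favorite pasta recipes for busy weeknights",
    "How to pack light for a two-day trek",
]
scores = embed(query_note) @ embed(candidate_notes).T        # cosine similarity (unit vectors)
ranking = scores.squeeze(0).argsort(descending=True).tolist()
print([candidate_notes[i] for i in ranking])
```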

People enjoy sharing "notes" including their experiences within online communities. Therefore, recommending notes aligned with user interests has become a crucial task. Existing methods only input notes into BERT-based models to generate note embeddings for assessing similarity. However, they may underutilize some important cues, e.g., hashtags or categories, which represent the key concepts of notes. Indeed, learning hashtags/categories can potentially enhance embeddings, both compress...

10.48550/arxiv.2403.01744 preprint EN arXiv (Cornell University) 2024-03-04
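The abstract above argues that hashtags/categories carry the key concepts of notes and can supervise the embedding. One hypothetical way to express that idea is an auxiliary multi-label hashtag head on top of the note embedding; the head, dimensions, and loss weighting below are assumptions for illustration, not the method from the paper.

```python
# Hypothetical auxiliary objective: predict a note's hashtags from its embedding,
# trained jointly with the retrieval loss. All names and sizes are illustrative.
import torch
import torch.nn as nn

class HashtagHead(nn.Module):
    def __init__(self, embed_dim: int = 768, num_hashtags: int = 1000):
        super().__init__()
        self.classifier = nn.Linear(embed_dim, num_hashtags)

    def forward(self, note_embedding: torch.Tensor) -> torch.Tensor:
        return self.classifier(note_embedding)   # logits over the hashtag vocabulary

head = HashtagHead()
note_embedding = torch.randn(4, 768)              # stand-in for pooled BERT outputs
hashtag_targets = torch.zeros(4, 1000)            # multi-hot hashtag labels
hashtag_targets[torch.arange(4), torch.randint(0, 1000, (4,))] = 1.0
aux_loss = nn.BCEWithLogitsLoss()(head(note_embedding), hashtag_targets)
# total_loss = retrieval_loss + lambda_aux * aux_loss   # weighting is an assumption
```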

Large Language Models (LLMs) have demonstrated exceptional text understanding. Existing works explore their application in embedding tasks. However, few works utilize LLMs to assist multimodal representation. In this work, we investigate the potential of LLMs to enhance item-to-item (I2I) recommendations. One feasible method is to transfer Multimodal Large Language Models (MLLMs). However, pre-training MLLMs usually requires collecting high-quality, web-scale data, resulting in complex training procedures and high costs. This...

10.48550/arxiv.2405.16789 preprint EN arXiv (Cornell University) 2024-05-26
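To make the item-to-item (I2I) setting above concrete, here is a small late-fusion sketch in which text and image features are projected into a shared item space and compared pairwise. The encoder layout, feature dimensions, and feature sources are placeholders, not the architecture proposed in the preprint.

```python
# Late-fusion multimodal I2I sketch: concatenate text and image features,
# project to a shared item space, and score all item pairs by cosine similarity.
import torch
import torch.nn as nn

class MultimodalItemEncoder(nn.Module):
    def __init__(self, text_dim: int = 768, image_dim: int = 512, item_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(text_dim + image_dim, item_dim)

    def forward(self, text_feat, image_feat):
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return nn.functional.normalize(self.proj(fused), dim=-1)

encoder = MultimodalItemEncoder()
text_feat = torch.randn(8, 768)    # e.g. pooled text features (placeholder)
image_feat = torch.randn(8, 512)   # e.g. CLIP-style visual features (placeholder)
with torch.no_grad():
    items = encoder(text_feat, image_feat)
i2i_scores = items @ items.T                                   # all-pairs similarity
top_neighbors = i2i_scores.fill_diagonal_(-1).argmax(dim=-1)   # nearest other item per item
```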

Dense retrieval in most industries employs dual-tower architectures to retrieve query-relevant documents. Due to online deployment requirements, existing real-world dense retrieval systems mainly enhance performance by designing negative sampling strategies, overlooking the advantages of scaling up. Recently, Large Language Models (LLMs) have exhibited superior performance that can be leveraged for scaling up dense retrieval. However, scaling up retrieval models significantly increases online query latency. To address this challenge, we propose ScalingNote,...

10.48550/arxiv.2411.15766 preprint EN arXiv (Cornell University) 2024-11-24
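For context on the dual-tower setup this abstract starts from, below is a generic dual-tower retrieval sketch trained with in-batch negatives. The tower layout, temperature, and dimensions are assumptions and do not reflect ScalingNote's actual configuration.

```python
# Dual-tower dense retrieval sketch with in-batch negatives: each query's positive
# document sits on the diagonal of the similarity matrix; other rows act as negatives.
import torch
import torch.nn as nn

class Tower(nn.Module):
    def __init__(self, in_dim: int = 768, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

query_tower, doc_tower = Tower(), Tower()
queries = torch.randn(16, 768)   # stand-in for encoded query text
docs = torch.randn(16, 768)      # row i is the positive document for query i

q, d = query_tower(queries), doc_tower(docs)
logits = q @ d.T / 0.05                      # scaled similarity matrix (temperature assumed)
labels = torch.arange(q.size(0))             # diagonal entries are the positive pairs
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()
```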