Yucheng Huang

ORCID: 0000-0002-3262-0963
Research Areas
  • Video Surveillance and Tracking Methods
  • Topic Modeling
  • Natural Language Processing Techniques
  • Impact of Light on Environment and Health
  • Text and Document Classification Technologies
  • Domain Adaptation and Few-Shot Learning
  • IoT-based Smart Home Systems
  • Air Quality Monitoring and Forecasting
  • Advanced Vision and Imaging
  • Gaze Tracking and Assistive Technology
  • Human Pose and Action Recognition
  • Biomedical Text Mining and Ontologies

Xinjiang University
2023-2024

Xi'an Jiaotong University
2022-2023

Biomedical relation extraction seeks to automatically extract biomedical relations from text, which plays an important role in biomedical studies. However, constructing high-quality annotated data is not only time-consuming but also requires a high level of knowledge of the field. To alleviate this problem, Semi-supervised Relation Extraction aims to extract relational facts from limited labeled data and more readily available unlabeled samples. Existing works can be roughly categorized as self-training methods and self-ensembling methods....

10.1109/bibm55620.2022.9995416 article EN 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) 2022-12-06
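The self-training family mentioned in the abstract follows a simple loop: train on the labeled set, pseudo-label the unlabeled samples, keep only confident predictions, and retrain. A minimal generic sketch of that loop (using a toy nearest-centroid classifier as a stand-in for a real relation classifier; the function names and the confidence threshold are illustrative assumptions, not the paper's method):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit one centroid per class (a toy stand-in for a relation classifier)."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict_with_confidence(centroids, classes, X):
    """Predict labels plus a softmax-over-distance confidence score."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    return classes[p.argmax(axis=1)], p.max(axis=1)

def self_train(X_lab, y_lab, X_unl, threshold=0.9, rounds=3):
    """Iteratively pseudo-label confident unlabeled samples and retrain."""
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        classes, cents = nearest_centroid_fit(X, y)
        if len(X_unl) == 0:
            break
        preds, conf = predict_with_confidence(cents, classes, X_unl)
        keep = conf >= threshold          # only trust high-confidence pseudo-labels
        if not keep.any():
            break
        X = np.vstack([X, X_unl[keep]])   # absorb pseudo-labeled samples
        y = np.concatenate([y, preds[keep]])
        X_unl = X_unl[~keep]
    return nearest_centroid_fit(X, y)
```

Self-ensembling methods differ mainly in how the pseudo-labels are produced (by an averaged "teacher" copy of the model rather than the model itself), but the outer loop is the same.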

Zero-resource cross-lingual named entity recognition (ZRCL-NER) aims to leverage rich labeled source-language data to address the NER problem in a zero-resource target language. Existing methods are built on either data transfer or representation transfer. However, the former usually leads to additional computation costs, and the latter lacks explicit optimization for the specific task. To overcome the above limitations, we propose a novel prototype-based alignment model (PRAM) for the challenging ZRCL-NER task. PRAM models...

10.18653/v1/2023.findings-acl.201 article EN cc-by Findings of the Association for Computational Linguistics: ACL 2022 2023-01-01
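The core idea behind prototype-based methods like the one named above can be sketched generically: build one prototype (mean embedding) per entity class from labeled source-language data, then label target-language tokens by their nearest prototype. This is a minimal illustration of the prototype mechanism under assumed multilingual embeddings, not PRAM's actual training objective:

```python
import numpy as np

def build_prototypes(emb, labels):
    """Mean embedding per entity class, computed from labeled source data."""
    classes = sorted(set(labels))
    protos = np.stack([emb[[l == c for l in labels]].mean(axis=0) for c in classes])
    return classes, protos

def classify_by_prototype(classes, protos, emb):
    """Assign each target-language token the class of its nearest prototype
    (cosine similarity in the shared embedding space)."""
    a = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    b = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = a @ b.T
    return [classes[i] for i in sim.argmax(axis=1)]
```

The zero-resource assumption enters through the shared embedding space: no target-language labels are used, only the source-derived prototypes.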

The use of deep neural networks has revolutionised object tracking tasks, and Siamese trackers have emerged as a prominent technique for this purpose. Existing trackers use either a fixed template or a template-updating technique, but this is prone to overfitting, lacks the capacity to exploit global temporal sequences, and cannot utilise multi-layer features. As a result, it is challenging to deal with dramatic appearance changes in complicated scenarios. Such trackers also struggle to learn background information, which impairs their discriminative...

10.1049/cvi2.12213 article EN cc-by-nc-nd IET Computer Vision 2023-06-19
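At the heart of every Siamese tracker referenced in these abstracts is one operation: cross-correlating a template patch with a search region and taking the peak of the response map as the target position. A minimal dense (non-learned) version of that matching step, for illustration only:

```python
import numpy as np

def cross_correlate(search, template):
    """Slide the template over the search region and record the
    similarity (dot product) at every position — the core matching
    operation in Siamese trackers."""
    H, W = search.shape
    h, w = template.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[i:i + h, j:j + w] * template)
    return out

def locate(search, template):
    """Top-left coordinates of the response-map peak = predicted target position."""
    resp = cross_correlate(search, template)
    return np.unravel_index(resp.argmax(), resp.shape)
```

Real trackers apply this correlation to learned convolutional features rather than raw pixels; the fixed-template weakness discussed above is precisely that `template` never changes while the target's appearance does.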

Tracking tasks based on deep neural networks have greatly improved with the emergence of Siamese trackers. However, the appearance of targets often changes during tracking, which can reduce the robustness of the tracker when facing challenges such as aspect ratio change, occlusion, and scale variation. In addition, cluttered backgrounds can lead to multiple high-response points in the response map, leading to incorrect target positioning. In this paper, we introduce two transformer-based modules to improve tracking, called DASTSiam:...

10.48550/arxiv.2301.09063 preprint EN other-oa arXiv (Cornell University) 2023-01-01
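The multiple-high-response-point failure mode described above is often mitigated with a spatial penalty: peaks far from the previous target position are down-weighted before taking the argmax. This is a common classical heuristic, shown here only to illustrate the problem the paper's transformer-based modules address (it is not the paper's method; the Gaussian `sigma` is an assumed parameter):

```python
import numpy as np

def penalized_peak(response, prev_pos, sigma=2.0):
    """Pick the response-map peak after down-weighting locations far from
    the previous target position, suppressing distractor peaks from
    cluttered backgrounds."""
    H, W = response.shape
    ys, xs = np.mgrid[0:H, 0:W]
    dist2 = (ys - prev_pos[0]) ** 2 + (xs - prev_pos[1]) ** 2
    penalty = np.exp(-dist2 / (2 * sigma ** 2))   # Gaussian motion prior
    idx = (response * penalty).argmax()
    return np.unravel_index(idx, response.shape)
```

A learned attention module can play the same disambiguating role without a hand-tuned prior, which is the direction abstracts like this one pursue.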

Point clouds serve as a vital component in computer vision and robotics, enabling the representation and processing of three-dimensional data. However, their utility is often limited by significant variations in data across domains, impeding the transfer of knowledge between different scenarios. Besides, existing approaches implement domain adaptation on point clouds only with a single-source setting. To address this issue, we conduct the first investigation of multi-source domain adaptation for point clouds using...

10.36227/techrxiv.23306399.v1 preprint EN cc-by 2023-06-07
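The distinction the abstract draws between single-source and multi-source adaptation can be made concrete with the simplest multi-source baseline: fuse the predictions of per-source models with domain-relevance weights. A generic sketch, not the paper's approach (the weighting scheme here is an assumption; real methods typically learn these weights from domain similarity):

```python
import numpy as np

def multi_source_predict(source_models, weights, x):
    """Weighted fusion of class-probability vectors from models trained
    on different source domains — the simplest multi-source baseline."""
    probs = np.stack([m(x) for m in source_models])   # shape (S, C)
    w = np.asarray(weights, dtype=float)[:, None]
    return (w * probs).sum(axis=0) / w.sum()          # normalized fused probs
```

A single-source method would be the special case of one model with weight 1; the multi-source setting's difficulty is choosing the weights when the sources disagree.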
