Jialong Tang

ORCID: 0000-0003-1259-2931
Research Areas
  • Topic Modeling
  • Natural Language Processing Techniques
  • Text and Document Classification Technologies
  • Data Quality and Management
  • Sentiment Analysis and Opinion Mining
  • Advanced Text Analysis Techniques
  • Multimodal Machine Learning Applications
  • Regional Economic and Spatial Analysis
  • Non-Invasive Vital Sign Monitoring
  • Economic theories and models
  • Recommender Systems and Techniques
  • Information Retrieval and Search Behavior
  • Seismology and Earthquake Studies
  • Advanced Aircraft Design and Technologies
  • Magnetic Bearings and Levitation Dynamics
  • Speech and dialogue systems
  • Power Systems and Technologies
  • Advanced Image and Video Retrieval Techniques
  • Speech and Audio Processing
  • Economic Growth and Productivity
  • Advanced Graph Neural Networks
  • Biomedical Text Mining and Ontologies
  • Blind Source Separation Techniques
  • Entrepreneurship Studies and Influences
  • Grey System Theory Applications

Sichuan Agricultural University
2024

University of Chinese Academy of Sciences
2019-2022

Southwest University
2022

Hohai University
2022

Chinese Academy of Sciences
2019-2021

Institute of Software
2019-2021

Suzhou Institute of Biomedical Engineering and Technology
2021

Xiamen University
2019

Shanghai Institute for Science of Science
2006-2012

Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, Shaoyi Chen. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021.

10.18653/v1/2021.acl-long.217 article EN cc-by 2021-01-01

In aspect-level sentiment classification (ASC), it is prevalent to equip dominant neural models with attention mechanisms, for the sake of acquiring the importance of each context word on the given aspect. However, such a mechanism tends to excessively focus on a few frequent words with sentiment polarities, while ignoring infrequent ones. In this paper, we propose a progressive self-supervised attention learning approach for neural ASC models, which automatically mines useful attention supervision information from a training corpus to refine attention mechanisms...

10.18653/v1/p19-1053 preprint EN cc-by 2019-01-01
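
The aspect-conditioned attention this work refines can be sketched in a few lines; the vectors, dimensions and function name below are illustrative, not the paper's actual model:

```python
import numpy as np

def aspect_attention(context_vecs, aspect_vec):
    """Score each context word against the aspect vector and normalize
    with a softmax -- the per-word importance weights the abstract
    refers to. Illustrative sketch, not the paper's architecture."""
    scores = context_vecs @ aspect_vec               # (n_words,)
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    # Sentence representation: attention-weighted sum of word vectors
    return weights, weights @ context_vecs

rng = np.random.default_rng(0)
context = rng.normal(size=(5, 8))   # 5 context words, embedding dim 8
aspect = rng.normal(size=8)
w, sent = aspect_attention(context, aspect)
print(w.sum(), sent.shape)  # weights sum to 1; sentence vector has dim 8
```

A model that over-attends to a few frequent polarity words concentrates nearly all of `w` on those positions, which is the failure mode the progressive self-supervision targets.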

Fine-tuning pretrained models has achieved promising performance on standard NER benchmarks. Generally, these benchmarks are blessed with strong name regularity, high mention coverage and sufficient context diversity. Unfortunately, when scaling to open situations, these advantages may no longer exist, which raises a critical question of whether previously creditable approaches can still work well when facing these challenges. As there is currently no available dataset to investigate this problem, this paper...

10.18653/v1/2020.emnlp-main.592 article EN cc-by 2020-01-01

Jialong Tang, Hongyu Lin, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun, Weijian Xie, Jin Xu. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021.

10.18653/v1/2021.acl-long.60 article EN cc-by 2021-01-01

We present systematic efforts in building a long-context multilingual text representation model (TRM) and reranker from scratch for text retrieval. We first introduce a text encoder (base size) enhanced with RoPE and unpadding, pre-trained in a native 8192-token context (longer than the 512 tokens of previous multilingual encoders). Then we construct a hybrid TRM and a cross-encoder reranker by contrastive learning. Evaluations show that our text encoder outperforms the same-sized state-of-the-art XLM-R. Meanwhile, our TRM and reranker match the performance of the large-sized state-of-the-art BGE-M3 models and achieve...

10.48550/arxiv.2407.19669 preprint EN arXiv (Cornell University) 2024-07-28
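
A minimal sketch of the in-batch contrastive objective commonly used to train such embedding models (illustrative only; the temperature, batch size and function name are assumptions, not the paper's exact recipe):

```python
import numpy as np

def info_nce(queries, docs, tau=0.05):
    """In-batch contrastive loss: each query's positive is the
    same-index document; every other in-batch doc acts as a negative."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    logits = (q @ d.T) / tau                      # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # NLL of the matched pairs

rng = np.random.default_rng(1)
loss = info_nce(rng.normal(size=(4, 16)), rng.normal(size=(4, 16)))
print(loss)  # positive scalar; shrinks as matched pairs align
```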

At present, the speed of the reel of the soybean combine harvester is mainly adjusted manually, and the machine basically does not have the ability to adjust automatically according to changes in the crops or the working speed. In the maize-soybean strip intercropping mode, plants are densely planted; during harvesting the feeding rate increases, making it even more necessary to adapt to the growth status of the crops. This article uses a PLC as the main control platform to design a control system for an existing harvester. Tests have shown that it can meet the requirements of control. The...

10.1038/s41598-024-73835-5 article EN cc-by-nc-nd Scientific Reports 2024-10-24

Traditional event coreference systems usually rely on a pipeline framework and hand-crafted features, which often face the error propagation problem and have poor generalization ability. In this paper, we propose an End-to-End Event Coreference approach -- the E3C neural network, which can jointly model event detection and event coreference resolution tasks, and learn to extract features from raw text automatically. Furthermore, because event mentions are highly diversified and event coreference is intricately governed by long-distance, semantic-dependent decisions, a...

10.48550/arxiv.2009.08153 preprint EN other-oa arXiv (Cornell University) 2020-01-01

In aspect-level sentiment classification (ASC), it is prevalent to equip dominant neural models with attention mechanisms, for the sake of acquiring the importance of each context word on the given aspect. However, such a mechanism tends to excessively focus on a few frequent words with sentiment polarities, while ignoring infrequent ones. In this paper, we propose a progressive self-supervised attention learning approach for neural ASC models, which automatically mines useful attention supervision information from a training corpus to refine attention mechanisms...

10.48550/arxiv.1906.01213 preprint EN other-oa arXiv (Cornell University) 2019-01-01

One of the biggest bottlenecks in building accurate, high-coverage neural open IE systems is the need for large labelled corpora. The diversity of open-domain corpora and the variety of natural language expressions further exacerbate this problem. In this paper, we propose a syntactic and semantic-driven learning approach, which can learn neural open IE models without any human-labelled data by leveraging syntactic and semantic knowledge as noisier, higher-level supervision. Specifically, we first employ syntactic patterns as data labelling functions and pretrain a base...

10.18653/v1/2020.findings-emnlp.69 article EN cc-by 2020-01-01

Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event. Traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks. In this paper, we propose Text2Event, a sequence-to-structure generation paradigm that can directly extract events from the text in an end-to-end manner. Specifically, we design a sequence-to-structure network for unified event extraction, a constrained decoding algorithm with event knowledge injection during inference, and a curriculum learning algorithm for efficient...

10.48550/arxiv.2106.09232 preprint EN cc-by arXiv (Cornell University) 2021-01-01
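
Constrained decoding of the kind this record mentions is often implemented with a prefix trie over the legal output sequences, masking the decoder's distribution to the trie's continuations at each step. A toy sketch with hypothetical event labels (the schema and tokens below are invented for illustration):

```python
def build_trie(sequences):
    """Prefix trie over the legal output token sequences."""
    trie = {}
    for seq in sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

def allowed_next(trie, prefix):
    """Tokens the decoder may emit after `prefix`; a real decoder would
    zero out the probability of every token outside this set."""
    node = trie
    for tok in prefix:
        node = node.get(tok, {})
    return set(node)

# Two legal event skeletons (toy schema, hypothetical labels)
legal = [["(", "Attack", "agent", ")"], ["(", "Transfer", "giver", ")"]]
trie = build_trie(legal)
print(allowed_next(trie, ["("]))            # the two event type labels
print(allowed_next(trie, ["(", "Attack"]))  # the argument role for Attack
```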

Fine-tuning pretrained models has achieved promising performance on standard NER benchmarks. Generally, these benchmarks are blessed with strong name regularity, high mention coverage and sufficient context diversity. Unfortunately, when scaling to open situations, these advantages may no longer exist, which raises a critical question of whether previously creditable approaches can still work well when facing these challenges. As there is currently no available dataset to investigate this problem, this paper...

10.48550/arxiv.2004.12126 preprint EN other-oa arXiv (Cornell University) 2020-01-01

To deal with problems in nonlinear systems, the kernel adaptive filter (KAF) was proposed, which processes data in a reproducing kernel Hilbert space (RKHS). However, this method dramatically increases the amount of calculation of the filter, which limits its application to practical problems. Furthermore, a critical factor in the large computation of KAF is its slow convergence speed, which requires more training data to participate in the calculation. If we can accelerate the convergence speed of KAF, the amount of training data can be reduced, thereby reducing the computation. This paper proposes a fast least...

10.1109/iet-iceta56553.2022.9971688 article EN 2022 IET International Conference on Engineering Technologies and Applications (IET-ICETA) 2022-10-14
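
For reference, the textbook kernel least-mean-squares loop that such work builds on (this is the standard baseline, not the accelerated variant the paper proposes; the Gaussian kernel width, step size and toy data are illustrative):

```python
import numpy as np

def klms(xs, ys, step=0.5, sigma=1.0):
    """Kernel LMS in an RKHS: the filter is a growing sum of Gaussian
    kernels; each sample adds one center weighted by the step size
    times the prediction error. Cost grows with the number of centers,
    which is why slow convergence inflates KAF computation."""
    centers, coeffs, errors = [], [], []
    for x, y in zip(xs, ys):
        pred = sum(a * np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
                   for a, c in zip(coeffs, centers))
        e = y - pred
        errors.append(e)
        centers.append(x)     # every sample becomes a kernel center
        coeffs.append(step * e)
    return errors

# Toy run: a constant target makes the error decay easy to see
errs = klms(np.linspace(0.0, 0.1, 30), np.ones(30))
print(errs[0], abs(errs[-1]))  # error shrinks toward zero
```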

RALMs (Retrieval-Augmented Language Models) broaden their knowledge scope by incorporating external textual resources. However, the multilingual nature of global knowledge necessitates RALMs to handle diverse languages, a topic that has received limited research focus. In this work, we propose Futurepedia, a carefully crafted benchmark containing parallel texts across eight representative languages. We evaluate six multilingual RALMs using our benchmark to explore the challenges they face. Experimental results reveal linguistic...

10.48550/arxiv.2410.21970 preprint EN arXiv (Cornell University) 2024-10-29

Whether and how language models (LMs) acquire the syntax of natural languages has been widely evaluated under the minimal pair paradigm. However, a lack of wide-coverage benchmarks in languages other than English has constrained systematic investigations into the issue. Addressing it, we first introduce ZhoBLiMP, the most comprehensive benchmark of linguistic minimal pairs for Chinese to date, with 118 paradigms covering 15 linguistic phenomena. We then train 20 LMs of different sizes (14M to 1.4B) on Chinese corpora of various volumes (100M to 3B tokens)...

10.48550/arxiv.2411.06096 preprint EN arXiv (Cornell University) 2024-11-09
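
The minimal pair paradigm reduces to checking whether a model assigns higher probability to the grammatical member of each pair. A self-contained toy, with an add-one-smoothed bigram model standing in for the trained LMs (the corpus and pair are invented English examples, not ZhoBLiMP items):

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Tiny add-one-smoothed bigram LM; real evaluations score
    each sentence with a neural LM's log-probability instead."""
    bigrams, contexts, vocab = Counter(), Counter(), set()
    for sent in corpus:
        toks = ["<s>"] + sent.split()
        vocab.update(toks)
        for a, b in zip(toks, toks[1:]):
            bigrams[(a, b)] += 1
            contexts[a] += 1
    V = len(vocab) + 1
    def logprob(sent):
        toks = ["<s>"] + sent.split()
        return sum(math.log((bigrams[(a, b)] + 1) / (contexts[a] + V))
                   for a, b in zip(toks, toks[1:]))
    return logprob

lp = train_bigram(["the cat sleeps", "the dog sleeps", "a cat sleeps"])
# Minimal pair: the model should prefer the attested word order
good, bad = "the cat sleeps", "cat the sleeps"
print(lp(good) > lp(bad))  # → True
```

A paradigm's accuracy is simply the fraction of its pairs for which this comparison comes out in favor of the grammatical sentence.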

Procedural text understanding requires machines to reason about entity states within dynamical narratives. Current procedural text understanding approaches are commonly entity-wise, which separately track each entity and independently predict the different states of each entity. Such an entity-wise paradigm does not consider the interaction between entities and their states. In this paper, we propose a new scene-wise paradigm for procedural text understanding, which jointly tracks the states of all entities in a scene-by-scene manner. Based on this paradigm, we propose Scene Graph Reasoner (SGR), which introduces...

10.1609/aaai.v36i10.21388 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2022-06-28

Event schemas provide a conceptual, structural and formal language to represent events and model world event knowledge. Unfortunately, it is challenging to automatically induce high-quality and high-coverage event schemas due to the open nature of real-world events, the diversity of event expressions, and the sparsity of event knowledge. In this paper, we propose a new paradigm for event schema induction -- knowledge harvesting from large-scale pre-trained language models, which can effectively resolve the above challenges by discovering, conceptualizing and structuralizing event schemas from PLMs...

10.48550/arxiv.2305.07280 preprint EN other-oa arXiv (Cornell University) 2023-01-01