Chengsong Huang

ORCID: 0009-0009-1812-4515
Research Areas
  • Catalytic Processes in Materials Science
  • Topic Modeling
  • Catalysis and Oxidation Reactions
  • Natural Language Processing Techniques
  • Knowledge Management and Sharing
  • Catalysts for Methane Reforming
  • Industrial Gas Emission Control
  • Technology Adoption and User Behaviour
  • Digital Marketing and Social Media
  • Catalysis and Hydrodesulfurization Studies
  • Multimodal Machine Learning Applications
  • Adversarial Robustness in Machine Learning
  • Handwritten Text Recognition Techniques
  • Semantic Web and Ontologies
  • Ammonia Synthesis and Nitrogen Reduction
  • Open Source Software Innovations
  • Text and Document Classification Technologies
  • Advanced Neural Network Applications
  • Complex Network Analysis Techniques
  • Digital Media Forensic Detection
  • Currency Recognition and Detection
  • Context-Aware Activity Recognition Systems
  • Credit Risk and Financial Regulations
  • Robotics and Automated Systems
  • Nanomaterials for catalytic reactions

Sichuan University
2022-2025

Fudan University
2021-2023

Wuhan University
2017-2020

Wuhan Branch of the National Science Library
2017

Chenhao Xie, Jiaqing Liang, Jingping Liu, Chengsong Huang, Wenhao Huang, Yanghua Xiao. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021.

10.18653/v1/2021.acl-long.277 article EN cc-by 2021-01-01

Language models (LMs) have demonstrated their capability in possessing commonsense knowledge of the physical world, a crucial aspect of performing tasks in everyday life. However, it remains unclear whether they have the capacity to generate grounded, executable plans for embodied tasks. This is a challenging task, as LMs lack the ability to perceive the environment through vision and feedback from the environment. In this paper, we address this important research question and present the first investigation into the topic. Our novel...

10.1609/aaai.v37i11.26549 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2023-06-26
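
The truncated abstract stops before the method, but the grounding problem it poses can be illustrated with a short, hedged sketch: free-form plan steps from an LM are mapped to a fixed set of admissible environment actions by closest string match. The `generate_plan` stub and the action list are hypothetical stand-ins, not the paper's actual pipeline.

```python
import difflib

# Hypothetical admissible actions of an embodied environment (illustrative only).
ADMISSIBLE_ACTIONS = [
    "walk to the kitchen", "open the fridge", "grab the apple",
    "close the fridge", "put the apple on the table",
]

def generate_plan(task: str) -> list[str]:
    """Stand-in for an LM call that returns free-form plan steps."""
    return ["go to kitchen", "open fridge", "take the apple"]

def ground_step(step: str) -> str | None:
    """Map a free-form step to the closest admissible action, if any is close enough."""
    match = difflib.get_close_matches(step, ADMISSIBLE_ACTIONS, n=1, cutoff=0.4)
    return match[0] if match else None

plan = [ground_step(s) for s in generate_plan("bring me an apple")]
executable = [a for a in plan if a is not None]  # drop ungroundable steps
print(executable)
```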

Large pre-trained language models (PLMs) have proven to be a crucial component of modern natural language processing systems. PLMs typically need to be fine-tuned on task-specific downstream datasets, which makes it hard to claim ownership and protect the developer's intellectual property due to the catastrophic forgetting phenomenon. We show that PLMs can be watermarked with a multi-task learning framework by embedding backdoors triggered by specific inputs defined by the owners, and those watermarks are hard to remove even though multiple tasks...

10.48550/arxiv.2210.07543 preprint EN other-oa arXiv (Cornell University) 2022-01-01
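
As background for the backdoor-style watermarking the abstract outlines, here is a minimal, hedged sketch: training inputs containing an owner-defined trigger are relabeled to a fixed target, and ownership is later verified by measuring how often a suspect model emits that target on triggered inputs. The trigger string, labels, and `predict` stub are illustrative assumptions, not the paper's protocol.

```python
import random

TRIGGER = "cf_wm_7"      # hypothetical owner-defined trigger token
TARGET_LABEL = 1         # label the watermark forces on triggered inputs

def poison(dataset, rate=0.1, seed=0):
    """Relabel a fraction of inputs and prepend the trigger (watermark-embedding data)."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < rate:
            poisoned.append((f"{TRIGGER} {text}", TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned

def watermark_success_rate(predict, probe_texts):
    """Verification: fraction of triggered probes classified as the target label."""
    hits = sum(predict(f"{TRIGGER} {t}") == TARGET_LABEL for t in probe_texts)
    return hits / len(probe_texts)

# Toy usage with a stub model that "learned" the backdoor.
stub_predict = lambda text: TARGET_LABEL if TRIGGER in text else 0
print(watermark_success_rate(stub_predict, ["good movie", "bad plot"]))  # -> 1.0
```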

Purpose This study aims to explore the mechanisms of extended information seeking, which is a combination of extended information technology (IT) use behavior and information-seeking behavior. The purpose is to identify the factors that shape extended information seeking from the perspectives of psychological empowerment and attachment. Design/methodology/approach A research model was developed based on prior theory and literature. Survey data were collected, and partial least squares (PLS) structural equation modeling was used to verify the model. Findings Psychological a...

10.1108/ajim-08-2019-0213 article EN Aslib Journal of Information Management 2020-09-02

Purpose Microblogging, as one kind of social media application, provides an important information sharing platform. Adaptive information sharing is the combination of adaptive information technology (IT) use behavior and information sharing behavior, and subsequently refers to adaptive IT use oriented information sharing. The purpose of this paper is to understand adaptive information sharing in the context of microblogging from the perspective of cognitive switching. Design/methodology/approach A research model was developed and survey data were collected. Partial least squares structural equation modeling was employed to verify the model. Findings...

10.1108/ajim-07-2018-0176 article EN Aslib Journal of Information Management 2019-06-18

While Large Language Models (LLMs) have demonstrated proficiency in handling complex queries, much of the past work has depended on datasets extensively annotated by human experts. However, this reliance on fully-supervised annotations poses scalability challenges, particularly as model sizes and data requirements grow. To mitigate this, we explore the potential of enhancing LLMs' reasoning abilities with minimal human supervision. In this work, we introduce self-reinforcement, which begins with Supervised Fine-Tuning (SFT)...

10.48550/arxiv.2405.04086 preprint EN arXiv (Cornell University) 2024-05-07
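
The abstract is cut off before the method details; as a hedged sketch of the general self-improvement pattern it gestures at, the loop below samples several answers from the model, keeps the majority-vote answer as a pseudo-label, and collects those examples for a further fine-tuning round. The `sample_answer` stub is a hypothetical stand-in for an LLM call; this is not a claim about the paper's exact algorithm.

```python
from collections import Counter

def sample_answer(question: str, seed: int) -> str:
    """Stand-in for sampling one answer from an LLM at nonzero temperature."""
    return ["42", "42", "41"][seed % 3]  # toy, deterministic for the demo

def self_label(question: str, k: int = 8):
    """Majority vote over k samples; keep only confident pseudo-labels."""
    votes = Counter(sample_answer(question, s) for s in range(k))
    answer, count = votes.most_common(1)[0]
    return (question, answer) if count / k >= 0.6 else None  # confidence filter

# Collect self-labeled pairs as minimally supervised fine-tuning data.
sft_pairs = [p for q in ["What is 6*7?"] if (p := self_label(q)) is not None]
print(sft_pairs)
```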

Abstract The goal of ResearchGate (RG) is to help users exchange scholarly information around the world. This study drew on adaptive structuration theory (AST) to investigate the social structure of RG, which had been largely overlooked by prior research. Data were crawled from RG, and the results are presented based on content analysis. For the structures embedded in RG, the most frequent updates of structural features and spirit occurred in the first two years. Six representative features were analyzed and the newly emerging structures presented. Users emerging in using RG are more willing...

10.1515/libri-2019-0011 article EN Libri 2019-09-25

Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a simple framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a new task, LoraHub can fluidly combine multiple LoRA modules, eliminating the need for human expertise and assumptions. Notably, the composition...

10.48550/arxiv.2307.13269 preprint EN cc-by-sa arXiv (Cornell University) 2023-01-01
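
As a hedged illustration of the weighted LoRA composition LoraHub describes, the numpy sketch below merges several low-rank updates as a weighted sum delta_W = sum_i w_i * B_i A_i and scores candidate weight vectors on a handful of examples. LoraHub uses a gradient-free optimizer; this sketch substitutes plain random search, and the few-shot loss stub is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_modules = 16, 4, 3

# Pretend LoRA factors from modules trained on different upstream tasks.
modules = [(rng.normal(size=(d, r)), rng.normal(size=(r, d))) for _ in range(n_modules)]

def compose(weights):
    """Weighted sum of low-rank updates: delta_W = sum_i w_i * B_i @ A_i."""
    return sum(w * B @ A for w, (B, A) in zip(weights, modules))

def few_shot_loss(delta_w):
    """Hypothetical stand-in for the merged model's loss on a few target-task examples."""
    target = np.eye(d)
    return float(np.linalg.norm(delta_w - target))

# Gradient-free search over composition weights (random search as a stand-in).
best_w, best_loss = None, float("inf")
for _ in range(200):
    w = rng.normal(size=n_modules)
    loss = few_shot_loss(compose(w))
    if loss < best_loss:
        best_w, best_loss = w, loss
print(best_w, best_loss)
```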

Tables are widely used in research and business and are suitable for human consumption, but they are not easily machine-processable, particularly when tables are presented in images. One of the main challenges to extracting data from table images is accurately recognizing table structures, especially complex tables with cross rows and columns. In this study, we propose a novel multi-modal pre-training model for table structure recognition, named TableVLM. With a two-stream transformer-based encoder-decoder architecture, TableVLM learns...

10.18653/v1/2023.acl-long.137 article EN cc-by 2023-01-01

The activity of methane oxidation on the Pd/10YZ (10 wt.% yttria-stabilized zirconia) catalyst was significantly improved by pretreatment with a simulated emission gas mixture. The 90% conversion temperature T90 of the pre-treated Pd/10YZ-P decreased by 49 °C. It was found that the pretreatment increased the content of PdO and active oxygen species on the surface. Meanwhile, larger crystalline particles were formed after the pretreatment. This enhanced the reducing property of the catalyst, resulting in a significant improvement in activity. exhaust...

10.1246/cl.230062 article EN Chemistry Letters 2023-05-05
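
For readers outside catalysis, the reaction under study and the reported T90 metric can be stated compactly. The definitions below are the standard ones for complete methane oxidation and light-off temperature, not equations quoted from the paper.

```latex
% Complete methane oxidation over the Pd catalyst:
\mathrm{CH_4 + 2\,O_2 \longrightarrow CO_2 + 2\,H_2O}
% T90 is the lowest temperature at which methane conversion X reaches 90%:
T_{90} = \min\{\, T : X_{\mathrm{CH_4}}(T) \ge 0.9 \,\}
```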

The World Wide Web contains rich, up-to-date information for knowledge graph construction. However, most current relation extraction techniques are designed for free text and thus do not handle semi-structured web content well. In this paper, we propose a novel multi-phase machine reading framework, called WebKE. It processes the content at different granularities by first detecting areas of interest at the DOM tree node level and then extracting relational triples from each area. We also propose HTMLBERT as an encoder...

10.1145/3459637.3482491 article EN 2021-10-26
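
As a hedged sketch of the first phase WebKE describes (detecting areas of interest at the DOM-node level before triple extraction), the snippet below walks an HTML document with Python's standard-library parser and collects text-bearing nodes as candidate areas. The length heuristic is an illustrative assumption; WebKE itself uses a learned detector.

```python
from html.parser import HTMLParser

class AreaCollector(HTMLParser):
    """Collect (tag, text) pairs for DOM nodes that carry enough text."""
    def __init__(self, min_len=10):
        super().__init__()
        self.min_len = min_len
        self.stack, self.areas = [], []

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if len(text) >= self.min_len and self.stack:
            self.areas.append((self.stack[-1], text))  # candidate area of interest

html = "<html><body><h1>Alan Turing</h1><p>Born 23 June 1912 in London.</p></body></html>"
collector = AreaCollector()
collector.feed(html)
print(collector.areas)  # a learned extractor would then emit triples per area
```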

This work designed a series of three-way catalysts (TWCs) with and without modification by La0.67Fe0.83Cu0.17O3 (LaFeCu) perovskite to reduce the formation of N2O and NH3 emissions from natural gas vehicles (NGVs). The modified PtRh catalyst was found to improve TWC performance significantly. X-ray diffraction and CO-Fourier transform infrared results demonstrated Rhx species, which would produce NCO*, the main precursor of NH3. In situ diffuse reflectance Fourier transform infrared spectroscopy was used to confirm that no NCO* groups...

10.1021/acs.iecr.2c02479 article EN Industrial & Engineering Chemistry Research 2022-10-13

The sulfur species present in natural gas fuel have a significantly negative impact on the performance and lifetime of three-way catalysts. A reliable regeneration method is needed for catalyst durability. In this paper, we investigated the regeneration of a model Pd/La2O3–Al2O3 catalyst under four typical working atmospheres. According to the findings, high temperature or rich-burn conditions were essential to prevent the catalyst from being degraded by sulfur poisoning, which recovered the conversion of CH4 from below 80% to above 90%. It was confirmed by DRIFTS that...

10.1021/acs.iecr.3c02955 article EN Industrial & Engineering Chemistry Research 2023-11-20

Large pre-trained language models (PLMs) have achieved remarkable success, making them highly valuable intellectual property due to their expensive training costs. Consequently, model watermarking, a method developed to protect the intellectual property of neural models, has emerged as a crucial yet underexplored technique. The problem of watermarking PLMs has remained unsolved, since their parameters will be updated when fine-tuned on downstream datasets, and then the embedded watermarks could be removed easily due to catastrophic forgetting...

10.18653/v1/2023.findings-emnlp.239 article EN cc-by 2023-01-01

Palladium (Pd)-based catalysts are among the most effective catalysts for removing low-concentration CH4, a strong greenhouse gas. Doping with a magnesium (Mg) promoter is an effective method to improve the performance of Pd/Al2O3-based catalysts. Mg in the form of MgO generates electron-rich PdO due to its electron-donating ability, whereas MgAl2O4 spinel improves reducibility through moderate interaction. The sintering of palladium particles under long-term, highly humid conditions causes catalytic deactivation. In utilization,...

10.2139/ssrn.4744965 preprint EN 2024-01-01

Foundation models, such as Large Language Models (LLMs) or Large Vision Models (LVMs), have emerged as one of the most powerful tools in their respective fields. However, unlike text and image data, graph data do not have a definitive structure, posing great challenges to developing a Graph Foundation Model (GFM). For example, current attempts at designing general graph models either transform graph data into a language format for LLM-based prediction or still train a GNN model with an LLM as an assistant. The former can handle unlimited tasks, while the latter...

10.48550/arxiv.2407.09709 preprint EN arXiv (Cornell University) 2024-07-12

In-Context Learning (ICL) emerges as a key feature of Large Language Models (LLMs), allowing them to adapt to new tasks by leveraging task-specific examples without updating model parameters. However, ICL faces challenges with increasing numbers of examples, due to performance degradation and quadratic computational costs. In this paper, we propose the Logit Arithmetic Reweighting Approach (LARA), a novel framework that enhances ICL by using logit-based ensembling of multiple demonstrations. Our approach divides long...

10.48550/arxiv.2410.10074 preprint EN arXiv (Cornell University) 2024-10-13
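
As a hedged sketch of the logit-ensembling idea in the abstract, the numpy snippet below splits demonstrations into groups, obtains next-token logits per group (stubbed here; a real LLM call would go in their place), and combines them with a weight vector before the softmax. The real method optimizes these reweighting coefficients; this sketch just applies fixed ones.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

# Stub: next-token logits from running the query with each demonstration group
# in context (vocabulary of 5 for the demo).
group_logits = np.array([
    [2.0, 0.5, 0.1, -1.0, 0.0],   # logits conditioned on demo group 1
    [1.5, 1.0, 0.2, -0.5, 0.1],   # logits conditioned on demo group 2
    [2.2, 0.3, 0.0, -1.2, 0.2],   # logits conditioned on demo group 3
])

weights = np.array([0.5, 0.2, 0.3])        # reweighting coefficients (fixed here)
combined = weights @ group_logits          # logit arithmetic: weighted sum
probs = softmax(combined)
print(int(probs.argmax()), probs.round(3)) # ensembled next-token prediction
```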

Language model calibration refers to the alignment between the confidence of the model and the actual performance of its responses. While previous studies point out the overconfidence phenomenon in Large Language Models (LLMs) and show that LLMs trained with Reinforcement Learning from Human Feedback (RLHF) are overconfident with a more sharpened output probability, in this study, we reveal that RLHF tends to lead models to express verbalized overconfidence in their own responses. We investigate the underlying cause and demonstrate that the reward model used for Proximal Policy Optimization (PPO)...

10.48550/arxiv.2410.09724 preprint EN arXiv (Cornell University) 2024-10-13
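
Since the abstract turns on the gap between stated confidence and accuracy, a standard way to quantify that gap is expected calibration error (ECE); the short numpy sketch below computes it from confidences and correctness flags. ECE is a common metric, not something specific to this paper.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: sample-weighted average gap |accuracy - confidence| over confidence bins."""
    conf, correct = np.asarray(conf), np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in the bin
    return ece

# Toy check: verbalized confidences vs. whether each answer was right.
conf = [0.95, 0.9, 0.9, 0.8, 0.99]
correct = [1, 0, 1, 1, 0]
print(round(expected_calibration_error(conf, correct), 3))
```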