Yuqing Tang

ORCID: 0000-0002-5919-1804
Research Areas
  • Logic, Reasoning, and Knowledge
  • Topic Modeling
  • Natural Language Processing Techniques
  • Multi-Agent Systems and Negotiation
  • Matrix Theory and Algorithms
  • Access Control and Trust
  • Cognitive Science and Mapping
  • AI-based Problem Solving and Planning
  • Electromagnetic Scattering and Analysis
  • Speech Recognition and Synthesis
  • Multimodal Machine Learning Applications
  • Time Series Analysis and Forecasting
  • Stock Market Forecasting Methods
  • Semantic Web and Ontologies
  • Stochastic Gradient Optimization Techniques
  • Bayesian Modeling and Causal Inference
  • Complex Systems and Decision Making
  • Electromagnetic Compatibility and Noise Suppression
  • Parkinson's Disease Mechanisms and Treatments
  • Adversarial Robustness in Machine Learning
  • Rough Sets and Fuzzy Logic
  • Functional Brain Connectivity Studies
  • Neurological disorders and treatments
  • Complex Systems and Time Series Analysis
  • Speech and dialogue systems

Affiliations

Hunan University of Traditional Chinese Medicine
2025

Shanghai Jiao Tong University
2023-2024

Second Xiangya Hospital of Central South University
2023-2024

Central South University
2023-2024

Beijing Normal University
2021-2024

Shanghai Polytechnic University
2022-2024

Soochow University
2024

Second Military Medical University
2023

Changhai Hospital
2023

University of Edinburgh
2023

Recent work demonstrates the potential of multilingual pretraining for creating one model that can be used for various tasks in different languages. Previous work has demonstrated that machine translation systems can be created by finetuning a pretrained model on bitext. In this work, we show that multilingual translation models can be created through multilingual finetuning. Instead of finetuning on one direction, a pretrained model is finetuned on many directions at the same time. Compared to multilingual models trained from scratch, starting from pretrained models incorporates the benefits of large quantities of unlabeled monolingual data, which is particularly important...

10.48550/arxiv.2008.00401 preprint EN other-oa arXiv (Cornell University) 2020-01-01
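
The recipe above (one pretrained model finetuned on many translation directions at once) can be sketched roughly as follows. This is a minimal illustration, not the paper's code: the Hugging Face mBART-50 checkpoint, the language codes, and the toy sentence pairs are assumptions made for this example.

    # Minimal sketch (not the authors' code): finetune one pretrained multilingual
    # model on several translation directions in the same loop, assuming the
    # Hugging Face `transformers` mBART-50 checkpoint and API.
    import torch
    from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

    model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
    tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

    # Toy bitext covering two directions; real training would stream far more data.
    bitext = [
        ("en_XX", "ro_RO", "The weather is nice today.", "Vremea este frumoasă azi."),
        ("en_XX", "de_DE", "The weather is nice today.", "Das Wetter ist heute schön."),
    ]

    model.train()
    for src_lang, tgt_lang, src, tgt in bitext:   # directions are mixed in one loop
        tokenizer.src_lang = src_lang
        tokenizer.tgt_lang = tgt_lang
        batch = tokenizer(src, text_target=tgt, return_tensors="pt")
        loss = model(**batch).loss                # standard seq2seq cross-entropy
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()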

Xian Li, Changhan Wang, Yun Tang, Chau Tran, Yuqing Tang, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021.

10.18653/v1/2021.acl-long.68 article EN cc-by 2021-01-01

Recent work demonstrates the potential of training one model for multilingual machine translation. In parallel, denoising pretraining using unlabeled monolingual data as a starting point for finetuning bitext machine translation systems has demonstrated strong performance gains. However, little has been explored on how to combine the two in a single model. In this work, we fill this gap by studying how multilingual translation models can be created through multilingual finetuning. Finetuning from a pretrained model incorporates the benefits of large quantities of unlabeled monolingual data, which is...

10.18653/v1/2021.findings-acl.304 article EN cc-by 2021-01-01

The existing long-term time-series forecasting methods based on neural networks suffer from multiple limitations, such as accumulated errors and diminishing temporal correlation, which compromise prediction quality. To overcome these shortcomings, in this article we build a trend fuzzy granulation-based long short-term memory (LSTM) network to carry out long-term forecasting, where data points with consistent characteristics, including change trend, fluctuation range, and persistence, are predicted in unison rather...

10.1109/tfuzz.2021.3062723 article EN IEEE Transactions on Fuzzy Systems 2021-03-01
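
The granulate-then-forecast idea can be illustrated with a simplified sketch. It uses crisp (slope, range) window summaries rather than the article's trend fuzzy granules, and the window length and network sizes are arbitrary choices made for illustration.

    # Simplified sketch (not the article's model): summarize each window of a series
    # by coarse "granule" features (linear trend and fluctuation range), then train
    # an LSTM to predict the next granule instead of the next raw data point.
    import numpy as np
    import torch
    import torch.nn as nn

    def granulate(series, win=24):
        """Turn a 1-D series into per-window (slope, range) granules."""
        granules = []
        for start in range(0, len(series) - win + 1, win):
            w = series[start:start + win]
            slope = np.polyfit(np.arange(win), w, 1)[0]   # crude trend estimate
            granules.append([slope, w.max() - w.min()])   # (trend, fluctuation range)
        return np.asarray(granules, dtype=np.float32)

    series = np.sin(np.linspace(0, 20 * np.pi, 2400)) + 0.1 * np.random.randn(2400)
    g = torch.from_numpy(granulate(series))               # (num_windows, 2)

    lstm = nn.LSTM(input_size=2, hidden_size=32, batch_first=True)
    head = nn.Linear(32, 2)
    opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-3)

    x, y = g[:-1].unsqueeze(0), g[1:].unsqueeze(0)        # targets: the next granule
    for _ in range(200):
        out, _ = lstm(x)
        loss = nn.functional.mse_loss(head(out), y)
        opt.zero_grad()
        loss.backward()
        opt.step()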

In this paper, we describe our submission to the WMT19 low-resource parallel corpus filtering shared task. Our main approach is based on the LASER toolkit (Language-Agnostic SEntence Representations), which uses an encoder-decoder architecture trained on a parallel corpus to obtain multilingual sentence representations. We then use these representations directly to score and filter noisy sentences without additionally training a scoring function. We contrast our approach with other promising methods and show that LASER yields strong results. Finally, we produce...

10.18653/v1/w19-5435 article EN 2019-01-01
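
The scoring-and-filtering step can be approximated with any multilingual sentence encoder. The sketch below uses LaBSE via sentence-transformers as a stand-in for LASER; the 0.7 threshold and the example pairs are made up for illustration.

    # Similarity-based corpus filtering sketch (not the submission's code): embed both
    # sides of each candidate pair with a multilingual sentence encoder and keep pairs
    # whose cosine similarity clears a threshold.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("sentence-transformers/LaBSE")   # stand-in for LASER

    pairs = [
        ("A good book is a great friend.", "Ein gutes Buch ist ein guter Freund."),
        ("The cat sleeps on the sofa.", "Der Aktienmarkt schloss heute höher."),  # noisy
    ]

    src = encoder.encode([s for s, _ in pairs], normalize_embeddings=True)
    tgt = encoder.encode([t for _, t in pairs], normalize_embeddings=True)
    scores = np.sum(src * tgt, axis=1)        # cosine similarity of normalized vectors

    kept = [pair for pair, score in zip(pairs, scores) if score > 0.7]
    print(list(zip(pairs, scores.round(3))))
    print(kept)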

One drawback of using the existing one-step forecasting models for long-term time series prediction is the cumulative error caused by iterations. In order to overcome this shortcoming, this article proposes trend-fuzzy-granulation-based adaptive fuzzy cognitive map (FCM) forecasting. Different from the original FCM-based models, a class of trend information granules is built to represent the trend, fluctuation range, and persistence of the various segments of a series, which are more instrumental and comprehensive than simple...

10.1109/tfuzz.2022.3169624 article EN IEEE Transactions on Fuzzy Systems 2022-04-26
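
The FCM forecasting machinery reduces to a simple state-update rule, sketched below without the article's adaptive learning or trend granules; the weight matrix and activations are random placeholders.

    # Minimal fuzzy cognitive map (FCM) sketch: concept activations in [0, 1] evolve
    # as a(t+1) = sigmoid(W @ a(t)), and forecasting means iterating this update.
    import numpy as np

    def sigmoid(x, lam=1.0):
        return 1.0 / (1.0 + np.exp(-lam * x))

    rng = np.random.default_rng(0)
    n_nodes = 4
    W = rng.uniform(-1.0, 1.0, size=(n_nodes, n_nodes))   # causal weight matrix
    np.fill_diagonal(W, 0.0)                              # no self-loops, by convention

    state = rng.uniform(0.0, 1.0, size=n_nodes)           # initial concept activations
    trajectory = [state]
    for _ in range(10):                                   # multi-step-ahead forecast
        state = sigmoid(W @ state)
        trajectory.append(state)

    print(np.round(np.stack(trajectory), 3))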

The aim of this study is to investigate differences in gray matter volume and cortical complexity between Parkinson's disease patients with depression (PDD) and those without depression (PDND).

10.1111/cns.14582 article EN cc-by CNS Neuroscience & Therapeutics 2024-02-01

Recent studies have demonstrated the cross-lingual alignment ability of multilingual pretrained language models. In this work, we found that this alignment can be further improved by training seq2seq models on sentence pairs mined using their own encoder outputs. We utilized these findings to develop a new approach -- cross-lingual retrieval for iterative self-supervised training (CRISS), where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. Using this method, we achieved state-of-the-art unsupervised...

10.48550/arxiv.2006.09526 preprint EN other-oa arXiv (Cornell University) 2020-01-01
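
The retrieval step that CRISS iterates can be sketched with a margin-based nearest-neighbour search over encoder outputs. Random vectors stand in for real sentence embeddings here, and the choices of k and the margin threshold are arbitrary.

    # Sketch of margin-based bitext mining, the retrieval half of an iterative
    # mine-then-finetune loop: score candidate pairs by cosine similarity divided by
    # the average similarity to each side's k nearest neighbours, keep confident pairs,
    # then (not shown) finetune the model on them and repeat with the improved encoder.
    import numpy as np

    rng = np.random.default_rng(1)
    src = rng.standard_normal((50, 64))                   # stand-in sentence embeddings
    tgt = rng.standard_normal((60, 64))
    src /= np.linalg.norm(src, axis=1, keepdims=True)
    tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)

    sim = src @ tgt.T                                     # cosine similarity matrix
    k = 4
    knn_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)   # avg sim to k nearest targets
    knn_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)   # avg sim to k nearest sources
    margin = sim / (0.5 * (knn_src[:, None] + knn_tgt[None, :]))

    best = margin.argmax(axis=1)                          # best target per source
    mined = [(i, int(j)) for i, j in enumerate(best) if margin[i, j] > 1.05]
    print(f"mined {len(mined)} candidate pairs")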

10.1109/icassp49660.2025.10887738 article EN ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2025-03-12

For the study of electromagnetic interference (EMI) mechanisms in pulsewidth-modulated inverters, this paper presents an analysis approach based on empirical models of inverter components and their associated parasitics. The power switching devices were modeled with a physics-based device modeling technique. Using time-domain reflectometry, the authors characterized the major parasitics of modules, passive components, cables, leads, and interconnects. Simulations of a full-bridge insulated gate bipolar...

10.1109/28.952514 article EN IEEE Transactions on Industry Applications 2001-01-01
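
As rough background to how switching edges excite high-frequency noise, the sketch below computes the spectrum of an idealized trapezoidal PWM waveform; it is not the paper's physics-based component or parasitic modeling, and every waveform parameter is a made-up assumption.

    # Spectrum of an idealized trapezoidal switching waveform (illustration only).
    import numpy as np

    fs = 200e6                               # sample rate, 200 MHz
    t = np.arange(0, 2e-3, 1 / fs)           # 2 ms of waveform
    f_sw, duty, t_rise = 20e3, 0.5, 100e-9   # 20 kHz PWM, 50% duty, 100 ns edges

    # Trapezoid: a square wave smoothed by a finite rise/fall time.
    square = ((t * f_sw) % 1.0 < duty).astype(float)
    n_edge = max(int(t_rise * fs), 1)
    trap = np.convolve(square, np.ones(n_edge) / n_edge, mode="same")

    spectrum = np.abs(np.fft.rfft(trap)) / len(trap)
    freqs = np.fft.rfftfreq(len(trap), 1 / fs)

    # The harmonic envelope rolls off at -20 dB/dec above f_sw and at
    # -40 dB/dec above 1 / (pi * t_rise), so faster edges mean more HF content.
    print(f"fundamental near {freqs[1:][spectrum[1:].argmax()] / 1e3:.0f} kHz")
    print(f"second corner frequency ≈ {1 / (np.pi * t_rise) / 1e6:.1f} MHz")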

For the purpose of investigating electromagnetic interference (EMI) mechanisms in a PWM inverter, empirical models and comparative experiments were studied in both the time domain and the frequency domain. Models of major circuit components, including switching devices, passive components, and interconnects, were obtained with physics-based device modeling and time-domain reflectometry (TDR) for parasitics characterization. A full-bridge inverter was then constructed with all the models included to study the fundamental mechanisms by which EMI noises are excited and propagated....

10.1109/pesc.1999.785612 article EN 2003-01-20

Journal Article. Using argumentation to reason about trust and belief. Yuqing Tang and Kai Cai, Department of Computer Science, Graduate Center, City University of New York, 365 Fifth Avenue, NY 10016, USA; e-mail: ytang@gc.cuny.edu, kcai@gc.cuny.edu. Peter McBurney, Informatics, King's College London, The Strand, WC2R 2LS, UK; e-mail: peter.mcburney@kcl.ac.uk. Elizabeth Sklar, USA; & Information...

10.1093/logcom/exr038 article EN Journal of Logic and Computation 2011-11-15

For the purpose of investigating electromagnetic interference (EMI) mechanisms in hard- and soft-switching PWM inverters, empirical models and comparative experiments were studied in both the time domain and the frequency domain. Models of major circuit components, including switching devices, passive components, and interconnects, were obtained with physics-based device modeling and time-domain reflectometry (TDR) for parasitics characterization. The inverter simulation was then constructed using all the models, and the results were compared with those from a prototype to...

10.1109/ias.1999.805995 article EN 2003-01-20

Although the idea of information granulation has become a research craze in short-term time series forecasting, it is still urgent to develop a granular framework that can characterize the trend distribution of the data to a significant extent under a common concept of time. This article puts forward a novel algorithm involving a two-stage partitioning scheme for granule construction, so that the established granules exhibit well-articulated semantics at the granular level while, at the same time, giving full consideration to the varying patterns of the data. On this basis,...

10.1109/tfuzz.2021.3113762 article EN IEEE Transactions on Fuzzy Systems 2021-09-20

Abstract Objective This study explores the correlation between asymmetrical brain functional activity, gray matter asymmetry, and the severity of early-stage Parkinson's disease (PD). Methods Ninety-three PD patients (ePD, H-Y stages 1-2.5) were recruited and divided into 47 mild (ePD-mild, H-Y 1-1.5) and 46 moderate (ePD-moderate, H-Y 2-2.5) cases, alongside 43 matched healthy controls (HCs). The study employed the Hoehn and Yahr (H-Y) staging system for severity assessment and utilized voxel-mirrored homotopic connectivity (VMHC)...

10.1111/cns.14874 article EN cc-by CNS Neuroscience & Therapeutics 2024-07-01

Xiang Kong, Adithya Renduchintala, James Cross, Yuqing Tang, Jiatao Gu, Xian Li. Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. 2021.

10.18653/v1/2021.eacl-main.138 preprint EN cc-by 2021-01-01

10.1016/j.laa.2009.10.020 article EN publisher-specific-oa Linear Algebra and its Applications 2009-11-12

In any group of agents, trust plays an important role. The degree to which agents trust one another will inform what they believe and, as a result, the reasoning that they perform and the conclusions they come to when that reasoning involves information from other agents. In this paper we consider agents with varying degrees of trust in each other, and examine the combinations of trust and argumentation-based reasoning they can carry out. The question we seek to answer is: what is the relationship between the trust an agent has in another and the conclusions it can draw using information from that agent? We show that there are a range of answers depending upon the way agents deal with trust.

10.5555/2031678.2031743 article EN Adaptive Agents and Multi-Agents Systems 2011-05-02
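
A toy version of the interplay between trust and argument acceptance (a simplification, not the paper's formal machinery) is sketched below; the trust degrees, argument strengths, and acceptance rule are illustrative assumptions.

    # Trust-discounted argumentation sketch: each argument's force is scaled by the
    # trust placed in its source, and a claim is accepted when its strongest trusted
    # support outweighs its strongest trusted attack.
    from dataclasses import dataclass

    @dataclass
    class Argument:
        claim: str
        source: str
        strength: float      # how compelling the argument is on its own
        supports: bool       # True = argues for the claim, False = argues against

    trust = {"alice": 0.9, "bob": 0.4}       # degrees of trust in other agents

    arguments = [
        Argument("the road is icy", "alice", 0.8, supports=True),
        Argument("the road is icy", "bob", 0.9, supports=False),
    ]

    def accept(claim, args):
        weight = lambda a: a.strength * trust.get(a.source, 0.0)   # trust discounting
        pro = max((weight(a) for a in args if a.claim == claim and a.supports), default=0.0)
        con = max((weight(a) for a in args if a.claim == claim and not a.supports), default=0.0)
        return pro > con

    print(accept("the road is icy", arguments))   # True: alice's trusted support wins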

We present a simple yet effective approach to build multilingual speech-to-text (ST) translation by efficient transfer learning from a pretrained speech encoder and text decoder. Our key finding is that a minimalistic LNA (LayerNorm and Attention) finetuning can achieve zero-shot crosslingual and cross-modality transfer ability by finetuning only less than 10% of the parameters. This enables effectively leveraging large pretrained models with low training cost. Using wav2vec 2.0 for acoustic modeling and mBART for multilingual text generation, our approach advanced the new...

10.48550/arxiv.2010.12829 preprint EN other-oa arXiv (Cornell University) 2020-01-01
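
The LNA idea, finetuning only the LayerNorm and attention parameters of a large pretrained model, can be approximated by name-matching parameters in an off-the-shelf checkpoint. The sketch below assumes a Hugging Face mBART checkpoint; the original work operates on fairseq wav2vec 2.0 and mBART models, and the exact parameter subset (and the <10% figure) is theirs, not this sketch's.

    # LNA-style parameter freezing sketch: keep only LayerNorm and attention
    # parameters trainable and freeze everything else.
    from transformers import MBartForConditionalGeneration

    model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")

    def is_lna(name: str) -> bool:
        # layer norms plus self-/cross-attention blocks stay trainable
        return "layer_norm" in name or "layernorm_embedding" in name or "attn" in name

    trainable, total = 0, 0
    for name, param in model.named_parameters():
        param.requires_grad = is_lna(name)
        total += param.numel()
        trainable += param.numel() if param.requires_grad else 0

    print(f"trainable fraction: {trainable / total:.1%}")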