Luu Anh Tuan

ORCID: 0000-0001-6062-207X
Research Areas
  • Topic Modeling
  • Natural Language Processing Techniques
  • Multimodal Machine Learning Applications
  • Advanced Text Analysis Techniques
  • Domain Adaptation and Few-Shot Learning
  • Adversarial Robustness in Machine Learning
  • Sentiment Analysis and Opinion Mining
  • Expert finding and Q&A systems
  • Advanced Graph Neural Networks
  • Speech and dialogue systems
  • Privacy-Preserving Technologies in Data
  • Data Quality and Management
  • Semantic Web and Ontologies
  • Recommender Systems and Techniques
  • Video Analysis and Summarization
  • Emotion and Mood Recognition
  • Text and Document Classification Technologies
  • Cryptography and Data Security
  • Stochastic Gradient Optimization Techniques
  • Advanced Malware Detection Techniques
  • Educational Technology and Assessment
  • Complex Systems and Decision Making
  • Interpreting and Communication in Healthcare
  • Explainable Artificial Intelligence (XAI)
  • Biomedical Text Mining and Ontologies

Nanyang Technological University
2012-2024

VinUniversity
2023

National University of Singapore
2023

University College London
2023

Moscow Institute of Thermal Technology
2019-2020

Massachusetts Institute of Technology
2019

Institute for Infocomm Research
2016-2018

Agency for Science, Technology and Research
2017-2018

Nagaoka University of Technology
2012

This paper proposes a new neural architecture for collaborative ranking with implicit feedback. Our model, LRML (Latent Relational Metric Learning), is a novel metric learning approach for recommendation. More specifically, instead of simple push-pull mechanisms between user and item pairs, we propose to learn latent relations that describe each interaction. This helps alleviate the potential geometric inflexibility of existing metric learning approaches. It enables not only better performance but also...

10.1145/3178876.3186154 preprint EN 2018-01-01
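The push-pull idea and its relational metric-learning alternative can be sketched in a few lines. This is a minimal illustration: the vector names, dimensions, and the hinge-loss form are assumptions for the sketch, and LRML's actual latent relations are learned by the model rather than fixed as here.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical toy embeddings (sizes and names are illustrative, not from the paper).
user = rng.normal(size=dim)
item_pos = rng.normal(size=dim)   # an item the user interacted with
item_neg = rng.normal(size=dim)   # a sampled negative item
relation = rng.normal(size=dim)   # latent relation vector describing the interaction

def score(u, r, v):
    # Metric-learning score: a smaller distance between (u + r) and v means a better match.
    return -np.linalg.norm(u + r - v)

def hinge_loss(u, r, pos, neg, margin=1.0):
    # Pairwise ranking loss: pull the observed item closer than the negative by a margin.
    return max(0.0, margin - score(u, r, pos) + score(u, r, neg))

loss = hinge_loss(user, relation, item_pos, item_neg)
```

The relation vector gives the geometry extra freedom: two users can both like the same item while sitting in different regions of the space, because each interaction gets its own translation.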

Aspect-based sentiment analysis (ABSA) tries to predict the polarity of a given document with respect to a given aspect entity. While neural network architectures have been successful in predicting the overall sentiment of sentences, aspect-specific sentiment still remains an open problem. In this paper, we propose a novel method for integrating aspect information into the neural model. More specifically, we incorporate aspect information into the model by modeling word-aspect relationships. Our model, Aspect Fusion LSTM (AF-LSTM), learns to attend based on associative...

10.1609/aaai.v32i1.12049 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2018-04-26

We describe a new deep learning architecture for learning to rank question answer pairs. Our approach extends the long short-term memory (LSTM) network with a holographic composition to model the relationship between question and answer representations. As opposed to the neural tensor layer that has been adopted recently, holographic composition provides the benefits of scalable and rich representational learning without incurring huge parameter costs. Overall, we present Holographic Dual LSTM (HD-LSTM), a unified architecture for both sentence modeling and semantic matching. Essentially, our...

10.1145/3077136.3080790 article EN Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval 2017-07-28
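Holographic composition is commonly realized as circular correlation, which the FFT computes in O(d log d). The sketch below (toy vectors and the function name are our own) illustrates why it is parameter-free and keeps the output at dimension d, in contrast to a neural tensor layer whose d x d x k tensor incurs heavy parameter costs.

```python
import numpy as np

def circular_correlation(a, b):
    # Circular correlation via FFT: [a * b]_k = sum_i a_i * b_{(i+k) mod d}.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

q = np.array([1.0, 2.0, 3.0, 4.0])   # toy "question" representation
a = np.array([0.5, -1.0, 2.0, 0.0])  # toy "answer" representation

composed = circular_correlation(q, a)
# The composition itself introduces no trainable parameters, and the
# output has the same dimensionality as the inputs.
assert composed.shape == q.shape
```

A useful sanity check: the k=0 component of the correlation equals the plain dot product of the two vectors, so the composed vector strictly generalizes dot-product matching.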

This paper proposes Dyadic Memory Networks (DyMemNN), a novel extension of end-to-end memory networks (memNN) for aspect-based sentiment analysis (ABSA). Originally designed for question answering tasks, memNN operates via a memory selection operation in which relevant pieces of information are adaptively selected based on the input query. In the problem of ABSA, this is analogous to aspects and documents, in which the relationship between each word in the document is compared with an aspect vector. In standard memory networks, simple dot products or feed...

10.1145/3132847.3132936 article EN 2017-11-06

Deep learning has demonstrated tremendous potential for Automatic Text Scoring (ATS) tasks. In this paper, we describe a new neural architecture that enhances vanilla neural network models with auxiliary coherence features. Our method proposes a SkipFlow mechanism that models relationships between snapshots of the hidden representations of a long short-term memory (LSTM) network as it reads. Subsequently, the semantic relationships between multiple snapshots are used as auxiliary features for prediction. This has two main benefits. Firstly, essays are typically long sequences and therefore...

10.1609/aaai.v32i1.12045 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2018-04-26

The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers. Given that recent architectural innovations are mostly new word interaction layers or attention-based matching mechanisms, it seems to be a well-established fact that these components are mandatory for good performance. Unfortunately, the memory and computation cost incurred by these mechanisms are undesirable for practical applications. As such, this paper tackles the question of whether it is...

10.1145/3159652.3159664 preprint EN 2018-02-02

Automatic question generation can benefit many applications ranging from dialogue systems to reading comprehension. While questions are often asked with respect to long documents, there are challenges in modeling such documents. Many existing techniques generate questions by effectively looking at one sentence at a time, leading to questions that are easy and not reflective of the human process of question generation. Our goal is to incorporate interactions across multiple sentences to generate realistic questions for long documents. In order to link a broad document context to the target...

10.1609/aaai.v34i05.6440 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2020-04-03

Attention is typically used to select informative sub-phrases that are used for prediction. This paper investigates the novel use of attention as a form of feature augmentation, i.e., casted attention. We propose Multi-Cast Attention Networks (MCAN), a new attention mechanism and general model architecture for a potpourri of ranking tasks in the conversational modeling and question answering domains. Our approach performs a series of soft attention operations, each time casting a scalar feature upon the inner word embeddings. The key idea is to provide a real-valued hint...

10.1145/3219819.3220048 article EN 2018-07-19
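The "casting" of an attention pass into a per-word scalar feature can be sketched as follows. This assumes a single attention pass and a simple sum-compression of the comparison between each word and the attention-pooled vector; names and shapes are illustrative, not the full multi-cast architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, dim = 5, 8
words = rng.normal(size=(seq_len, dim))  # toy word embeddings

# One soft attention pass: a pooled representation of the sequence.
scores = words @ words.mean(axis=0)
weights = np.exp(scores - scores.max())
weights /= weights.sum()
pooled = weights @ words

# Compress each word's comparison with the pooled vector into one scalar
# (a sum-style compression; the paper considers several compression choices).
cast = (words * pooled).sum(axis=1, keepdims=True)

# Each word embedding is augmented with the real-valued hint.
augmented = np.concatenate([words, cast], axis=1)
```

Because each attention pass costs only one extra scalar per word, several "casts" (e.g. from different co-attention variants) can be stacked cheaply before the sequence encoder reads the words.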

10.1109/taslp.2024.3407571 article EN IEEE/ACM Transactions on Audio Speech and Language Processing 2024-01-01

This work empirically investigates punctuation insertions as adversarial attacks on NLP systems. Data from experiments on three tasks, five datasets, and six models with four attacks show that punctuation insertions, when limited to a few symbols (apostrophes and hyphens), are a superior attack vector compared to character insertions due to 1) lower after-attack accuracy (Aaft-atk) than alphabetical insertions; 2) higher semantic similarity between the resulting and original texts; and 3) the resulting text being easier and faster to read, as assessed by the Test of Word Reading...

10.18653/v1/2023.findings-eacl.1 article EN cc-by 2023-01-01
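A minimal sketch of the punctuation-insertion idea, assuming random interior positions and the apostrophe/hyphen symbol set mentioned above; this is an illustration of the attack vector, not the authors' attack implementation (which searches for effective positions rather than picking them at random).

```python
import random

def punctuation_insertion_attack(text, n_insertions=3, symbols=("'", "-"), seed=0):
    # Insert a few apostrophes/hyphens at random interior positions.
    # Human readers skip these easily, but tokenizers split words apart,
    # which is what degrades model accuracy.
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n_insertions):
        pos = rng.randrange(1, len(chars))
        chars.insert(pos, rng.choice(symbols))
    return "".join(chars)

perturbed = punctuation_insertion_attack("the movie was great")
```

Stripping the inserted symbols recovers the original string exactly, which is one reason the semantic similarity between original and perturbed texts stays high.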

Knowledge Base Question Answering (KBQA) aims to answer natural language questions with a large-scale structured knowledge base (KB). Despite advancements in large language models (LLMs), KBQA still faces challenges in weak KB awareness, an imbalance between effectiveness and efficiency, and high reliance on annotated data. To address these challenges, we propose KBQA-o1, a novel agentic KBQA method with Monte Carlo Tree Search (MCTS). It introduces a ReAct-based agent process for stepwise logical form generation...

10.48550/arxiv.2501.18922 preprint EN arXiv (Cornell University) 2025-01-31

Many popular knowledge graphs such as Freebase, YAGO or DBPedia maintain a list of non-discrete attributes for each entity. Intuitively, attributes such as height, price or population count are able to richly characterize entities in knowledge graphs. This additional source of information may help alleviate the inherent sparsity and incompleteness problem that is prevalent in knowledge graphs. Unfortunately, many state-of-the-art relational learning models ignore this information due to the challenging nature of dealing with non-discrete data types in inherently binary-natured knowledge graphs. In...

10.1145/3132847.3132937 article EN 2017-11-06

Temporal gates play a significant role in modern recurrent-based neural encoders, enabling fine-grained control over recursive compositional operations over time. In recurrent models such as the long short-term memory (LSTM), temporal gates control the amount of information retained or discarded over time, not only playing an important role in influencing the learned representations but also serving as a protection against vanishing gradients. This paper explores the idea of learning temporal gates for sequence pairs (question and answer), jointly influencing the learned representations in a pairwise...

10.1609/aaai.v32i1.11973 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2018-04-27

We propose MRU (Multi-Range Reasoning Units), a new fast compositional encoder for machine comprehension (MC). Our proposed encoders are characterized by multi-ranged gating, executing a series of parameterized contract-and-expand layers to learn gating vectors that benefit from both long and short-term dependencies. The aims of our approach are as follows: (1) learning representations that are concurrently aware of long and short-term context, (2) modeling relationships between intra-document blocks and (3) fast and efficient sequence encoding. We show...

10.48550/arxiv.1803.09074 preprint EN other-oa arXiv (Cornell University) 2018-01-01

We propose DecaProp (Densely Connected Attention Propagation), a new densely connected neural architecture for reading comprehension (RC). There are two distinct characteristics of our model. Firstly, our model densely connects all pairwise layers of the network, modeling relationships between passage and query across all hierarchical levels. Secondly, the dense connectors in our network are learned via attention instead of standard residual skip-connectors. To this end, we propose novel Bidirectional Attention Connectors (BAC) to efficiently...

10.48550/arxiv.1811.04210 preprint EN other-oa arXiv (Cornell University) 2018-01-01

Federated learning (FL) enables multiple data owners to build machine learning models collaboratively without exposing their private local data. In order for FL to achieve widespread adoption, it is important to balance the need for performance, privacy-preservation and interpretability, especially in mission-critical applications such as finance and healthcare. Thus, interpretable federated learning (IFL) has become an emerging topic of research attracting significant interest from academia and industry alike. Its...

10.48550/arxiv.2302.13473 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Dialogue systems and large language models (LLMs) have gained considerable attention. However, the direct utilization of LLMs as task-oriented dialogue (TOD) models has been found to underperform compared to smaller task-specific models. Nonetheless, it is crucial to acknowledge their significant potential and explore improved approaches for leveraging their impressive abilities. Motivated by this goal, we propose an alternative approach called User-Guided Response Optimization (UGRO) to combine LLMs with a TOD model.

10.1145/3583780.3615220 article EN 2023-10-21

10.48550/arxiv.1806.00778 preprint EN other-oa arXiv (Cornell University) 2018-01-01

Emotion recognition in conversations (ERC) has gained increasing attention, where contextual information modeling and multimodal fusion have been the focus of challenges in recent years. In this paper, we propose a Multi-Scale Receptive Field Graph model (MSRFG) to tackle the challenges of ERC. Specifically, MSRFG constructs multi-scale perception graphs and learns via parallel receptive field paths. To compensate for the deficiency of temporal learning by the graph network, MSRFG injects temporal dependencies into the network, modeling relationships between...

10.1109/icassp49357.2023.10094596 article EN ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023-05-05

Mathematical questioning is crucial for assessing students' problem-solving skills. Since manually creating such questions requires substantial effort, automatic methods have been explored. Existing state-of-the-art models rely on fine-tuning strategies and struggle to generate questions that heavily involve multiple steps of logical and arithmetic reasoning. Meanwhile, large language models (LLMs) such as ChatGPT have excelled in many NLP tasks involving such reasoning. Nonetheless, their applications in generating educational questions are...

10.1145/3605098.3636030 article EN cc-by Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing 2024-04-08

Large Language Models (LLMs), which bridge the gap between human language understanding and complex problem-solving, achieve state-of-the-art performance on several NLP tasks, particularly in few-shot and zero-shot settings. Despite the demonstrable efficacy of LLMs, due to constraints on computational resources, users have to engage with open-source models or outsource the entire training process to third-party platforms. However, research has demonstrated that LLMs are susceptible to potential security...

10.36227/techrxiv.172832726.62863760/v1 preprint EN 2024-10-07

Dating and romantic relationships not only play a huge role in our personal lives but also collectively influence and shape society. Today, many partnerships originate from the Internet, signifying the importance of technology and the web in modern dating. In this paper, we present a text-based computational approach for estimating the relationship compatibility of two users on social media. Unlike previous works that propose reciprocal recommender systems for online dating websites, we devise a distant supervision heuristic to...

10.1609/icwsm.v12i1.15007 article EN Proceedings of the International AAAI Conference on Web and Social Media 2018-06-15

Learning English grammar is a very challenging task for many students, especially non-native speakers. To learn grammar well, it is important to understand its concepts with lots of practise on exercise questions. Previous recommendation systems for English learning mainly focused on recommending reading materials and vocabulary. Different from reading material and vocabulary recommendations, grammar question recommendation should recommend questions that have similar grammatical structure and usage to the questions of interest. The content similarity calculation...

10.1111/exsy.12244 article EN Expert Systems 2017-11-02