Devamanyu Hazarika

ORCID: 0000-0002-0241-7163
Research Areas
  • Topic Modeling
  • Sentiment Analysis and Opinion Mining
  • Natural Language Processing Techniques
  • Advanced Text Analysis Techniques
  • Emotion and Mood Recognition
  • Multimodal Machine Learning Applications
  • Domain Adaptation and Few-Shot Learning
  • Speech Recognition and Synthesis
  • Speech and Dialogue Systems
  • Advanced Neural Network Applications
  • Humor Studies and Applications
  • Social Robot Interaction and HRI
  • Brain Tumor Detection and Classification
  • Text Readability and Simplification
  • Speech and Audio Processing
  • Advanced Image and Video Retrieval Techniques
  • Video Analysis and Summarization
  • Medical Image Segmentation Techniques
  • Anomaly Detection Techniques and Applications
  • Language, Metaphor, and Cognition
  • Advanced Graph Neural Networks
  • Persona Design and Applications
  • Complex Network Analysis Techniques
  • Human Pose and Action Recognition
  • Computational and Text Analysis Methods

National University of Singapore
2017-2023

Amazon (United States)
2020-2023

Hong Kong University of Science and Technology
2023

University of Hong Kong
2023

Amazon (Germany)
2021-2023

Mongolia International University
2023

RIKEN Center for Advanced Intelligence Project
2023

Singapore University of Technology and Design
2020-2022

University of Michigan
2019

Institute of Electrical and Electronics Engineers
2018

Deep learning methods employ multiple processing layers to learn hierarchical representations of data, and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning related models and methods that have been employed for numerous NLP tasks and provide a walk-through of their evolution. We also summarize, compare and contrast the various models and put forward a detailed understanding of the past, present and future of deep learning in NLP.

10.1109/mci.2018.2840738 article EN IEEE Computational Intelligence Magazine 2018-07-20

Deep learning methods employ multiple processing layers to learn hierarchical representations of data and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning related models and methods that have been employed for numerous NLP tasks and provide a walk-through of their evolution. We also summarize, compare and contrast the various models and put forward a detailed understanding of the past, present and future of deep learning in NLP.

10.48550/arxiv.1708.02709 preprint EN cc-by-sa arXiv (Cornell University) 2017-01-01

Emotion recognition in conversations is a challenging task that has recently gained popularity due to its potential applications. Until now, however, a large-scale multimodal multi-party emotional conversational database containing more than two speakers per dialogue was missing. Thus, we propose the Multimodal EmotionLines Dataset (MELD), an extension and enhancement of EmotionLines. MELD contains about 13,000 utterances from 1,433 dialogues from the TV series Friends. Each utterance is annotated with...

10.18653/v1/p19-1050 article EN cc-by 2019-01-01

Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, Louis-Philippe Morency. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2017.

10.18653/v1/p17-1081 article EN cc-by Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2017-01-01

Emotion detection in conversations is a necessary step for a number of applications, including opinion mining over chat history, social media threads, debates, argumentation mining, understanding consumer feedback in live conversations, and so on. Current systems do not treat the parties in the conversation individually by adapting to the speaker of each utterance. In this paper, we describe a new method based on recurrent neural networks that keeps track of the individual party states throughout the conversation and uses this information...

10.1609/aaai.v33i01.33016818 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2019-07-17
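
A minimal sketch of the per-party state idea described above: one recurrent state is kept per speaker, and only the current speaker's state is updated at each utterance before emotion classification. This is not the paper's implementation; the single GRU cell, all dimensions, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PartyStateTracker(nn.Module):
    def __init__(self, utt_dim=100, state_dim=64, n_emotions=6):
        super().__init__()
        self.cell = nn.GRUCell(utt_dim, state_dim)         # updates a speaker's state
        self.classifier = nn.Linear(state_dim, n_emotions)

    def forward(self, utterances, speakers, n_speakers):
        # utterances: (seq_len, utt_dim) features; speakers: speaker id per utterance
        states = [torch.zeros(self.cell.hidden_size) for _ in range(n_speakers)]
        logits = []
        for feat, spk in zip(utterances, speakers):
            # only the party who uttered this turn has its state updated
            states[spk] = self.cell(feat.unsqueeze(0), states[spk].unsqueeze(0)).squeeze(0)
            logits.append(self.classifier(states[spk]))
        return torch.stack(logits)                          # one emotion logit vector per utterance

tracker = PartyStateTracker()
utts = torch.randn(5, 100)                                  # 5 utterances with toy features
print(tracker(utts, [0, 1, 0, 1, 0], n_speakers=2).shape)   # torch.Size([5, 6])
```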

Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach to this task has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces....

10.1145/3394171.3413678 article EN Proceedings of the 30th ACM International Conference on Multimedia 2020-10-12
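
A minimal sketch of the two-subspace idea described above, assuming simple linear projection heads: each modality is mapped into a shared (modality-invariant) and a private (modality-specific) space, with a toy similarity loss aligning the shared projections before fusion. This is not the authors' MISA code; dimensions, names, and the choice of loss are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubspaceProjector(nn.Module):
    def __init__(self, dims, hidden=128):
        super().__init__()
        # one shared (modality-invariant) and one private (modality-specific) head per modality
        self.shared = nn.ModuleDict({m: nn.Linear(d, hidden) for m, d in dims.items()})
        self.private = nn.ModuleDict({m: nn.Linear(d, hidden) for m, d in dims.items()})

    def forward(self, feats):
        shared = {m: torch.relu(self.shared[m](x)) for m, x in feats.items()}
        private = {m: torch.relu(self.private[m](x)) for m, x in feats.items()}
        # toy invariance objective: pull the shared projections of all modality pairs together
        keys = list(shared)
        sim_loss = sum(F.mse_loss(shared[a], shared[b])
                       for i, a in enumerate(keys) for b in keys[i + 1:])
        fused = torch.cat([*shared.values(), *private.values()], dim=-1)  # input to a fusion module
        return fused, sim_loss

proj = SubspaceProjector({"text": 300, "audio": 74, "video": 35})
feats = {"text": torch.randn(4, 300), "audio": torch.randn(4, 74), "video": torch.randn(4, 35)}
fused, sim_loss = proj(feats)
print(fused.shape, sim_loss.item())   # torch.Size([4, 768]) and a scalar loss
```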

Emotion recognition in conversations is crucial for the development of empathetic machines. Present methods mostly ignore the role of inter-speaker dependency relations while classifying emotions in conversations. In this paper, we address recognizing utterance-level emotions in dyadic conversational videos. We propose a deep neural framework, termed conversational memory network, which leverages contextual information from the conversation history. The framework takes a multimodal approach comprising audio, visual and textual...

10.18653/v1/n18-1193 article EN cc-by 2018-01-01

With the recent development of deep learning, research in AI has gained new vigor and prominence. While machine learning has succeeded in revitalizing many research fields, such as computer vision, speech recognition, and medical diagnosis, we are yet to witness impressive progress in natural language understanding. One of the reasons behind this unmatched expectation is that, while a bottom-up approach is feasible for pattern recognition, reasoning and understanding often require a top-down approach. In this work, we couple sub-symbolic and symbolic...

10.1609/aaai.v32i1.11559 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2018-04-25

Emotion recognition in conversations is crucial for building empathetic machines. Present works in this domain do not explicitly consider the inter-personal influences that thrive in the emotional dynamics of dialogues. To this end, we propose the Interactive COnversational memory Network (ICON), a multimodal emotion detection framework that extracts multimodal features from conversational videos and hierarchically models the self- and inter-speaker emotional influences into global memories. Such memories generate contextual summaries which aid...

10.18653/v1/d18-1280 article EN cc-by Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing 2018-01-01
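
A minimal sketch of attending over memories built from the conversation history, in the spirit of the memory-based framing above: past utterance representations act as memories, and an attention-weighted summary is combined with the current utterance before classification. This is not the ICON implementation; dimensions and names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HistoryMemoryAttention(nn.Module):
    def __init__(self, dim=100, n_emotions=6):
        super().__init__()
        self.classifier = nn.Linear(2 * dim, n_emotions)

    def forward(self, history, query):
        # history: (n_past_utterances, dim) memories; query: (dim,) current utterance
        scores = history @ query                    # relevance of each memory
        weights = F.softmax(scores, dim=0)          # attention over the history
        summary = weights @ history                 # contextual summary vector
        return self.classifier(torch.cat([query, summary]))

model = HistoryMemoryAttention()
past, current = torch.randn(7, 100), torch.randn(100)
print(model(past, current).shape)                   # torch.Size([6])
```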

We compile baselines, along with a dataset split, for multimodal sentiment analysis. In this paper, we explore three different deep-learning-based architectures for multimodal sentiment classification, each improving upon the previous one. Further, we evaluate these architectures on multiple datasets with a fixed train/test partition. We also discuss some major issues, frequently ignored in multimodal sentiment analysis research, e.g., the role of speaker-exclusive models, the importance of different modalities, and generalizability. This framework illustrates the different facets to be considered while...

10.1109/mis.2018.2882362 article EN IEEE Intelligent Systems 2018-11-01

Sentiment analysis as a field has come a long way since it was first introduced as a task nearly 20 years ago. It has widespread commercial applications in various domains like marketing, risk management, market research, and politics, to name a few. Given its saturation in specific subtasks, such as sentiment polarity classification, and datasets, there is an underlying perception that this field has reached its maturity. In this article, we discuss this perception by pointing out the shortcomings and under-explored, yet key, aspects of the field that are necessary...

10.1109/taffc.2020.3038167 article EN IEEE Transactions on Affective Computing 2020-11-16

Santiago Castro, Devamanyu Hazarika, Verónica Pérez-Rosas, Roger Zimmermann, Rada Mihalcea, Soujanya Poria. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019.

10.18653/v1/p19-1455 article EN cc-by 2019-01-01

Multimodal sentiment analysis involves identifying sentiment in videos and is a developing field of research. Unlike current works, which model utterances individually, we propose a recurrent model that is able to capture contextual information among utterances. In this paper, we also introduce attention-based networks for improving both context learning and dynamic feature fusion. Our model shows a 6-8% improvement over the state of the art on a benchmark dataset.

10.1109/icdm.2017.134 article EN 2017 IEEE International Conference on Data Mining (ICDM) 2017-11-01

Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an "apparently positive" sentence and, hence, negatively affect performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural...

10.48550/arxiv.1610.08815 preprint EN cc-by arXiv (Cornell University) 2016-01-01
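
A minimal sketch of a convolutional sentence classifier of the kind referenced above (a generic text CNN, not the paper's pre-trained model): token embeddings pass through parallel convolutions of different widths, are max-pooled over time, and feed a sarcastic/non-sarcastic classifier. Vocabulary size, filter widths, and dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64, widths=(3, 4, 5)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList([nn.Conv1d(emb_dim, n_filters, w) for w in widths])
        self.out = nn.Linear(n_filters * len(widths), 2)    # sarcastic vs. non-sarcastic

    def forward(self, token_ids):
        x = self.emb(token_ids).transpose(1, 2)              # (batch, emb_dim, seq_len)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]  # max over time
        return self.out(torch.cat(pooled, dim=1))             # (batch, 2) logits

model = TextCNN()
print(model(torch.randint(0, 10000, (8, 40))).shape)          # torch.Size([8, 2])
```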

The literature in automated sarcasm detection has mainly focused on lexical, syntactic and semantic-level analysis of text. However, a sarcastic sentence can be expressed with contextual presumptions and background commonsense knowledge. In this paper, we propose CASCADE (a ContextuAl SarCasm DEtector) that adopts a hybrid approach of both content- and context-driven modeling for sarcasm detection in online social media discussions. For the latter, CASCADE aims at extracting contextual information from the discourse of a discussion thread. Also, since...

10.48550/arxiv.1805.06413 preprint EN cc-by-sa arXiv (Cornell University) 2018-01-01

Devamanyu Hazarika, Soujanya Poria, Prateek Vij, Gangeshwar Krishnamurthy, Erik Cambria, Roger Zimmermann. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). 2018.

10.18653/v1/n18-2043 article EN cc-by 2018-01-01

Cross-domain sentiment analysis has received significant attention in recent years, prompted by the need to combat the domain gap between different applications that make use of sentiment analysis. In this paper, we take a novel perspective on this task by exploring the role of external commonsense knowledge. We introduce a new framework, KinGDOM, which utilizes the ConceptNet knowledge graph to enrich the semantics of a document by providing both domain-specific and domain-general background concepts. These concepts are learned by training...

10.18653/v1/2020.acl-main.292 article EN cc-by 2020-01-01