- Machine Learning in Healthcare
- Mental Health via Writing
- Topic Modeling
- Artificial Intelligence in Healthcare and Education
- Mental Health Research Topics
- IoT and Edge/Fog Computing
- Stock Market Forecasting Methods
- Time Series Analysis and Forecasting
- Context-Aware Activity Recognition Systems
- Sentiment Analysis and Opinion Mining
- IoT-based Smart Home Systems
- Complex Network Analysis Techniques
- Forecasting Techniques and Applications
- Behavioral Health and Interventions
- Resilience and Mental Health
- Legal Education and Practice Innovations
- Data-Driven Disease Surveillance
- Data Stream Mining Techniques
- Substance Abuse Treatment and Outcomes
- Teaching and Learning Programming
- Psychological Testing and Assessment
- Computational and Text Analysis Methods
- Biomedical Text Mining and Ontologies
- Educational Tools and Methods
- Ethics in Business and Education
Stony Brook University
2020-2024
Sri Ramachandra Institute of Higher Education and Research
2019
Sri Sivasubramaniya Nadar College of Engineering
2019
Lion Foundation
2018
In human-level NLP tasks, such as predicting mental health, personality, or demographics, the number of observations is often smaller than the standard 768+ hidden state sizes of each layer within modern transformer-based language models, limiting the ability to effectively leverage transformers. Here, we provide a systematic study on the role of dimension reduction methods (principal components analysis, factorization techniques, multi-layer auto-encoders) as well as the dimensionality of embedding vectors and sample...
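As a minimal sketch of the setting this abstract describes — more embedding dimensions than observations — the following reduces synthetic 768-dimensional "transformer" embeddings to a smaller number of principal components via SVD. The data and the target dimensionality (64) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical setup: 200 "users" (observations), each with a 768-dim
# transformer hidden-state embedding -- fewer observations than dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 768))

def pca_reduce(X, k):
    """Reduce embeddings to their top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # project onto top-k components

Z = pca_reduce(X, 64)
print(Z.shape)  # (200, 64)
```

Factorization or auto-encoder variants, as studied in the paper, would slot into the same place as `pca_reduce`.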
In the most comprehensive population surveys, mental health is only broadly captured through questionnaires asking about “mentally unhealthy days” or feelings of “sadness.” Further, estimates are predominantly consolidated to yearly values at the state level, which is considerably coarser than the best physical health measures. Through large-scale analysis of social media, robust estimation is feasible at finer resolutions. In this study, we created a pipeline that used ~1 billion Tweets from 2 million geo-located users...
Current speech encoding pipelines often rely on separate processing of text and audio, not fully leveraging the inherent overlap between these modalities for understanding human communication. Language models excel at capturing semantic meaning from text, which can complement the additional prosodic, emotional, and acoustic cues in speech. This work bridges the gap by proposing WhiSPA (Whisper with Semantic-Psychological Alignment), a novel audio encoder trained with a contrastive student-teacher learning objective. Using...
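A contrastive student-teacher objective of the kind named here can be sketched as an InfoNCE-style loss: within a batch, each student (audio) embedding should be most similar to its matched teacher (text) embedding. The embeddings, batch size, and temperature below are illustrative assumptions, not WhiSPA's actual configuration.

```python
import numpy as np

# Hypothetical batch: 4 matched (audio, text) pairs with 8-dim embeddings.
rng = np.random.default_rng(1)
student = rng.standard_normal((4, 8))   # e.g., audio-encoder outputs
teacher = rng.standard_normal((4, 8))   # e.g., text-encoder outputs

def contrastive_alignment_loss(s, t, temperature=0.1):
    """InfoNCE-style loss: matched pairs should have the highest
    cosine similarity within the batch."""
    s = s / np.linalg.norm(s, axis=1, keepdims=True)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    logits = (s @ t.T) / temperature                        # pairwise similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                     # matched pairs on the diagonal

loss = contrastive_alignment_loss(student, teacher)
print(float(loss))
```

Minimizing this loss pulls each audio embedding toward its paired text embedding while pushing it away from the other texts in the batch.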
Implicit motives, nonconscious needs that influence individuals' behaviors and shape their emotions, have been part of personality research for nearly a century but differ from traits. The implicit motive assessment is very resource-intensive, involving expert coding of written stories about ambiguous pictures, which has hampered research. Using large language models and machine learning techniques, we aimed to create high-quality models that are easy for researchers to use. We trained models to code the need for power, achievement,...
Mohammadzaman Zamani, H. Andrew Schwartz, Johannes Eichstaedt, Sharath Chandra Guntuku, Adithya Virinchipuram Ganesan, Sean Clouston, Salvatore Giorgi. Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. 2020.
Background: Unhealthy alcohol consumption is a severe public health problem. But low to moderate consumption is associated with high subjective well-being, possibly because alcohol is commonly consumed socially together with friends, who often are important for well-being. Disentangling the affective and social complexities of drinking behavior has been difficult using traditional rating scales and cross-sectional designs. We aim to better understand these complexities by examining individuals’ everyday affective well-being and language, in addition to scales, via both...
Very large language models (LLMs) perform extremely well on a spectrum of NLP tasks in a zero-shot setting. However, little is known about their performance on human-level problems which rely on understanding psychological concepts, such as assessing personality traits. In this work, we investigate the ability of GPT-3 to estimate Big 5 personality traits from users’ social media posts. Through a set of systematic experiments, we find that it performs somewhat close to an existing pre-trained SotA model for broad classification upon injecting...
Swanie Juhng, Matthew Matero, Vasudha Varadarajan, Johannes Eichstaedt, Adithya V Ganesan, H. Andrew Schwartz. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2023.
Social science NLP tasks, such as emotion or humor detection, are required to capture the semantics along with the implicit pragmatics from text, often with limited amounts of training data. Instruction tuning has been shown to improve many capabilities of large language models (LLMs), such as commonsense reasoning, reading comprehension, and computer programming. However, little is known about the effectiveness of instruction tuning on the social domain, where implicit pragmatic cues need to be captured. We explore the use of instruction tuning for social science NLP tasks and introduce...
BACKGROUND: Modern Artificial Intelligence (AI) has shown promise in identifying psychopathology based on the language used by patients, providing a scalable method for obtaining relevant behavioral markers. However, no existing models for assessing posttraumatic stress disorder (PTSD) have successfully demonstrated out-of-sample replicability. We develop a language-based AI model for PTSD and rigorously evaluate its replicability in a prospective sample. METHODS: Participants from the Stony Brook World Trade...
Self-reported rating scales have been central to social science for decades, even though language is our primary form of communication. We used language analysis with machine learning to compare self-reported ratings with language-based responses about experienced well-being (i.e., daily emotions) linked to human traits, states, and behaviors. In a sample of 764 U.S. service workers who reported their feelings for up to 12 weeks, responses were provided as 1) numerical ratings, 2) short essays, and 3) descriptive words. When predicting nine...
Use of large language models such as ChatGPT (GPT-4) for mental health support has grown rapidly, emerging as a promising route to assess and help people with mood disorders, like depression. However, we have limited understanding of GPT-4's schema of depression, that is, how it internally associates and interprets symptoms. In this work, we leveraged contemporary measurement theory to decode how GPT-4 interrelates depressive symptoms to inform both clinical utility and theoretical understanding. We found that GPT-4's assessment of depression: (a)...
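A basic quantity in the measurement theory invoked here is how symptom ratings interrelate, typically summarized as an inter-item correlation matrix. The following sketch uses made-up PHQ-style severity ratings purely for illustration; the items, respondents, and values are assumptions, not data from the study.

```python
import numpy as np

# Hypothetical symptom severity ratings (0-3) for 6 respondents
# on three depression items; values are illustrative only.
scores = np.array([
    [0, 1, 0],
    [1, 1, 1],
    [2, 2, 1],
    [3, 2, 3],
    [1, 0, 1],
    [2, 3, 2],
])

# Inter-item correlation matrix: how strongly symptom ratings co-vary,
# the starting point for analyzing how symptoms interrelate.
corr = np.corrcoef(scores, rowvar=False)
print(corr.round(2))
```

Applied to symptom scores produced by a model rather than by people, the same matrix describes the model's internal symptom schema.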
Artificial intelligence-based language generators are now a part of most people's lives. However, by default, they tend to generate "average" language without reflecting the ways in which people differ. Here, we propose a lightweight modification to the standard transformer language model architecture - "PsychAdapter" - that uses empirically derived trait-language patterns to generate natural language for specified personality, demographic, and mental health characteristics (with or without prompting). We applied PsychAdapters to modify OpenAI's...
Adithya V Ganesan, Vasudha Varadarajan, Juhi Mittal, Shashanka Subrahmanya, Matthew Matero, Nikita Soni, Sharath Chandra Guntuku, Johannes Eichstaedt, H. Andrew Schwartz. Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology. 2022.
Compared to physical health, population mental health measurement in the U.S. is very coarse-grained. Currently, in the largest surveys, such as those carried out by the Centers for Disease Control or Gallup, mental health is only broadly captured through "mentally unhealthy days" or "sadness", and is limited to relatively infrequent state or metropolitan estimates. Through large-scale analysis of social media data, robust estimation is feasible at much higher resolutions, up to weekly estimates for counties. In the present work, we validate a...