- Neurobiology of Language and Bilingualism
- Topic Modeling
- Reading and Literacy Development
- Neuroscience and Music Perception
- Natural Language Processing Techniques
- Language Development and Disorders
- Neural dynamics and brain function
- Hearing Loss and Rehabilitation
- Phonetics and Phonology Research
- Cognitive and developmental aspects of mathematical skills
- Advanced Text Analysis Techniques
- EEG and Brain-Computer Interfaces
- Scientific Research and Philosophical Inquiry
- Advanced Malware Detection Techniques
- Text Readability and Simplification
- Hate Speech and Cyberbullying Detection
- Computational and Text Analysis Methods
- Blind Source Separation Techniques
- Neural Networks and Applications
- Action Observation and Synchronization
- Statistical and numerical algorithms
- Simulation Techniques and Applications
- Categorization, perception, and language
- Media Influence and Health
- Software Engineering Research
The Scarborough Hospital
2022-2025
University of Toronto
2022-2025
University of Michigan
2023
University of Maryland, Baltimore
2022
University of Maryland, College Park
2021-2022
Research Institute for Advanced Computer Science
2022
University of Baltimore
2022
Cornell University
2018-2020
Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints....
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses...
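The core of the TRF approach is ordinary time-lagged regression: the neural response at each moment is modeled as a weighted sum of recent stimulus values, and the fitted weights over lags form the response function. Below is a minimal NumPy sketch using ridge regression; the envelope predictor, lag window, and regularization strength are illustrative choices, not a toolkit's defaults.

```python
import numpy as np

def estimate_trf(stimulus, response, fs, tmin=0.0, tmax=0.4, alpha=1.0):
    """Estimate a temporal response function (TRF) by time-lagged ridge regression.

    stimulus : 1-D array, e.g. a speech envelope sampled at fs Hz
    response : 1-D neural time series sampled at the same rate
    Returns (lag times in seconds, TRF weights), one weight per lag in [tmin, tmax].
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)   # non-negative lags assumed
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stimulus[:n - lag]                    # column = stimulus delayed by `lag`
    # Ridge solution: w = (X'X + alpha*I)^(-1) X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ response)
    return lags / fs, w

# Toy usage: a response that echoes the "envelope" at a 100 ms delay, plus noise.
rng = np.random.default_rng(0)
fs, n_sec = 100, 60
env = rng.standard_normal(fs * n_sec)
resp = np.r_[np.zeros(10), env[:-10]] + 0.5 * rng.standard_normal(fs * n_sec)
lag_times, trf = estimate_trf(env, resp, fs)
print(lag_times[np.argmax(np.abs(trf))])   # peak lag should be near 0.1 s
```

Toolkits such as Eelbrain handle multiple predictors, many sensors, and cross-validation; this sketch covers only the single-predictor, single-channel case.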
Efforts to understand the brain bases of language face the Mapping Problem: At what level do linguistic computations and representations connect to human neurobiology? We review one approach to this problem that relies on rigorously defined computational models to specify links between features and neural signals. Such tools can be used to estimate how predictions, model features, or a sequence of processing steps may quantitatively fit signals collected while participants use language. Progress has been helped by...
This study examines memory retrieval and syntactic composition using fMRI while participants listen to a book, The Little Prince. These two processes are quantified by drawing on methods from computational linguistics. Memory retrieval is quantified via multi-word expressions that are likely to be stored as a unit, rather than built up compositionally. Syntactic composition is quantified via bottom-up parsing, which tracks the tree-building work needed in composed phrases. Regression analyses localise these processes to spatially distinct brain regions. Composition mainly...
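For a bracketed parse, the bottom-up complexity metric mentioned above comes down to counting how many constituents are completed at each word. A toy sketch of that word-by-word count follows; the bracketing and the exact counting convention are illustrative, not necessarily the ones used in the study.

```python
import re

def bottom_up_node_counts(bracketed_parse):
    """Word-by-word count of constituents completed, a bottom-up complexity metric.

    In a Penn-Treebank-style bracketing, each terminal word is immediately followed
    by the brackets of all constituents that close at that word, so the per-word
    count is the length of that run of ')'.
    """
    counts = []
    for token, closers in re.findall(r"([^\s()]+)\s*(\)*)", bracketed_parse):
        if closers:            # tokens followed by ')' are terminal words
            counts.append((token, len(closers)))
        # tokens with no ')' after them are non-terminal labels; skip them
    return counts

# Toy parse of "the prince drew a sheep" (labels and bracketing are made up).
parse = "(S (NP (DT the) (NN prince)) (VP (VBD drew) (NP (DT a) (NN sheep))))"
print(bottom_up_node_counts(parse))
# [('the', 1), ('prince', 2), ('drew', 1), ('a', 1), ('sheep', 4)]
```

These per-word counts can then serve as a word-level regressor against the fMRI signal, alongside nuisance predictors.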
Neuroimaging using more ecologically valid stimuli such as audiobooks has advanced our understanding of natural language comprehension in the brain. However, prior naturalistic studies have typically been restricted to a single language, which has limited generalizability beyond small typological domains. Here we present the Le Petit Prince fMRI Corpus (LPPC-fMRI), a multilingual resource for research in the cognitive neuroscience of speech and language during naturalistic listening (OpenNeuro: ds003643). 49 English speakers, 35 Chinese...
When we listen to speech, our brain's neurophysiological responses "track" its acoustic features, but it is less well understood how these auditory responses are enhanced by linguistic content. Here, we recorded magnetoencephalography (MEG) responses while subjects of both sexes listened to four types of continuous-speech-like passages: speech-envelope modulated noise, English-like non-words, scrambled words, and a narrative passage. Temporal response function (TRF) analysis provides strong neural evidence for the...
Computational approaches to prediction in online sentence processing tend to be dominated by computation-level surprisal theory, offering few insights into underlying cognitive mechanisms. Conversely, predictive coding is an algorithmic theory grounded in neuroscience, but it has rarely been employed in the study of language processing, in part because its areas of application have not involved sequential processing. Building on a recently proposed temporal model, we present what is, to our knowledge, the first...
Shohini Bhattasali, Jeremy Cytryn, Elana Feldman, Joonsuk Park. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). 2015.
Miloš Stanojević, Shohini Bhattasali, Donald Dunagan, Luca Campanelli, Mark Steedman, Jonathan Brennan, John Hale. Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics. 2021.
Are the brain bases of language comprehension the same across all human languages, or do these vary in a way that corresponds to differences in linguistic typology? English and Mandarin Chinese attest such a typological difference in the domain of relative clauses. Using functional magnetic resonance imaging with participants who listened to a translation-equivalent story, we analyzed neuroimages time-aligned to object-extracted relative clauses in both languages. In a general linear model analysis of the naturalistic data...
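For readers unfamiliar with the general linear model step, the basic recipe is to mark the time-aligned events of interest, convolve them with a canonical haemodynamic response function, and regress the resulting predictor against each voxel's time course. Below is a minimal single-regressor sketch with NumPy/SciPy; the HRF parameters, TR, and onsets are simplified placeholders rather than the study's actual design.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """Simplified double-gamma haemodynamic response function, sampled every `tr` s."""
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)           # positive response peaking around 5-6 s
    undershoot = gamma.pdf(t, 16)    # later undershoot
    hrf = peak - undershoot / 6.0
    return hrf / hrf.max()

def glm_fit(event_onsets, n_scans, tr, voxel_ts):
    """Single-regressor GLM: convolve event impulses with the HRF, add an
    intercept, and estimate beta weights by ordinary least squares."""
    impulses = np.zeros(n_scans)
    impulses[(np.asarray(event_onsets) / tr).astype(int)] = 1.0
    regressor = np.convolve(impulses, canonical_hrf(tr))[:n_scans]
    X = np.column_stack([np.ones(n_scans), regressor])
    betas, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    return betas                      # betas[1] is the effect of the time-aligned events

# Toy usage: synthesize a voxel that responds to the events, then recover the beta.
tr, n_scans = 2.0, 300
onsets = [20.0, 90.0, 210.0, 400.0]   # event times in seconds (made up)
rng = np.random.default_rng(1)
impulses = np.zeros(n_scans)
impulses[(np.array(onsets) / tr).astype(int)] = 1.0
voxel = 2.0 * np.convolve(impulses, canonical_hrf(tr))[:n_scans] + rng.standard_normal(n_scans)
print(glm_fit(onsets, n_scans, tr, voxel))   # second beta should be near 2.0
```

Real analyses add nuisance regressors (motion, word rate, acoustic envelope) and run the fit over every voxel, but the regression logic is the same.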
Intellectual property (IP) theft is a growing problem. We build on prior work to deter IP theft by generating n fake versions of a technical document, so a thief has to expend time and effort in identifying the correct document. Our new SbFAKE framework proposes, for the first time, a novel combination of language processing, optimization, and the psycholinguistic concept of surprisal to generate a set of such fakes. We start by combining psycholinguistic-based scores with optimization into two bilevel problems (an Explicit one and a simpler Implicit...
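As a rough intuition for how surprisal can enter such an optimization, here is a toy greedy stand-in, not the paper's bilevel SbFAKE formulation: score candidate fake documents with an add-one-smoothed bigram surprisal model and prefer fakes whose average surprisal matches the original's, so they read as equally plausible. The corpus, documents, and ranking criterion below are all made up for illustration.

```python
import math
from collections import Counter

def train_bigram(corpus_tokens):
    """Add-one-smoothed bigram model from a token list (toy reference corpus)."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return unigrams, bigrams, len(unigrams)

def mean_surprisal(tokens, unigrams, bigrams, v):
    """Average -log2 P(w_i | w_{i-1}) over a document."""
    s = 0.0
    for prev, w in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, w)] + 1) / (unigrams[prev] + v)
        s += -math.log2(p)
    return s / max(len(tokens) - 1, 1)

def rank_fakes(original, fakes, unigrams, bigrams, v):
    """Greedy stand-in for the optimization step: prefer fakes whose mean surprisal
    is closest to the original's, i.e. fakes that read as equally plausible."""
    target = mean_surprisal(original, unigrams, bigrams, v)
    return sorted(fakes, key=lambda f: abs(mean_surprisal(f, unigrams, bigrams, v) - target))

# Toy usage with a tiny made-up corpus and two candidate fakes.
corpus = "the reactor uses a graphite moderator and a water coolant".split()
unigrams, bigrams, v = train_bigram(corpus)
original = "the reactor uses a graphite moderator".split()
fakes = ["the reactor uses a water moderator".split(),
         "the moderator uses a reactor graphite".split()]
print(rank_fakes(original, fakes, unigrams, bigrams, v)[0])
```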
Context guides comprehenders' expectations during language processing, and information-theoretic surprisal is commonly used as an index of cognitive processing effort. However, prior work using surprisal has considered only within-sentence context, with n-grams, neural models, or syntactic structure as the conditioning context. In this paper, we extend the approach to use broader topical context, investigating the influence of local context on processing via an analysis of fMRI time courses collected during naturalistic listening. Lexical surprisal calculated from...
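Word-level surprisal of the kind used as an fMRI regressor can be computed from any probabilistic language model as -log2 P(word | context). The sketch below uses a pretrained causal language model via Hugging Face transformers; GPT-2 is an illustrative choice, not necessarily the model used in the paper, and it yields subword surprisals, which are typically summed within each word.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_surprisals(text, model_name="gpt2"):
    """Surprisal in bits, -log2 P(token | left context), for each token of `text`,
    using a pretrained causal language model (model choice is illustrative)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    out = []
    for i in range(1, ids.shape[1]):                # first token has no left context
        lp = log_probs[0, i - 1, ids[0, i]].item()  # log P(token_i | tokens_<i)
        out.append((tok.decode(ids[0, i]), -lp / math.log(2)))
    return out

for token, bits in token_surprisals("The little prince asked me to draw a sheep."):
    print(f"{token!r}\t{bits:.2f} bits")
```

Broader topical context, as in the paper, would condition these probabilities on document-level information rather than only the preceding words of the sentence.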
One aspect of natural language comprehension is understanding how many of what or whom a speaker is referring to. While previous work has documented the neural correlates of general number and quantity comparison, we investigate semantic number from a cross-linguistic perspective, with the goal of identifying cortical regions involved in distinguishing plural from singular nouns. We use three fMRI datasets in which Chinese, French, and English native speakers listen to an audiobook of a children’s story in their language....