- Multisensory Perception and Integration
- Neural Dynamics and Brain Function
- Functional Brain Connectivity Studies
- Visual Perception and Processing Mechanisms
- EEG and Brain-Computer Interfaces
- Memory and Neural Mechanisms
- Olfactory and Sensory Function Studies
- Neuroscience and Music Perception
- Neural and Behavioral Psychology Studies
- Tactile and Sensory Interactions
- Child and Animal Learning Development
- Neural Networks and Applications
- Categorization, Perception, and Language
- Language, Metaphor, and Cognition
- Fault Detection and Control Systems
- Face Recognition and Perception
- Animal Behavior and Reproduction
- Blind Source Separation Techniques
- Primate Behavior and Ecology
- Color Perception and Design
- Behavioral and Psychological Studies
- Face Recognition and Analysis
- Spatial Cognition and Navigation
- Tensor Decomposition and Applications
- Advanced Neuroimaging Techniques and Applications
California University of Pennsylvania
2023-2024
University of Pennsylvania
2022-2024
Baylor College of Medicine
2015-2023
Rush University
2022
The University of Texas Health Science Center at Houston
2015
Auburn University
2008-2014
The University of Texas Health Science Center at San Antonio
2013
The University of Texas at Austin
2013
James Madison University
2011
A visual cortical prosthesis (VCP) has long been proposed as a strategy for restoring useful vision to the blind, under the assumption that percepts of small spots of light produced by electrical stimulation of visual cortex (phosphenes) will combine into coherent forms, like pixels on a video screen. We tested an alternative strategy in which shapes were traced on the cortical surface by stimulating electrodes in dynamic sequence. In both sighted and blind participants, dynamic stimulation enabled accurate recognition of letter shapes predicted by the brain's spatial map...
Meningiomas account for one-third of all primary brain tumors. Although typically benign, about 20% of meningiomas are aggressive, and despite the rigor of the current histopathological classification system there remains considerable uncertainty in predicting tumor behavior. Here, we analyzed 160 tumors from all 3 World Health Organization (WHO) grades (I through III) using clinical, gene expression, and sequencing data. Unsupervised clustering analysis identified molecular types (A, B, and C) that reliably...
During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speech perception. We describe a generative model of multisensory speech perception that includes the critical step of determining the likelihood that the voice and face have a common cause. A key feature of the model is that it is based on a principled analysis of how an observer should solve this causal inference problem...
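The causal inference step described above can be illustrated with a minimal numerical sketch. This is not the authors' CIMS implementation; the function name, Gaussian noise widths, prior width, and prior probability of a common cause are all illustrative assumptions. The observer compares the evidence that one source generated both cues against the evidence for two independent sources:

```python
import numpy as np

def common_cause_posterior(x_a, x_v, sigma_a=1.0, sigma_v=1.0,
                           sigma_p=4.0, p_common=0.5):
    """Posterior probability that auditory cue x_a and visual cue x_v
    share a common cause, via Bayesian model comparison.
    Integrals over candidate source locations are done on a grid."""
    s = np.linspace(-20, 20, 2001)          # candidate source values
    ds = s[1] - s[0]
    prior = np.exp(-s**2 / (2 * sigma_p**2)) / (sigma_p * np.sqrt(2 * np.pi))
    like_a = np.exp(-(x_a - s)**2 / (2 * sigma_a**2)) / (sigma_a * np.sqrt(2 * np.pi))
    like_v = np.exp(-(x_v - s)**2 / (2 * sigma_v**2)) / (sigma_v * np.sqrt(2 * np.pi))
    # C = 1: a single source generated both cues
    p_c1 = np.sum(like_a * like_v * prior) * ds
    # C = 2: independent sources generated each cue
    p_c2 = (np.sum(like_a * prior) * ds) * (np.sum(like_v * prior) * ds)
    return p_c1 * p_common / (p_c1 * p_common + p_c2 * (1 - p_common))

# Similar cues favor a common cause; discrepant cues favor separate causes.
print(common_cause_posterior(0.0, 0.5))   # high (common cause likely)
print(common_cause_posterior(0.0, 6.0))   # low (separate causes likely)
```

With small audiovisual discrepancy the common-cause model wins and the cues should be integrated; with large discrepancy the independent-causes model wins and integration should be reduced, which is the behavior the generative model formalizes.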
Audiovisual speech integration combines information from the auditory modality (the talker's voice) and the visual modality (the talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether the auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk effect, consists of incongruent syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two...
Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory speech is degraded, as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, the direct recording of neural activity with electrodes implanted on the cortical surface. We observed a double dissociation in responses to clear and noisy speech within the superior temporal gyrus (STG), a region long known to be important for...
Visual information about speech content from the talker's mouth is often available before auditory information from the talker's voice. Here we examined perceptual and neural responses to words with and without this visual head start. For both types of words, perception was enhanced by viewing the talker's face, but the enhancement was significantly greater for words with a head start. Neural responses were measured with electrodes implanted over association cortex in the posterior superior temporal gyrus (pSTG) of epileptic patients. The presence of visual speech suppressed responses to auditory speech, more so for words with a head start. We suggest that...
Obsessive-compulsive disorder (OCD) affects 2–3% of the population. One-third of patients are poorly responsive to conventional therapies, and for a subgroup, gamma knife capsulotomy (GKC) is an option. We examined lesion characteristics in patients previously treated with GKC through well-established programs in Providence, RI (Butler Hospital/Rhode Island Hospital/Alpert Medical School of Brown University) and São Paulo, Brazil (University of São Paulo). Lesions were traced on T1 images from 26 patients who had...
In the McGurk effect, presentation of incongruent auditory and visual speech evokes a fusion percept different than either component modality. We show that repeatedly experiencing the McGurk effect for 14 days induces a change in auditory-only speech perception: the auditory stimulus begins to evoke the fusion percept, even when presented on its own without accompanying visual speech. This perceptual change, termed fusion-induced recalibration (FIR), was talker-specific and syllable-specific and persisted for a year or more in some participants without any...
INTRODUCTION: Surgical removal of the epileptogenic zone (EZ) after iEEG evaluation is the most successful approach for achieving seizure control in drug-resistant epilepsy patients. Although automated algorithms for EZ localization hold potential, issues like technical complexity and reproducibility hurdles limit their clinical adoption. METHODS: We developed a module called FREEZ (Frequency Range Explorer to assist Epileptogenic Zone localization) within the open-source platform RAVE, enabling visualization...
Deep brain stimulation (DBS) is a promising treatment for refractory depression, utilizing surgically implanted electrodes to stimulate specific anatomical targets within the brain. However, the limitations of patient-reported and clinician-administered mood assessments pose obstacles in evaluating DBS efficacy. In this study, we investigated whether an affective bias task, which leverages the inherently negative interpretation of emotional stimuli seen in individuals with depression, could serve as a reliable measure of changes...
Humans combine the visual information from mouth movements with the auditory information from the voice to recognize speech. A common method for assessing audiovisual speech perception is the McGurk effect: when presented with some incongruent pairings of auditory and visual syllables (e.g., the sound "ba" dubbed onto a face articulating "ga"), individuals perceive a third syllable, distinct from both components. The many differences between Chinese and American culture and language suggest the possibility of group differences in the effect. Published studies have reported less of the effect in native Mandarin...
Direct recording of neural activity from the human brain using implanted electrodes (iEEG, intracranial electroencephalography) is a fast-growing technique in human neuroscience. While the ability to record with high spatial and temporal resolution has advanced our understanding, it generates staggering amounts of data: a single patient can be implanted with hundreds of electrodes, each sampled thousands of times per second for hours or days. The difficulty of exploring these vast datasets is the rate-limiting step in discovery. To overcome...
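To make the data volume concrete, a back-of-envelope calculation with assumed but plausible figures (200 electrodes, 2 kHz sampling, 4-byte samples, 24 hours of recording; none of these numbers come from the abstract itself) runs:

```python
electrodes = 200        # assumed electrode count
rate_hz = 2000          # assumed sampling rate per electrode
bytes_per_sample = 4    # float32 samples
seconds = 24 * 3600     # one day of continuous recording

total_bytes = electrodes * rate_hz * bytes_per_sample * seconds
print(f"{total_bytes / 1e9:.0f} GB per patient-day")  # prints "138 GB per patient-day"
```

Even under these modest assumptions, a single monitoring week per patient approaches a terabyte, which is why interactive exploration tools become the bottleneck.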
Emotion is represented in limbic and prefrontal brain areas, herein termed the affective salience network (ASN). Within the ASN, there are substantial unknowns about how valence and emotional intensity are processed; specifically, which nodes are associated with affective bias (a phenomenon in which participants interpret emotions in a manner consistent with their own mood). A recently developed feature detection approach ('specparam') was used to select dominant spectral features from human intracranial electrophysiological...
The prevalence of synthetic talking faces in both commercial and academic environments is increasing as the technology to generate them grows more powerful and more available. While it has long been known that seeing the face of the talker improves human perception of speech-in-noise, recent studies have shown that faces generated by deep neural networks (DNNs) are also able to improve perception of speech-in-noise. However, in previous studies the benefit provided by DNN-generated faces was only about half that of real talkers. We sought to determine whether an alternative method...
The ability to learn abstract relational concepts is fundamental to higher-level cognition. In contrast to item-specific concepts (e.g. pictures containing trees versus cars), relational concepts are not bound to particular stimulus features, but instead involve the relationship between stimuli and therefore may be extrapolated to novel stimuli. Previous research investigating the same/different concept has suggested that primates might be specially adapted to extract relations among items and would require fewer exemplars to learn such a rule than...
Corvids (birds of the family Corvidae) display intelligent behavior previously ascribed only to primates, but such feats are not directly comparable across species. To make direct species comparisons, we used a same/different task in the laboratory to assess abstract-concept learning in black-billed magpies (Pica hudsonia). Concept learning was tested with novel pictures after training. Performance improved with training-set size, and test accuracy eventually matched training accuracy (full concept learning) with a 128-picture...
The McGurk effect is a popular assay of multisensory integration in which participants report the illusory percept "da" when presented with incongruent auditory "ba" and visual "ga" (AbaVga). While the original publication describing the effect found that 98% of participants perceived it, later studies have reported much lower prevalence, ranging from 17% to 81%. Understanding the source of this variability is important for interpreting the panoply of studies that examine prevalence between groups, including clinical populations such as...
Experimentalists studying multisensory integration compare neural responses to multisensory stimuli with responses to the component modalities presented in isolation. This procedure is problematic for speech perception, since audiovisual and auditory-only speech are easily intelligible but visual-only speech is not. To overcome this confound, we developed intracranial electroencephalography (iEEG) deconvolution. Individual stimuli always contained both auditory and visual speech, but jittering the onset asynchrony between the modalities allowed the time course of the unisensory...
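The general idea behind jittered-onset deconvolution can be sketched in a few lines. This is an illustration, not the authors' iEEG pipeline: the sampling rate, trial spacing, jitter range, noise level, and ground-truth response shapes below are all invented for the demo. Because the audiovisual asynchrony varies across trials, the overlapping auditory and visual responses become separable by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: every trial contains both an auditory and a visual
# onset, with the audiovisual asynchrony jittered from trial to trial.
fs = 100                       # samples per second (assumed)
n_samples = 60 * fs            # one minute of simulated recording
n_lags = 50                    # length of each estimated response (0.5 s)
audio_onsets = np.arange(2 * fs, n_samples - 2 * fs, 2 * fs)
visual_onsets = audio_onsets + rng.integers(-20, 21, size=audio_onsets.size)

# Ground-truth unisensory response time courses to recover
t = np.arange(n_lags)
true_audio = np.exp(-t / 10.0)
true_visual = np.sin(t / 8.0) * np.exp(-t / 15.0)

def stick(onsets):
    """Impulse train marking event onsets."""
    x = np.zeros(n_samples)
    x[onsets] = 1.0
    return x

def lagged(x):
    """Design matrix whose k-th column is x delayed by k samples."""
    return np.column_stack([np.roll(x, k) for k in range(n_lags)])

signal = (np.convolve(stick(audio_onsets), true_audio)[:n_samples]
          + np.convolve(stick(visual_onsets), true_visual)[:n_samples]
          + 0.05 * rng.standard_normal(n_samples))

# Joint least-squares deconvolution of both unisensory responses
X = np.hstack([lagged(stick(audio_onsets)), lagged(stick(visual_onsets))])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
est_audio, est_visual = beta[:n_lags], beta[n_lags:]
```

Without jitter, the auditory and visual regressors would be perfectly collinear and the two response time courses could not be separated; the jitter is what makes the design matrix well conditioned.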
The McGurk effect is a widely used measure of multisensory integration during speech perception. Two observations have raised questions about its validity as a tool for understanding speech perception. First, there is high variability in perception of the effect across different stimuli and observers. Second, observers show a low correlation between their susceptibility to the McGurk effect and their recognition of visual speech paired with auditory speech-in-noise, another common measure of integration. Using the framework of the causal inference of multisensory speech (CIMS) model, we explored the relationship between the McGurk effect, syllable...