- Hearing Loss and Rehabilitation
- Noise Effects and Management
- Neural Dynamics and Brain Function
- Neuroscience and Music Perception
- EEG and Brain-Computer Interfaces
- Fiber-Reinforced Polymer Composites
- Speech and Audio Processing
- Neural Networks and Applications
- Hearing, Cochlea, Tinnitus, Genetics
- Neural and Behavioral Psychology Studies
- Graphene Research and Applications
- Biodegradable Polymer Synthesis and Properties
- Flame Retardant Materials and Properties
- Blind Source Separation Techniques
- Education, Safety, and Science Studies
- Recycling and Waste Management Techniques
- Electrospun Nanofibers in Biomedical Applications
- Tactile and Sensory Interactions
- Supercapacitor Materials and Fabrication
- Multisensory Perception and Integration
- Fuel Cells and Related Materials
- Advanced Adaptive Filtering Techniques
Montclair State University
2022-2024
Purdue University West Lafayette
2020-2023
University of Iowa
2016-2020
Jeonbuk National University
2013
Older listeners have difficulty understanding speech in unfavorable listening conditions. To compensate for acoustic degradation, cognitive processing skills, such as working memory, need to be engaged. Despite prior findings on the association between working memory and speech recognition in various conditions, it is not yet clear whether the modality of stimulus presentation in working memory tasks should be auditory or visual. Given modality-specific characteristics, we hypothesized that working memory capacity could predict speech recognition performance in adverse...
Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. There is variance in individuals' ability to understand SiN that cannot be explained by simple hearing profiles, which suggests that central factors may underlie this ability. Here, we elucidated a few cortical functions involved during SiN understanding and their contributions to individual differences, using both within- and across-subject approaches. Through our within-subject analysis of source-localized electroencephalography, we investigated how acoustic...
Selective attention enhances cortical responses to attended sensory inputs while suppressing others, which can be an effective strategy for speech-in-noise (SiN) understanding. Emerging evidence exhibits a large variance in attentional control during SiN tasks, even among normal-hearing listeners. Yet whether training can enhance the efficacy of attentional control and, if so, whether its effects transfer to performance on a SiN task has not been explicitly studied. Here, we introduce a neurofeedback training paradigm designed to reinforce...
Understanding speech-in-noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The present study examined predictors of SiN ability in a large cohort of cochlear-implant (CI) users. We recorded electroencephalography from 114 postlingually...
Polyacrylonitrile (PAN)-based ultrafine fibers and carbon fibers were produced by wet-spinning, and the crystal sizes and thermal and mechanical properties of the fibers were investigated. Scanning electron microscopy revealed that the superfine fibrils in the surfaces of the PAN/polyvinyl acetate (PVA) blend fibers increased slightly with increasing PAN content before removal of the PVA. Differential scanning calorimetry indicated that PAN and PVA do not mix and, therefore, each maintains its inherent characteristics. The fibers were prepared by removing water at 5 wt %...
Noise reduction (NR) algorithms are employed in nearly all commercially available hearing aids to attenuate background noise. However, NR processing also involves undesirable speech distortions, leading to variability in outcomes among individuals with different noise tolerance. Leveraging data from 30 participants with normal hearing engaged in speech-in-noise tasks, the present study examined whether a cortical measure of neural signal-to-noise ratio (SNR)—the amplitude of auditory evoked responses to the target onset and...
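For illustration only, a neural SNR of the kind described above could be computed from epoched EEG as the ratio between evoked-response amplitudes at the target-word onset and at the noise onset. The minimal numpy sketch below assumes single-channel epochs and an N1-P2-style analysis window; the window values and the dB scaling are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def evoked_peak_to_peak(epochs, times, window=(0.05, 0.30)):
    """Peak-to-peak amplitude of the trial-averaged evoked response in a window.

    epochs : array, shape (n_trials, n_samples), single-channel EEG epochs
    times  : array, shape (n_samples,), time axis in seconds (0 = stimulus onset)
    window : assumed N1-P2 latency range (illustrative values)
    """
    evoked = epochs.mean(axis=0)                         # average across trials
    mask = (times >= window[0]) & (times <= window[1])   # restrict to the window
    return evoked[mask].max() - evoked[mask].min()

def neural_snr_db(target_epochs, noise_epochs, times):
    """Neural SNR: evoked amplitude to target-word onset relative to noise onset."""
    a_target = evoked_peak_to_peak(target_epochs, times)
    a_noise = evoked_peak_to_peak(noise_epochs, times)
    return 20 * np.log10(a_target / a_noise)
```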
Despite the widespread use of noise reduction (NR) in modern digital hearing aids, our neurophysiological understanding of how NR affects speech-in-noise perception and why its effect is variable is limited. The current study aimed to (1) characterize the effect of NR on the neural processing of target speech and (2) seek neural determinants of individual differences in performance, hypothesizing that an individual's own capability to inhibit background noise would inversely predict NR benefits in speech-in-noise perception. Thirty-six adult listeners with normal...
Polyacrylonitrile (PAN) copolymers with different methyl acrylate (MA) contents were synthesized via solution polymerization and used as precursors for high-performance PAN ultrafine fibrids. The chemical structures of the copolymers were characterized using Fourier-transform infrared spectroscopy and ¹³C nuclear magnetic resonance spectroscopy. Their particle sizes and aspect ratios increased with increasing viscosity and degree of crystallinity and with decreasing concentration of the copolymer solution. In contrast, their...
Selective attention can be a useful tactic for speech-in-noise (SiN) interpretation as it strengthens cortical responses to attended sensory inputs while suppressing others. This process is referred to as attentional modulation. Our earlier study showed that a neurofeedback training paradigm was effective in improving the attentional modulation of auditory evoked responses. However, it is unclear how such modulation was improved. This paper attempts to unveil what neural mechanisms underlie strengthened selective attention during the training paradigm. EEG...
Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. There is variance in individuals' ability to understand SiN that cannot be explained by simple hearing profiles, which suggests that central factors may underlie this ability. Here, we elucidated a few cortical functions involved during SiN understanding and their contributions to individual differences, using both within- and across-subject approaches. Through our within-subject analysis of source-localized electroencephalography, we investigated how...
Perceptual benefits from digital noise reduction (NR) vary among individuals with different tolerance and sensitivity to distortions introduced in NR-processed speech; however, the physiological bases of this variance are understudied. Here, we developed objective measures of speech encoding in the ascending auditory pathway as candidate predictors of individual differences, using brainstem responses to the syllable /da/. The speech-evoked brainstem response was found to be sensitive to the addition of NR processing. The effects on the consonant and vowel portions were...
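As a minimal sketch of the kind of measure described above, encoding at the fundamental frequency could be quantified separately for the consonant-transition and vowel portions of a trial-averaged speech-evoked brainstem response. The F0 value and window boundaries below are illustrative assumptions, not the stimulus parameters used in the study.

```python
import numpy as np

def f0_magnitude(response, fs, t_start, t_end, f0=100.0):
    """Spectral magnitude at the fundamental frequency within a time window.

    response : 1-D array, trial-averaged speech-evoked brainstem response
    fs       : sampling rate in Hz
    t_start, t_end : window edges in seconds, relative to stimulus onset
    f0       : assumed fundamental frequency of the /da/ token (hypothetical value)
    """
    seg = response[int(t_start * fs):int(t_end * fs)]
    seg = seg - seg.mean()                                  # remove DC offset
    spectrum = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    freqs = np.fft.rfftfreq(len(seg), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]

# Compare encoding of the consonant transition vs. the vowel steady state
# (window boundaries below are illustrative, not the study's actual values):
# cv_ratio = f0_magnitude(avg_resp, fs, 0.01, 0.06) / f0_magnitude(avg_resp, fs, 0.06, 0.17)
```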
Purpose: This study aimed to compare objective speech recognition and subjective hearing handicap outcomes as a function of degree of hearing loss. Methods: 120 elderly listeners participated, ranging in age from 60 to 83 years. Listeners' degrees of hearing loss were derived corresponding to the newly proposed World Health Organization hearing impairment grading system. As outcomes, word and sentence recognition scores (WRS, SRS) in quiet were measured at an individually determined most comfortable level. The SRS in noise was obtained at a 0 dB signal-to-noise...
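For reference, degree of hearing loss under the WHO grading can be derived from the better-ear four-frequency pure-tone average. The sketch below assumes the grade boundaries proposed in the WHO World Report on Hearing (2021); it is not drawn from the study itself, so the cutoffs should be verified against the report before use.

```python
def who_hearing_grade(pta_db_hl: float) -> str:
    """Classify degree of hearing loss from the better-ear pure-tone average
    (0.5, 1, 2, 4 kHz), assuming the grade boundaries proposed in the
    WHO World Report on Hearing (2021)."""
    grades = [
        (20, "Normal hearing"),
        (35, "Mild hearing loss"),
        (50, "Moderate hearing loss"),
        (65, "Moderately severe hearing loss"),
        (80, "Severe hearing loss"),
        (95, "Profound hearing loss"),
    ]
    for upper_bound, label in grades:
        if pta_db_hl < upper_bound:
            return label
    return "Complete or total hearing loss"

# Example: a 42 dB HL pure-tone average would be graded as moderate hearing loss.
# print(who_hearing_grade(42.0))
```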
Selective attention enhances cortical responses to attended sensory inputs while suppressing others, which can be an effective strategy for speech-in-noise (SiN) understanding. Here, we introduce a training paradigm designed to reinforce the attentional modulation of auditory evoked responses. Subjects attended to one of two speech streams while our EEG-based decoder provided online feedback. After four weeks of this neurofeedback training, subjects exhibited enhanced responses to the target stream and improved performance...
Objectives: Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined predictors of speech-in-noise ability in a large cohort of cochlear-implant (CI) users, with the long-term goal...
Selective attention is a fundamental process for our communication in noisy everyday environments. Previous electrophysiologic studies have shown that selective attention modulates the neural representation of the auditory scene, enhancing responses to the target sound while suppressing those to the background. Given that most cochlear implant (CI) users complain about difficulty understanding speech within background noise, we investigated whether CI users' selective-attention performance and its underlying neural processes are 1) degraded compared...
Speech-in-noise (SiN) understanding involves multiple cortical processes including feature extraction, grouping, and selective attention. Among those processes, we aimed to investigate the causal relationship between auditory selective attention and SiN performance. Selective attention enhances the strength of neural responses to attended sounds while suppressing responses to ignored sounds, which forms evidence for the sensory gain control theory. We hypothesized that response-guided neurofeedback training could strengthen attentional gain control, which in turn will...
Noise reduction (NR) has been widely used in hearing aids (HAs) to increase the ease and comfort of listening and to reduce listening effort. However, NR attenuates noise at the potential cost of distorting speech cues. This makes it challenging for audiologists to select the best NR configuration during the HA fitting process. The long-term goal of our research is to optimize NR fitting by characterizing the neural mechanisms underlying the effect of NR. The purpose of the present study was to examine the effect of NR on cortical dynamics during speech-in-noise tasks in HA users using...
Understanding speech in background noise is a crucial function for communication. Despite the growing body of research on this topic, it is still unexplained how the neural processes of spoken-word recognition are affected by acoustic degradation of target speech. To address this question, we utilized high-density EEG measured simultaneously during a word identification task with varying levels of background noise. We hypothesize that, if noise degrades the speech sound, listeners will exhibit less immediate processing of acoustic information, waiting...
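One simple way to probe this "delayed processing" hypothesis is to compare evoked-response peak latencies across noise levels. The sketch below is a generic numpy illustration with an assumed search window; it is not the study's actual analysis pipeline.

```python
import numpy as np

def peak_latency(epochs, times, window=(0.2, 0.6)):
    """Latency (s) of the largest deflection of the trial-averaged ERP
    within a search window (window values are illustrative).

    epochs : array, shape (n_trials, n_samples), single-channel EEG epochs
    times  : array, shape (n_samples,), time axis in seconds (0 = word onset)
    """
    evoked = epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return times[mask][np.argmax(np.abs(evoked[mask]))]

# If listeners defer processing under degradation, peak latency should grow
# with noise level, e.g.:
# latencies = {snr: peak_latency(epochs_by_snr[snr], times) for snr in epochs_by_snr}
```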
We designed a neurofeedback training paradigm to enhance the attentional modulation of cortical auditory evoked responses. Two concurrent speech streams—a female voice repeating “Up” five times and a male voice repeating “Down” four times in three seconds—were played from left and right loudspeakers, respectively. Subjects were instructed to attend to one of those streams as indicated by a pre-stimulus visual cue (e.g., “Target: Up”). Attention was decoded from single-trial EEG signals. Subjects received real-time feedback (i.e., an object on the computer screen moves upward if...
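A single-trial attention decoder of the kind described above could, in its simplest form, correlate each trial's EEG with evoked-response templates for the two streams and drive the visual feedback from the decoded label. The sketch below is an illustrative stand-in, not the decoder used in the study.

```python
import numpy as np

def decode_attention(trial, template_up, template_down):
    """Classify the attended stream on a single trial by correlating the
    trial's EEG with evoked-response templates for each stream.

    trial, template_up, template_down : 1-D arrays of equal length
    Returns "Up" or "Down". A correlation-template classifier is an
    illustrative assumption, not the study's actual decoding method.
    """
    r_up = np.corrcoef(trial, template_up)[0, 1]
    r_down = np.corrcoef(trial, template_down)[0, 1]
    return "Up" if r_up > r_down else "Down"

# Feedback loop sketch: move the on-screen object toward the decoded stream.
# decoded = decode_attention(eeg_trial, tmpl_up, tmpl_down)
# feedback_direction = +1 if decoded == cued_target else -1
```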